Non-linear least squares is the form of least squares analysis used to fit a set of $m$ observations with a model that is non-linear in $n$ unknown parameters ($m \ge n$). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, and (v) Box–Cox transformed regressors ($m(x,\theta_i)=\theta_1+\theta_2 x^{(\theta_3)}$).

Consider a set of $m$ data points, $(x_1,y_1),(x_2,y_2),\dots,(x_m,y_m)$, and a curve (model function) $\hat{y}=f(x,\boldsymbol\beta)$ that, in addition to the variable $x$, also depends on $n$ parameters, $\boldsymbol\beta=(\beta_1,\beta_2,\dots,\beta_n)$, with $m\ge n$. It is desired to find the vector $\boldsymbol\beta$ of parameters such that the curve best fits the given data in the least-squares sense, that is, such that the sum of squares

$$S=\sum_{i=1}^{m} r_i^2$$

is minimized, where the residuals (in-sample prediction errors) $r_i$ are given by $r_i = y_i - f(x_i, \boldsymbol\beta)$ for $i=1,2,\dots,m$. The minimum value of $S$ occurs when the gradient is zero.
Since the model contains $n$ parameters, there are $n$ gradient equations:

$$\frac{\partial S}{\partial \beta_j}=2\sum_i r_i\frac{\partial r_i}{\partial \beta_j}=0 \quad (j=1,\ldots,n).$$

In a nonlinear system, the derivatives $\frac{\partial r_i}{\partial \beta_j}$ are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed-form solution. Instead, initial values must be chosen for the parameters. The parameters are then refined iteratively; that is, the values are obtained by successive approximation,

$$\beta_j \approx \beta_j^{k+1}=\beta_j^{k}+\Delta\beta_j.$$

Here $k$ is the iteration number and $\Delta\boldsymbol\beta$, the vector of increments, is known as the shift vector. At each iteration the model is linearized by a first-order Taylor polynomial expansion about $\boldsymbol\beta^k$:

$$f(x_i,\boldsymbol\beta)\approx f(x_i,\boldsymbol\beta^k)+\sum_j \frac{\partial f(x_i,\boldsymbol\beta^k)}{\partial \beta_j}\left(\beta_j-\beta_j^k\right)=f(x_i,\boldsymbol\beta^k)+\sum_j J_{ij}\,\Delta\beta_j.$$

The Jacobian matrix, $\mathbf{J}$, is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next.
Thus, in terms of the linearized model, $\frac{\partial r_i}{\partial \beta_j}=-J_{ij}$, and, writing $\Delta y_i = y_i - f(x_i,\boldsymbol\beta^k)$, the residuals are given by

$$r_i=y_i-f(x_i,\boldsymbol\beta)=\left(y_i-f(x_i,\boldsymbol\beta^k)\right)+\left(f(x_i,\boldsymbol\beta^k)-f(x_i,\boldsymbol\beta)\right)\approx \Delta y_i-\sum_{s=1}^{n} J_{is}\,\Delta\beta_s.$$

Substituting these expressions into the gradient equations, they become

$$-2\sum_{i=1}^{m} J_{ij}\left(\Delta y_i-\sum_{s=1}^{n} J_{is}\,\Delta\beta_s\right)=0,$$

which, on rearrangement, become $n$ simultaneous linear equations, the normal equations

$$\sum_{i=1}^{m}\sum_{s=1}^{n} J_{ij}J_{is}\,\Delta\beta_s=\sum_{i=1}^{m} J_{ij}\,\Delta y_i \qquad (j=1,\dots,n).$$

The normal equations are written in matrix notation as

$$\left(\mathbf{J}^{\mathsf T}\mathbf{J}\right)\Delta\boldsymbol\beta=\mathbf{J}^{\mathsf T}\,\Delta\mathbf{y}.$$

These equations form the basis of the Gauss–Newton algorithm for a non-linear least squares problem. Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives: formulas linear in $\mathbf{J}$ may appear with a factor of $-1$ in other articles or in the literature.
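The iteration described above (linearize, solve the normal equations for the shift vector, update) can be sketched in a few lines. The model f(x, β) = β₀·exp(β₁·x), the noise-free data, and the starting point below are illustrative assumptions, not taken from the text:

```python
import numpy as np

def gauss_newton(f, jac, x, y, beta, n_iter=20):
    """Minimize S = sum r_i^2, r_i = y_i - f(x_i, beta), by Gauss-Newton.

    Each step solves the linearized normal equations
    (J^T J) dbeta = J^T dy for the shift vector dbeta.
    """
    for _ in range(n_iter):
        dy = y - f(x, beta)            # residuals Delta y_i
        J = jac(x, beta)               # Jacobian J_ij = df(x_i)/dbeta_j
        dbeta = np.linalg.solve(J.T @ J, J.T @ dy)
        beta = beta + dbeta
    return beta

# Hypothetical two-parameter model: f(x, beta) = b0 * exp(b1 * x)
f = lambda x, b: b[0] * np.exp(b[1] * x)
jac = lambda x, b: np.column_stack([np.exp(b[1] * x),
                                    b[0] * x * np.exp(b[1] * x)])

x = np.linspace(0.0, 1.0, 20)
y = f(x, np.array([2.0, -1.5]))        # noise-free data, known parameters
beta = gauss_newton(f, jac, x, y, np.array([1.0, -1.0]))
```

On this noise-free problem the iteration recovers the generating parameters; with real data and a poor starting point, the divergence protections discussed later in the article become necessary.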
When the observations are not equally reliable, a weighted sum of squares may be minimized,

$$S=\sum_{i=1}^{m} W_{ii} r_i^2.$$

Each element of the diagonal weight matrix $\mathbf{W}$ should, ideally, be equal to the reciprocal of the error variance of the measurement.[1] The normal equations are then, more generally,

$$\left(\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}\right)\Delta\boldsymbol\beta=\mathbf{J}^{\mathsf T}\mathbf{W}\,\Delta\mathbf{y}.$$

In linear least squares the objective function, $S$, is a quadratic function of the parameters:

$$S=\sum_i W_{ii}\left(y_i-\sum_j X_{ij}\beta_j\right)^2.$$

When there is only one parameter, the graph of $S$ with respect to that parameter is a parabola. With two or more parameters, the contours of $S$ with respect to any pair of parameters are concentric ellipses (assuming that the normal equations matrix $\mathbf{X}^{\mathsf T}\mathbf{W}\mathbf{X}$ is positive definite), and the minimum parameter values are found at the centre of the ellipses. The geometry of the general objective function is that of an elliptic paraboloid. In NLLSQ the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model:

$$S\approx\sum_i W_{ii}\left(y_i-\sum_j J_{ij}\beta_j\right)^2.$$

The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about, as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters.
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation: both the observed and calculated data are displayed on a screen, and the parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this is a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can also be created using transformations or linearizations. Better still, evolutionary algorithms such as the stochastic funnel algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates. Hybrid algorithms that use randomization and elitism, followed by Newton methods, have been shown to be useful and computationally efficient.

Any method among the ones described below can be applied to find a solution. The common-sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However, this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is

$$\left|\frac{S^k-S^{k+1}}{S^k}\right|<0.0001.$$

The value 0.0001 is somewhat arbitrary and may need to be changed; in particular, it may need to be increased when experimental errors are large. An alternative criterion is

$$\left|\frac{\Delta\beta_j}{\beta_j}\right|<0.001,\qquad j=1,\dots,n.$$

Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters.
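The relative-decrease stopping rule above is one line of code; a minimal sketch (the tolerance 1e-4 matches the 0.0001 in the text, everything else is illustrative):

```python
def converged(S_prev, S_new, tol=1e-4):
    """Relative-decrease convergence test: |S^k - S^{k+1}| / S^k < tol."""
    return abs(S_prev - S_new) / S_prev < tol

# a drop from 10.0 to 9.9995 (relative change 5e-5) counts as converged,
# while a drop from 10.0 to 9.0 (relative change 0.1) does not
small_drop = converged(10.0, 9.9995)
large_drop = converged(10.0, 9.0)
```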
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. In that case, the numerical approximation

$$\frac{\partial f(x_i,\boldsymbol\beta)}{\partial \beta_j}\approx\frac{\delta f(x_i,\boldsymbol\beta)}{\delta\beta_j}$$

is obtained by calculating $f(x_i,\boldsymbol\beta)$ for $\beta_j$ and $\beta_j+\delta\beta_j$. The size of the increment $\delta\beta_j$ should be chosen so that the numerical derivative is not subject to approximation error, from being too large, or to round-off error, from being too small. Some information is given in the corresponding section on the Weighted least squares page.

Multiple minima can occur in a variety of circumstances. Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.

When multiple minima exist there is an important consequence: the objective function has a stationary point (e.g. a maximum or a saddle point) somewhere between two minima. The normal equations matrix is not positive definite at a stationary point of the objective function, because the gradient vanishes and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a stationary point will be ill-conditioned and should be avoided as a starting point.
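Loss of positive definiteness in the normal equations matrix is easy to detect numerically: a Cholesky factorization, the usual way of solving the normal equations, fails on an indefinite matrix. A minimal sketch (the 2×2 matrices are illustrative stand-ins for $\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}$):

```python
import numpy as np

def is_positive_definite(A):
    """Check a symmetric matrix for positive definiteness by attempting a
    Cholesky factorization, which fails near a stationary point of S."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

good = is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))  # eigenvalues 1, 3
bad = is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))   # eigenvalues 3, -1
```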
For example, when fitting a Lorentzian, the normal equations matrix is not positive definite when the half-width of the Lorentzian is zero.[2]

A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumptions in most iterative minimization algorithms. When a linear approximation is valid, the model can be used directly for inference with generalized least squares, where the equations of the Linear Template Fit[3] apply.

Another example of a linear approximation arises when the model is a simple exponential function,

$$f(x_i,\boldsymbol\beta)=\alpha e^{\beta x_i},$$

which can be transformed into a linear model by taking logarithms,

$$\log f(x_i,\boldsymbol\beta)=\log\alpha+\beta x_i.$$

Graphically this corresponds to working on a semi-log plot. The sum of squares becomes

$$S=\sum_i(\log y_i-\log\alpha-\beta x_i)^2.$$

This procedure should be avoided unless the errors are multiplicative and log-normally distributed, because it can give misleading results: whatever the experimental errors on $y$ might be, the errors on $\log y$ are different. Therefore, when the transformed sum of squares is minimized, different results will be obtained both for the parameter values and for their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates.
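The log-transform trick reduces the exponential fit to ordinary linear least squares on $(x_i,\log y_i)$. A minimal sketch with assumed, error-free data generated from $a=2$, $b=0.5$ (with noisy data the caveats above about transformed errors apply):

```python
import math

# Linearize f(x) = a * exp(b*x) by taking logs: log f = log a + b*x,
# then estimate log a and b by ordinary linear least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # exact data: a = 2, b = 0.5

n = len(xs)
ls = [math.log(y) for y in ys]               # work with log y
xbar = sum(xs) / n
lbar = sum(ls) / n
b = sum((x - xbar) * (l - lbar) for x, l in zip(xs, ls)) / \
    sum((x - xbar) ** 2 for x in xs)         # OLS slope on the log scale
a = math.exp(lbar - b * xbar)                # intercept, transformed back
```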
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters $V_\max$ and $K_m$:

$$v=\frac{V_\max[S]}{K_m+[S]}.$$

The Lineweaver–Burk plot

$$\frac{1}{v}=\frac{1}{V_\max}+\frac{K_m}{V_\max[S]}$$

of $\frac{1}{v}$ against $\frac{1}{[S]}$ is linear in the parameters $\frac{1}{V_\max}$ and $\frac{K_m}{V_\max}$, but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable $[S]$.

The normal equations

$$\left(\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}\right)\Delta\boldsymbol\beta=\left(\mathbf{J}^{\mathsf T}\mathbf{W}\right)\Delta\mathbf{y}$$

may be solved for $\Delta\boldsymbol\beta$ by Cholesky decomposition, as described in linear least squares. The parameters are updated iteratively,

$$\boldsymbol\beta^{k+1}=\boldsymbol\beta^{k}+\Delta\boldsymbol\beta,$$

where $k$ is the iteration number. While this method may be adequate for simple models, it will fail if divergence occurs; protection against divergence is therefore essential. If divergence occurs, a simple expedient is to reduce the length of the shift vector $\Delta\boldsymbol\beta$ by a fraction $f$:

$$\boldsymbol\beta^{k+1}=\boldsymbol\beta^{k}+f\,\Delta\boldsymbol\beta.$$

For example, the length of the shift vector may be successively halved until the new value of the objective function is less than its value at the last iteration. The fraction $f$ could be optimized by a line search.[4] However, as each trial value of $f$ requires the objective function to be re-calculated, it is not worth optimizing its value too stringently.
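The successive-halving safeguard described above can be sketched as follows; the one-parameter toy objective and the deliberately overshooting step are illustrative assumptions:

```python
def damped_update(S, beta, dbeta, max_halvings=20):
    """Shift-cutting: try beta + f*dbeta with f = 1, 1/2, 1/4, ...
    until the objective S decreases (a simple divergence guard)."""
    S0 = S(beta)
    f = 1.0
    for _ in range(max_halvings):
        trial = [b + f * d for b, d in zip(beta, dbeta)]
        if S(trial) < S0:
            return trial
        f /= 2.0
    return beta  # no improving fraction found; keep current parameters

# Toy objective S(beta) = (beta[0] - 1)^2, minimized at beta[0] = 1.
S = lambda b: (b[0] - 1.0) ** 2
new = damped_update(S, [0.0], [4.0])   # the full step overshoots to 3.0
```

Here the full step and the half step both fail to reduce $S$, so the quarter step ($f=0.25$) is taken, landing exactly on the minimum in this contrived case.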
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters $\boldsymbol\beta^k$.

If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective (that is, the fraction $f$ required to avoid divergence is very small), the direction must be changed. This can be achieved by using the Marquardt parameter.[5] In this method the normal equations are modified to

$$\left(\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}+\lambda\mathbf{I}\right)\Delta\boldsymbol\beta=\left(\mathbf{J}^{\mathsf T}\mathbf{W}\right)\Delta\mathbf{y},$$

where $\lambda$ is the Marquardt parameter and $\mathbf{I}$ is the identity matrix. Increasing the value of $\lambda$ changes both the direction and the length of the shift vector. The shift vector is rotated towards the direction of steepest descent: when

$$\lambda\mathbf{I}\gg\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J},\qquad \Delta\boldsymbol\beta\approx\frac{1}{\lambda}\mathbf{J}^{\mathsf T}\mathbf{W}\,\Delta\mathbf{y},$$

where $\mathbf{J}^{\mathsf T}\mathbf{W}\,\Delta\mathbf{y}$ is the steepest-descent vector. So, when $\lambda$ becomes very large, the shift vector becomes a small fraction of the steepest-descent vector. Various strategies have been proposed for determining the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently.
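A single Marquardt-modified step can be sketched as below, taking the weight matrix as the identity for simplicity; the small Jacobian and right-hand side are illustrative assumptions. With $\lambda=0$ the step is the plain Gauss–Newton shift; with a very large $\lambda$ it approaches $\frac{1}{\lambda}\mathbf{J}^{\mathsf T}\Delta\mathbf{y}$:

```python
import numpy as np

def lm_step(J, dy, lam):
    """One Marquardt step: solve (J^T J + lam*I) dbeta = J^T dy.
    (Unit weights assumed, i.e. W = I.)"""
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    return np.linalg.solve(A, J.T @ dy)

J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dy = np.array([1.0, 2.0, 3.0])
step_gn = lm_step(J, dy, 0.0)        # pure Gauss-Newton shift
step_sd = lm_step(J, dy, 1e6)        # nearly (1/lam) * J^T dy
```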
Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian.[6] A bound for this value is given by $1/\operatorname{tr}\left(\mathbf{J}^{\mathsf T}\mathbf{W}\mathbf{J}\right)^{-1}$, where $\operatorname{tr}$ is the trace function.[7]

The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written as

$$\mathbf{r}=\Delta\mathbf{y}-\mathbf{J}\,\Delta\boldsymbol\beta.$$

The Jacobian is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process:

$$\mathbf{J}=\mathbf{Q}\mathbf{R},$$

where $\mathbf{Q}$ is an orthogonal $m\times m$ matrix and $\mathbf{R}$ is an $m\times n$ matrix which is partitioned into an $n\times n$ block, $\mathbf{R}_n$, and an $(m-n)\times n$ zero block; $\mathbf{R}_n$ is upper triangular:

$$\mathbf{R}=\begin{bmatrix}\mathbf{R}_n\\\mathbf{0}\end{bmatrix}.$$

The residual vector is left-multiplied by $\mathbf{Q}^{\mathsf T}$.
$$\mathbf{Q}^{\mathsf T}\mathbf{r}=\mathbf{Q}^{\mathsf T}\,\Delta\mathbf{y}-\mathbf{R}\,\Delta\boldsymbol\beta=\begin{bmatrix}\left(\mathbf{Q}^{\mathsf T}\,\Delta\mathbf{y}-\mathbf{R}\,\Delta\boldsymbol\beta\right)_n\\\left(\mathbf{Q}^{\mathsf T}\,\Delta\mathbf{y}\right)_{m-n}\end{bmatrix}$$

This has no effect on the sum of squares, since $S=\mathbf{r}^{\mathsf T}\mathbf{Q}\mathbf{Q}^{\mathsf T}\mathbf{r}=\mathbf{r}^{\mathsf T}\mathbf{r}$ because $\mathbf{Q}$ is orthogonal. The minimum value of $S$ is attained when the upper block is zero. Therefore, the shift vector is found by solving

$$\mathbf{R}_n\,\Delta\boldsymbol\beta=\left(\mathbf{Q}^{\mathsf T}\,\Delta\mathbf{y}\right)_n.$$

These equations are easily solved, as $\mathbf{R}_n$ is upper triangular.

A variant of the method of orthogonal decomposition involves singular value decomposition, in which $\mathbf{R}$ is diagonalized by further orthogonal transformations:

$$\mathbf{J}=\mathbf{U}\boldsymbol\Sigma\mathbf{V}^{\mathsf T},$$

where $\mathbf{U}$ is orthogonal, $\boldsymbol\Sigma$ is a diagonal matrix of singular values and $\mathbf{V}$ is the orthogonal matrix of the eigenvectors of $\mathbf{J}^{\mathsf T}\mathbf{J}$, or equivalently the right singular vectors of $\mathbf{J}$. In this case the shift vector is given by

$$\Delta\boldsymbol\beta=\mathbf{V}\boldsymbol\Sigma^{-1}\left(\mathbf{U}^{\mathsf T}\,\Delta\mathbf{y}\right)_n.$$

The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares.
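The QR route can be sketched directly with numpy; the reduced factorization returned by `np.linalg.qr` gives $\mathbf{Q}$ as $m\times n$ and $\mathbf{R}$ as the $n\times n$ upper-triangular block $\mathbf{R}_n$, so the triangular system below is exactly the one derived above. The random Jacobian and the consistent right-hand side are illustrative assumptions:

```python
import numpy as np

# Solve the linearized problem min ||dy - J dbeta|| via QR: J = Q R,
# then R_n dbeta = (Q^T dy)_n, an upper-triangular system.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 2))        # m = 6 observations, n = 2 parameters
dbeta_true = np.array([0.7, -1.2])
dy = J @ dbeta_true                    # consistent right-hand side

Q, R = np.linalg.qr(J)                 # reduced QR: Q is 6x2, R is 2x2
dbeta = np.linalg.solve(R, Q.T @ dy)   # back-substitution on R_n
```

The same shift vector is obtained as from the normal equations, but without forming $\mathbf{J}^{\mathsf T}\mathbf{J}$, which roughly squares the condition number of the problem.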
The application of singular value decomposition is discussed in detail in Lawson and Hanson.[6]

There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems. Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and in gradient methods. More detailed descriptions of these, and other, methods are available in Numerical Recipes, together with computer code in various languages.
https://en.wikipedia.org/wiki/Nonlinear_least_squares
In statistics, simple linear regression (SLR) is a linear regression model with a single explanatory variable.[1][2][3][4][5] That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the $x$ and $y$ coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent-variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.

It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (the vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to the correlation between $y$ and $x$ corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass $(\bar x,\bar y)$ of the data points.

Consider the model function

$$y=\alpha+\beta x,$$

which describes a line with slope $\beta$ and $y$-intercept $\alpha$. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe $n$ data pairs and call them $\{(x_i,y_i),\ i=1,\dots,n\}$. We can describe the underlying relationship between $y_i$ and $x_i$ involving the error term $\varepsilon_i$ by

$$y_i=\alpha+\beta x_i+\varepsilon_i.$$

This relationship between the true (but unobserved) underlying parameters $\alpha$ and $\beta$ and the data points is called a linear regression model. The goal is to find estimated values $\widehat\alpha$ and $\widehat\beta$ for the parameters $\alpha$ and $\beta$ which would provide the "best" fit in some sense for the data points.
As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals $\widehat\varepsilon_i$ (differences between actual and predicted values of the dependent variable $y$; see also Errors and residuals), each of which is given, for any candidate parameter values $\alpha$ and $\beta$, by

$$\widehat\varepsilon_i=y_i-\alpha-\beta x_i.$$

In other words, $\widehat\alpha$ and $\widehat\beta$ solve the following minimization problem:

$$(\widehat\alpha,\widehat\beta)=\operatorname*{arg\,min}_{\alpha,\beta}Q(\alpha,\beta),$$

where the objective function $Q$ is

$$Q(\alpha,\beta)=\sum_{i=1}^{n}\widehat\varepsilon_i^{\,2}=\sum_{i=1}^{n}(y_i-\alpha-\beta x_i)^2.$$

By expanding to get a quadratic expression in $\alpha$ and $\beta$, we can derive the minimizing values of the function arguments, denoted $\widehat\alpha$ and $\widehat\beta$:[6]

$$\begin{aligned}\widehat\alpha&=\bar y-\widehat\beta\,\bar x,\\ \widehat\beta&=\frac{\sum_{i=1}^{n}(x_i-\bar x)(y_i-\bar y)}{\sum_{i=1}^{n}(x_i-\bar x)^2}=\frac{\sum_{i=1}^{n}\Delta x_i\,\Delta y_i}{\sum_{i=1}^{n}\Delta x_i^2}.\end{aligned}$$

Here we have introduced $\bar x$ and $\bar y$ as the averages of the $x_i$ and $y_i$, and the deviations $\Delta x_i=x_i-\bar x$ and $\Delta y_i=y_i-\bar y$.

The above equations are efficient to use if the means of the $x$ and $y$ variables ($\bar x$ and $\bar y$) are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded versions of the $\widehat\alpha$ and $\widehat\beta$ equations. These expanded equations may be derived from the more general polynomial regression equations[7][8] by defining the regression polynomial to be of order 1, as follows.
$$\begin{bmatrix}n&\sum_{i=1}^{n}x_i\\\sum_{i=1}^{n}x_i&\sum_{i=1}^{n}x_i^2\end{bmatrix}\begin{bmatrix}\widehat\alpha\\\widehat\beta\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^{n}y_i\\\sum_{i=1}^{n}y_ix_i\end{bmatrix}$$

The above system of linear equations may be solved directly, or stand-alone equations for $\widehat\alpha$ and $\widehat\beta$ may be derived by expanding the matrix equations above. The resulting equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof:[9][7]

$$\widehat\alpha=\frac{\sum_{i=1}^{n}y_i\sum_{i=1}^{n}x_i^2-\sum_{i=1}^{n}x_i\sum_{i=1}^{n}x_iy_i}{n\sum_{i=1}^{n}x_i^2-\left(\sum_{i=1}^{n}x_i\right)^2},\qquad \widehat\beta=\frac{n\sum_{i=1}^{n}x_iy_i-\sum_{i=1}^{n}x_i\sum_{i=1}^{n}y_i}{n\sum_{i=1}^{n}x_i^2-\left(\sum_{i=1}^{n}x_i\right)^2}.$$

The solution can be reformulated using elements of the covariance matrix:

$$\widehat\beta=\frac{s_{x,y}}{s_x^2}=r_{xy}\frac{s_y}{s_x},$$

where $s_{x,y}$ is the sample covariance of $x$ and $y$, $s_x$ and $s_y$ are the sample standard deviations, and $r_{xy}$ is the sample correlation coefficient. Substituting these expressions for $\widehat\alpha$ and $\widehat\beta$ into the original solution shows that $r_{xy}$ is the slope of the regression line of the standardized data points (and that this line passes through the origin). Since $-1\le r_{xy}\le 1$, if $x$ is some measurement and $y$ is a follow-up measurement from the same item, then we expect that $y$ (on average) will be closer to the mean measurement than $x$ was. This phenomenon is known as regression toward the mean.
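The deviation form and the covariance/correlation form of the slope are algebraically identical, which is easy to check numerically; the four data points below are an illustrative assumption:

```python
import math

# Slope of the OLS line, computed two equivalent ways:
#   beta_hat = s_xy / s_x^2  and  beta_hat = r_xy * s_y / s_x,
# plus the intercept alpha_hat = ybar - beta_hat * xbar.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 2.0, 5.0]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
s_xy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
s_x2 = sum((x - xbar) ** 2 for x in xs) / n
s_y2 = sum((y - ybar) ** 2 for y in ys) / n
r_xy = s_xy / math.sqrt(s_x2 * s_y2)

beta_cov = s_xy / s_x2                              # covariance form
beta_corr = r_xy * math.sqrt(s_y2) / math.sqrt(s_x2)  # correlation form
alpha_hat = ybar - beta_cov * xbar
```

Note that because both forms divide by the same $n$ (or $n-1$), the choice of biased versus unbiased normalization cancels out of the slope.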
Generalizing the $\bar x$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples; for example, $\overline{xy}$ denotes the average of the products $x_iy_i$. This notation allows a concise formula for $r_{xy}$:

$$r_{xy}=\frac{\overline{xy}-\bar x\,\bar y}{\sqrt{\left(\overline{x^2}-\bar x^2\right)\left(\overline{y^2}-\bar y^2\right)}}.$$

The coefficient of determination ("R squared") is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.

By multiplying all members of the summation in the numerator by $\frac{(x_i-\bar x)}{(x_i-\bar x)}=1$ (thereby not changing it), we can see that the slope (tangent of angle) of the regression line is the weighted average of $\frac{(y_i-\bar y)}{(x_i-\bar x)}$, the slope (tangent of angle) of the line that connects the $i$-th point to the average of all points, weighted by $(x_i-\bar x)^2$. The further a point lies from $\bar x$, the more "important" it is, since small errors in its position affect the slope of the line connecting it to the centre point less. Given $\widehat\beta=\tan(\theta)=dy/dx$, with $\theta$ the angle the line makes with the positive $x$ axis, we have $y_{\text{intersection}}=\bar y-dx\times\widehat\beta=\bar y-dy$.

In the above formulation, notice that each $x_i$ is a constant ("known upfront") value, while the $y_i$ are random variables that depend on the linear function of $x_i$ and the random term $\varepsilon_i$. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.
In this framing, when $x_i$ is not actually a random variable, what type of parameter does the empirical correlation $r_{xy}$ estimate? The issue is that for each value $i$ we have $E(x_i)=x_i$ and $\operatorname{Var}(x_i)=0$. A possible interpretation of $r_{xy}$ is to imagine that $x_i$ defines a random variable drawn from the empirical distribution of the $x$ values in our sample. For example, if $x$ had 10 values from the natural numbers $[1,2,3,\dots,10]$, then we can imagine $x$ to be a discrete uniform distribution. Under this interpretation all $x_i$ have the same expectation and some positive variance. With this interpretation we can think of $r_{xy}$ as the estimator of the Pearson correlation between the random variable $y$ and the random variable $x$ (as we just defined it).

Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

The estimators $\widehat\alpha$ and $\widehat\beta$ are unbiased. To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals $\varepsilon_i$ as random variables drawn independently from some distribution with mean zero. In other words, for each value of $x$, the corresponding value of $y$ is generated as a mean response $\alpha+\beta x$ plus an additional random variable $\varepsilon$, called the error term, equal to zero on average.
Under such an interpretation, the least-squares estimators $\widehat\alpha$ and $\widehat\beta$ will themselves be random variables whose means equal the "true values" $\alpha$ and $\beta$. This is the definition of an unbiased estimator.

Since the data in this context are defined to be $(x,y)$ pairs for every observation, the mean response at a given value of $x$, say $x_d$, is an estimate of the mean of the $y$ values in the population at the $x$ value of $x_d$, that is $\hat E(y\mid x_d)\equiv\hat y_d$. The variance of the mean response is given by[11]

$$\operatorname{Var}\left(\hat\alpha+\hat\beta x_d\right)=\sigma^2\left(\frac{1}{m}+\frac{(x_d-\bar x)^2}{\sum_{i=1}^{m}(x_i-\bar x)^2}\right),$$

where $m$ is the number of data points. To demonstrate this simplification, one can make use of the identity $\sum_{i=1}^{m}(x_i-\bar x)^2=\sum_{i=1}^{m}x_i^2-m\bar x^2$.

The predicted response distribution is the predicted distribution of the residuals at the given point $x_d$, so the variance is given by

$$\operatorname{Var}\left(y_d-\left[\hat\alpha+\hat\beta x_d\right]\right)=\operatorname{Var}(y_d)+\operatorname{Var}\left(\hat\alpha+\hat\beta x_d\right).$$

This follows from the fact that $\operatorname{Cov}\left(y_d,\left[\hat\alpha+\hat\beta x_d\right]\right)$ is zero, because the new prediction point is independent of the data used to fit the model. Additionally, the term $\operatorname{Var}\left(\hat\alpha+\hat\beta x_d\right)$ was calculated earlier for the mean response. Since $\operatorname{Var}(y_d)=\sigma^2$ (a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by

$$\operatorname{Var}\left(y_d-\left[\hat\alpha+\hat\beta x_d\right]\right)=\sigma^2\left(1+\frac{1}{m}+\frac{(x_d-\bar x)^2}{\sum_{i=1}^{m}(x_i-\bar x)^2}\right).$$

The formulas given in the previous section allow one to calculate the point estimates of $\alpha$ and $\beta$, that is, the coefficients of the regression line for the given set of data.
However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators $\widehat\alpha$ and $\widehat\beta$ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.

The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either the errors are normally distributed, or the number of observations is sufficiently large. The latter case is justified by the central limit theorem.

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean $\beta$ and variance $\sigma^2/\sum(x_i-\bar x)^2$, where $\sigma^2$ is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals $Q$ is distributed proportionally to $\chi^2$ with $n-2$ degrees of freedom, and independently from $\widehat\beta$. This allows us to construct a $t$-value

$$t=\frac{\widehat\beta-\beta}{s_{\widehat\beta}},$$

where

$$s_{\widehat\beta}=\sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^{n}\widehat\varepsilon_i^{\,2}}{\sum_{i=1}^{n}(x_i-\bar x)^2}}$$

is the unbiased standard-error estimator of the estimator $\widehat\beta$. This $t$-value has a Student's $t$-distribution with $n-2$ degrees of freedom. Using it we can construct a confidence interval for $\beta$:

$$\beta\in\left[\widehat\beta-s_{\widehat\beta}t_{n-2}^{*},\ \widehat\beta+s_{\widehat\beta}t_{n-2}^{*}\right]$$

at confidence level $(1-\gamma)$, where $t_{n-2}^{*}$ is the $\left(1-\frac{\gamma}{2}\right)$-th quantile of the $t_{n-2}$ distribution. For example, if $\gamma=0.05$ then the confidence level is 95%.

Similarly, the confidence interval for the intercept coefficient $\alpha$ is given by

$$\alpha\in\left[\widehat\alpha-s_{\widehat\alpha}t_{n-2}^{*},\ \widehat\alpha+s_{\widehat\alpha}t_{n-2}^{*}\right]$$

at confidence level $(1-\gamma)$, where $s_{\widehat\alpha}=s_{\widehat\beta}\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}$. The confidence intervals for $\alpha$ and $\beta$ give us the general idea where these regression coefficients are most likely to be.
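The slope interval can be computed by hand from the sums above. A minimal sketch with assumed data; the constant 2.1604... style quantile is replaced here by the 0.975 quantile of $t_3$ (five points, so $n-2=3$ degrees of freedom), hard-coded to avoid a statistics dependency:

```python
# 95% confidence interval for the slope under the normality assumption:
# beta_hat +/- t*_{n-2} * s_beta, with s_beta^2 = (Q/(n-2)) / sum (x-xbar)^2.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.1, 5.9, 8.1, 9.8]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
beta_hat = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
alpha_hat = ybar - beta_hat * xbar
Q = sum((y - alpha_hat - beta_hat * x) ** 2 for x, y in zip(xs, ys))
s_beta = (Q / (n - 2) / sxx) ** 0.5
t_crit = 3.1824          # 0.975 quantile of Student's t with 3 d.o.f.
ci = (beta_hat - t_crit * s_beta, beta_hat + t_crit * s_beta)
```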
For example, in the Okun's law regression shown here, the point estimates and their 95% confidence intervals are computed with exactly these formulas. In order to represent this information graphically, in the form of confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown[12] that at confidence level (1 − γ) the confidence band has hyperbolic form given by the equation

\hat y|_{x=\xi} \in \left[{\hat\alpha}+{\hat\beta}\xi\ \pm\ t^*_{n-2}\sqrt{\left(\tfrac{1}{n-2}\sum {\hat\varepsilon}_i^{\,2}\right)\left(\tfrac{1}{n}+\tfrac{(\xi-\bar x)^2}{\sum (x_i-\bar x)^2}\right)}\right].

When the model assumes the intercept is fixed and equal to 0 (α = 0), the standard error of the slope turns into:

s_{\hat\beta} = \sqrt{\frac{\frac{1}{n-1}\sum {\hat\varepsilon}_i^{\,2}}{\sum x_i^2}},

with ε̂_i = y_i − ŷ_i. The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t*_{n−2} of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n−2) is replaced with 1/n. When n is large such a change does not alter the results appreciably. This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead. There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:

S_x = \sum x_i,\quad S_y = \sum y_i,\quad S_{xx} = \sum x_i^2,\quad S_{yy} = \sum y_i^2,\quad S_{xy} = \sum x_i y_i.

These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
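The hyperbolic shape of the confidence band can be seen by evaluating its half-width over a grid of x values. A sketch with hypothetical data (the t quantile for 4 degrees of freedom is taken from tables):

```python
import numpy as np

# Hypothetical data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 2.9, 4.1, 5.8, 7.2, 8.9])
n = len(x)

beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()
s2 = np.sum((y - alpha - beta * x) ** 2) / (n - 2)

t_star = 2.7764  # 0.975 quantile of t with n - 2 = 4 df (from tables)
grid = np.linspace(0.0, 7.0, 71)
# Half-width of the confidence band at each grid point.
half_width = t_star * np.sqrt(s2 * (1 / n + (grid - x.mean()) ** 2
                                    / np.sum((x - x.mean()) ** 2)))
```

The band is narrowest at x̄ and widens hyperbolically as x moves away from the center of mass of the data.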
The 0.975 quantile of Student's t-distribution with 13 degrees of freedom is t*₁₃ = 2.1604, and thus the 95% confidence intervals for α and β follow from the point estimates and their standard errors. The product-moment correlation coefficient might also be calculated:

r = \frac{\sum (x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum (x_i-\bar x)^2\,\sum (y_i-\bar y)^2}}.

In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship, because it will be biased due to regression dilution. Other estimation methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil–Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median-slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. Because least squares minimizes squared residuals, a few outlying observations can lead to a model that attempts to fit the outliers more than the data. Line fitting is the process of constructing a straight line that has the best fit to a series of data points. Several methods exist, differing in how the distance between the line and the data points is measured (for example, vertical versus perpendicular offsets) and in their robustness to outliers. Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = βx, the OLS estimator for β simplifies to

{\hat\beta} = \frac{\sum x_i y_i}{\sum x_i^2}.

Substituting (x − h, y − k) in place of (x, y) gives the regression through (h, k):

{\hat\beta} = \frac{\sum (x_i-h)(y_i-k)}{\sum (x_i-h)^2} = \frac{\operatorname{Cov}(x,y) + (\bar x-h)(\bar y-k)}{\operatorname{Var}(x) + (\bar x-h)^2},

where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
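The no-intercept estimator and the regression forced through an arbitrary point (h, k) are one-liners in numpy; a sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.2, 3.9, 6.1, 7.8])

# No-intercept model y = beta * x: the OLS estimator simplifies to
beta0 = np.sum(x * y) / np.sum(x * x)

# Regression forced through an arbitrary point (h, k): substitute shifted data.
h, k = 1.0, 2.0
beta_hk = np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)
```

Expanding the shifted sums around the sample means recovers the covariance/variance form quoted above, with Cov and Var uncorrected for bias.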
https://en.wikipedia.org/wiki/Simple_linear_regression
Partial least squares (PLS) regressionis astatisticalmethod that bears some relation toprincipal components regressionand is areduced rank regression;[1]instead of findinghyperplanesof maximumvariancebetween the response and independent variables, it finds alinear regressionmodel by projecting thepredicted variablesand theobservable variablesto a new space of maximum covariance (see below). Because both theXandYdata are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when theYis categorical. PLS is used to find the fundamental relations between twomatrices(XandY), i.e. alatent variableapproach to modeling thecovariancestructures in these two spaces. A PLS model will try to find the multidimensional direction in theXspace that explains the maximum multidimensional variance direction in theYspace. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there ismulticollinearityamongXvalues. By contrast, standard regression will fail in these cases (unless it isregularized). Partial least squares was introduced by the Swedish statisticianHerman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS isprojection to latent structures,[2][3]but the termpartial least squaresis still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used inchemometricsand related areas. It is also used inbioinformatics,sensometrics,neuroscience, andanthropology. We are given a sample ofn{\displaystyle n}pairedobservations(x→i,y→i),i∈1,…,n{\displaystyle ({\vec {x}}_{i},{\vec {y}}_{i}),i\in {1,\ldots ,n}}. 
In the first step j = 1, the partial least squares regression searches for the normalized directions p⃗_j, q⃗_j that maximize the covariance[4]

\max_{\|\vec p\|=\|\vec q\|=1}\ \operatorname{cov}\left(X\vec p,\ Y\vec q\right)

(the covariance of the projections of the centered data onto these directions). Note that below, the algorithm is denoted in matrix notation. The general underlying model of multivariate PLS with ℓ components is

X = T P^{\mathrm T} + E, \qquad Y = U Q^{\mathrm T} + F,

where X is an n × m matrix of predictors, Y is an n × p matrix of responses, T and U are n × ℓ matrices of scores (projections of X and Y, respectively, onto the new space), P (m × ℓ) and Q (p × ℓ) are loading matrices, and E and F are error terms. The decompositions of X and Y are made so as to maximise the covariance between T and U. Note that this covariance is defined pair by pair: the covariance of column i of T (length n) with column i of U (length n) is maximized. Additionally, the covariance of column i of T with column j of U (with i ≠ j) is zero. In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference from PCA, where orthogonality is imposed on the loadings (and not the scores). A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as Y = X B̃ + B̃₀. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not.[5][6][7][8][9][10] The final prediction will be the same for all these varieties of PLS, but the components will differ. PLS is composed of iteratively repeating the following steps k times (for k components): finding a weight direction of maximal covariance with the response, computing the corresponding scores and loadings, and deflating X before extracting the next component. PLS1 is a widely used algorithm appropriate for the vector Y case. It estimates T as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted).
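The pseudocode referenced here did not survive extraction. As a stand-in, here is a minimal NIPALS-style PLS1 sketch in numpy for the vector-y case, assuming centered data and deflation of X only (variable names are illustrative, not the article's):

```python
import numpy as np

def pls1(X, y, n_components):
    """NIPALS-style PLS1 sketch: vector response y, X deflated each step."""
    X = X - X.mean(axis=0)  # explicit centering (some variants do it implicitly)
    y = y - y.mean()
    Xk = X.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ y                 # weight: direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                   # score vector
        P.append(Xk.T @ t / (t @ t)) # X loading
        q.append(y @ t / (t @ t))    # y loading (scalar); y is not deflated
        Xk = Xk - np.outer(t, P[-1]) # deflate X
        W.append(w)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients expressed in terms of the original (centered) X.
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
b_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ b_true                       # noiseless response for illustration
B = pls1(X, y, n_components=5)
```

With the number of components equal to the rank of X, the coefficients coincide with the least-squares estimates, matching the remark about the limit case in the surrounding text.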
This form of the algorithm does not require centering of the inputXandY, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrixX(subtraction oftkt(k)p(k)T{\displaystyle t_{k}t^{(k)}{p^{(k)}}^{\mathrm {T} }}), but deflation of the vectoryis not performed, as it is not necessary (it can be proved that deflatingyyields the same results as not deflating[11]). The user-supplied variablelis the limit on the number of latent factors in the regression; if it equals the rank of the matrixX, the algorithm will yield the least squares regression estimates forBandB0{\displaystyle B_{0}} In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models.[12]Similarly, OPLS-DA (Discriminant Analysis) may be applied when working with discrete variables, as in classification and biomarker studies. The general underlying model of OPLS is or in O2-PLS[13] Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects 3 related data blocks to improve predictability.[14]In brief, a newZmatrix, with the same number of columns as theXmatrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables. In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF).[15]Supposing the number of observations and variables are large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. 
In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth.[16] A PLS version based onsingular value decomposition (SVD)provides a memory efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware.[17] PLS correlation (PLSC) is another methodology related to PLS regression,[18]which has been used in neuroimaging[18][19][20]and sport science,[21]to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then usessingular value decomposition (SVD)to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups.[22]It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration.[22][18]
https://en.wikipedia.org/wiki/Partial_least_squares_regression
In mathematics, the term linear function refers to two distinct but related notions:[1] a polynomial function of degree one or less (in calculus and related areas), and a linear map (in linear algebra). In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less, including the zero polynomial (the latter not being considered to have degree zero). When the function is of only one variable, it is of the form

f(x) = ax + b,

where a and b are constants, often real numbers. The graph of such a function of one variable is a nonvertical line. a is frequently referred to as the slope of the line, and b as the intercept. If a > 0 then the gradient is positive and the graph slopes upwards. If a < 0 then the gradient is negative and the graph slopes downwards. For a function f(x₁, …, x_k) of any finite number of variables, the general formula is

f(x_1, \ldots, x_k) = b + a_1 x_1 + \cdots + a_k x_k,

and the graph is a hyperplane of dimension k. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial. Its graph, when there is only one variable, is a horizontal line. In this context, a function that is also a linear map (the other meaning) may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, the polynomial functions of degree 0 or 1 are the scalar-valued affine maps. In linear algebra, a linear function is a map f between two vector spaces such that

f(\mathbf x + \mathbf y) = f(\mathbf x) + f(\mathbf y), \qquad f(a\mathbf x) = a f(\mathbf x).

Here a denotes a constant belonging to some field K of scalars (for example, the real numbers) and x and y are elements of a vector space, which might be K itself. In other terms, the linear function preserves vector addition and scalar multiplication. Some authors use "linear function" only for linear maps that take values in the scalar field;[6] these are more commonly called linear forms. The "linear functions" of calculus qualify as "linear maps" when (and only when) f(0, ..., 0) = 0, or, equivalently, when the constant b equals zero in the degree-one polynomial above. Geometrically, the graph of the function must pass through the origin.
https://en.wikipedia.org/wiki/Linear_function
A document-term matrix is a mathematical matrix that describes the frequency of terms that occur in each document in a collection. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of a document-feature matrix where "features" may refer to other properties of a document besides terms.[1] It is also common to encounter the transpose, or term-document matrix, where documents are the columns and terms are the rows. They are useful in the field of natural language processing and computational text analysis.[2] While the value of the cells is commonly the raw count of a given term, there are various schemes for weighting the raw counts, such as row normalizing (i.e. relative frequency/proportions) and tf-idf. Terms are commonly single words separated by whitespace or punctuation on either side (a.k.a. unigrams). In such a case, this is also referred to as a "bag of words" representation, because the counts of individual words are retained but not the order of the words in the document. When creating a data-set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each ij cell, then, is the number of times word j occurs in document i. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. Given two short documents, the document-term matrix shows which documents contain which terms and how many times they appear. Note that, unlike representing a document as just a token-count list, the document-term matrix includes all terms in the corpus (i.e. the corpus vocabulary), which is why there are zero counts for terms in the corpus which do not also occur in a specific document. For this reason, document-term matrices are usually stored in a sparse matrix format.
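The construction can be sketched in a few lines. The article's own two-document example did not survive extraction, so the documents below are stand-ins:

```python
from collections import Counter

# Two hypothetical short documents.
docs = ["I like databases", "I dislike databases"]

tokens = [d.lower().split() for d in docs]
vocab = sorted(set(w for t in tokens for w in t))
# Row i, column j: count of term j in document i (zero if absent).
dtm = [[Counter(t)[w] for w in vocab] for t in tokens]
```

Every document row has a column for every vocabulary term, which is why the matrix contains zeros for terms absent from a given document and is usually stored sparsely.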
As a result of the power-law distribution of tokens in nearly every corpus (seeZipf's law), it is common to weight the counts. This can be as simple as dividing counts by the total number of tokens in a document (called relative frequency or proportions), dividing by the maximum frequency in each document (called prop max), or taking the log of frequencies (called log count). If one desires to weight the words most unique to an individual document as compared to the corpus as a whole, it is common to usetf-idf, which divides the term frequency by the term's document frequency. The document-term matrix emerged in the earliest years of the computerization of text. The increasing capacity for storing documents created the problem of retrieving a given document in an efficient manner. While previously the work of classifying and indexing was accomplished by hand, researchers explored the possibility of doing this automatically using word frequency information. One of the first published document-term matrices was inHarold Borko's 1962 article "The construction of an empirically based mathematically derived classification system" (page 282, see also his 1965 article[3]). Borko references two computer programs, "FEAT" which stood for "Frequency of Every Allowable Term," written by John C. Olney of the System Development Corporation and the Descriptor Word Index Program, written byEileen Stonealso of the System Development Corporation: Having selected the documents which were to make up the experimental library, the next step consisted of keypunching the entire body of text preparatory to computer processing.  The program used for this analysis was FEAT (Frequency of Every Allowable Term).  it was written by John C. Olney of the System Development Corporation and is designed to perform frequency and summary counts of individual words and of word pairs.  
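The weighting schemes named above (relative frequency, prop max, log counts, tf-idf) can be sketched directly on a small count matrix with hypothetical values:

```python
import math

# dtm rows = documents, columns = terms (hypothetical counts).
dtm = [[3, 0, 1],
       [1, 2, 0]]
n_docs = len(dtm)

rel_freq = [[c / sum(row) for c in row] for row in dtm]      # proportions
prop_max = [[c / max(row) for c in row] for row in dtm]      # divide by row max
log_count = [[math.log(1 + c) for c in row] for row in dtm]  # log of frequencies

# tf-idf: term frequency damped by the term's document frequency.
df = [sum(1 for row in dtm if row[j] > 0) for j in range(len(dtm[0]))]
tf_idf = [[c * math.log(n_docs / df[j]) for j, c in enumerate(row)]
          for row in dtm]
```

A term occurring in every document gets idf = log(1) = 0 and is zeroed out, while terms concentrated in one document are weighted up, which is exactly the "most unique to an individual document" behaviour described above.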
The  output of this program is an alphabetical listing, by frequency of occurrence, of all word types which appeared in the text.  Certain function words such as and, the,  at, a, etc., were placed in a "forbidden word list" table, and the frequency of these words was recorded  in a separate listing... A special computer program, called the Descriptor Word Index Program, was written to provide this information and to prepare a document-term matrix in a form suitable for in-put to the Factor Analysis Program. The Descriptor Word Index program was prepared by Eileen Stone of the System Development Corporation.[4] Shortly thereafter,Gerard Saltonpublished "Some hierarchical models for automatic document retrieval" in 1963 which also included a visual depiction of a document-term matrix.[5]Salton was at Harvard University at the time and his work was supported by the Air Force Cambridge Research Laboratories and Sylvania Electric Products, Inc. In this paper, Salton introduces the document-term matrix by comparison to a kind of term-context matrix used to measure similarities between words: If it is desired to generate document associations or document clusters instead of word associations, the same procedures can be used with slight modifications. Instead of starting with a word-sentence matrixC,... it is now convenient to construct a word-document matrixF,listing frequency of occurrence of word Wiin Document Dj... Document similarities can now be computed as before by comparing pairs of rows and by obtaining similarity coefficients based on the frequency of co-occurrences of the content words included in the given document. This procedure produces a document-document similarity matrix which can in turn be used for the generation of document clusters...[5] In addition to Borko and Salton, in 1964, F.W. Lancaster published a comprehensive review of automated indexing and retrieval. 
While the work was published while he worked at the Herner and Company in Washington D.C., the paper was written while he was "employed in research work at Aslib, on the Aslib Cranfield Project."[6]Lancaster credits Borko with the document-term matrix: Harold Borko, of the System Development Corporation, has carried this operation a little further. A significant group of clue words is chosen from the vocabulary of an experimental collection. These are arranged in a document/term matrix to show the frequency of occurrence of each term in each document.... A correlation coefficient for each word pair is then computed, based on their co-occurrence in the document set. The resulting term/term matrix... is then factor analysed and a series of factors are isolated. These factors, when interpreted and named on the basis of the terms with high loadings which appear in each of the factors, become the classes of an empirical classification. The terms with high loadings in each factor are the clue words or predictors of the categories. A point of view on the matrix is that each row represents a document. In thevectorial semantic model, which is normally the one used to compute a document-term matrix, the goal is to represent the topic of a document by the frequency of semantically significant terms. The terms are semantic units of the documents. It is often assumed, forIndo-European languages, that nouns, verbs and adjectives are the more significantcategories, and that words from those categories should be kept as terms. Addingcollocationas terms improves the quality of the vectors, especially when computing similarities between documents. Latent semantic analysis(LSA, performingsingular-value decompositionon the document-term matrix) can improve search results bydisambiguatingpolysemous wordsand searching forsynonymsof the query. However, searching in the high-dimensional continuous space is much slower than searching the standardtriedata structure of search engines. 
Multivariate analysisof the document-term matrix can reveal topics/themes of the corpus. Specifically,latent semantic analysisanddata clusteringcan be used, and, more recently,probabilistic latent semantic analysiswith its generalizationLatent Dirichlet allocation, andnon-negative matrix factorization, have been found to perform well for this task.
https://en.wikipedia.org/wiki/Document-term_matrix
Alanguage modelis amodelof natural language.[1]Language models are useful for a variety of tasks, includingspeech recognition,[2]machine translation,[3]natural language generation(generating more human-like text),optical character recognition,route optimization,[4]handwriting recognition,[5]grammar induction,[6]andinformation retrieval.[7][8] Large language models(LLMs), currently their most advanced form, are predominantly based ontransformerstrained on larger datasets (frequently using wordsscrapedfrom the publicinternet). They have supersededrecurrent neural network-based models, which had previously superseded the purely statistical models, such aswordn-gram language model. Noam Chomskydid pioneering work on language models in the 1950s by developing a theory offormal grammars.[9] In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations likewordn-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such asword embeddings, began to replace discrete representations.[10]Typically, the representation is areal-valuedvector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning, and common relationships between pairs of words like plurality or gender. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.[11] Awordn-gram language modelis a purely statistical model of language. 
It has been superseded byrecurrent neural network–based models, which have been superseded bylarge language models.[12]It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word is considered, it is called a bigram model; if two words, a trigram model; ifn− 1 words, ann-gram model.[13]Special tokens are introduced to denote the start and end of a sentence⟨s⟩{\displaystyle \langle s\rangle }and⟨/s⟩{\displaystyle \langle /s\rangle }. Maximum entropylanguage models encode the relationship between a word and then-gram history using feature functions. The equation is P(wm∣w1,…,wm−1)=1Z(w1,…,wm−1)exp⁡(aTf(w1,…,wm)){\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))} whereZ(w1,…,wm−1){\displaystyle Z(w_{1},\ldots ,w_{m-1})}is thepartition function,a{\displaystyle a}is the parameter vector, andf(w1,…,wm){\displaystyle f(w_{1},\ldots ,w_{m})}is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certainn-gram. It is helpful to use a prior ona{\displaystyle a}or some form ofregularization. The log-bilinear model is another example of an exponential language model. Skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. wordn-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that areskippedover (thus the name "skip-gram").[14] Formally, ak-skip-n-gram is a length-nsubsequence where the components occur at distance at mostkfrom each other. For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In skip-gram model, semantic relations between words are represented bylinear combinations, capturing a form ofcompositionality. 
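The k-skip-n-gram definition above can be made concrete with a small generator. This sketch uses the per-gap reading of "distance at most k" (at most k tokens skipped between consecutive components), with a hypothetical input sentence:

```python
from itertools import combinations

def skip_grams(tokens, n, k):
    """All k-skip-n-grams: length-n subsequences whose consecutive
    components are at most k+1 positions apart in the original sequence."""
    grams = set()
    for idx in combinations(range(len(tokens)), n):
        if all(b - a <= k + 1 for a, b in zip(idx, idx[1:])):
            grams.add(tuple(tokens[i] for i in idx))
    return grams

sent = "the rain in Spain".split()
# 1-skip-2-grams: every ordinary bigram plus the one-gap pairs.
out = skip_grams(sent, n=2, k=1)
```

For this four-token input the result contains the three ordinary bigrams plus the two pairs that skip one token, illustrating how skip-grams mitigate data sparsity by covering non-consecutive co-occurrences.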
For example, in some such models, ifvis the function that maps a wordwto itsn-d vector representation, then v(king)−v(male)+v(female)≈v(queen){\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} Continuous representations orembeddings of wordsare produced inrecurrent neural network-based language models (known also ascontinuous space language models).[17]Such continuous space embeddings help to alleviate thecurse of dimensionality, which is the consequence of the number of possible sequences of words increasingexponentiallywith the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[18] Alarge language model(LLM) is a type ofmachine learningmodeldesigned fornatural language processingtasks such as languagegeneration. LLMs are language models with many parameters, and are trained withself-supervised learningon a vast amount of text. Although sometimes matching human performance, it is not clear whether they are plausiblecognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.[22] Evaluation of the quality of language models is mostly done by comparison to human created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.[23] Various data sets have been developed for use in evaluating language processing systems.[24]These include:
https://en.wikipedia.org/wiki/Language_model#Neural_network
Thought vector is a term popularized by Geoffrey Hinton, the prominent deep-learning researcher, for representing thoughts and sentences as vectors derived from natural language,[1] an approach that has been applied to improving search results.[2]
https://en.wikipedia.org/wiki/Thought_vector
GloVe, coined from Global Vectors, is a model for distributed word representation. The model is anunsupervised learningalgorithm for obtainingvector representationsfor words. This is achieved by mapping words into a meaningful space where the distance between words is related to semantic similarity.[1]Training is performed on aggregated global word-wordco-occurrencestatisticsfrom a corpus, and the resulting representations showcase interesting linear substructures of theword vector space. As log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely the global matrix factorization and local context window methods. It is developed as anopen-sourceproject atStanford[2]and was launched in 2014. It was designed as a competitor toword2vec, and the original paper noted multiple improvements of GloVe over word2vec. As of 2022[update], both approaches are outdated, andTransformer-based models, such asBERT, which add multiple neural-network attention layers on top of a word embedding model similar to Word2vec, have come to be regarded as the state of the art in NLP.[3] You shall know a word by the company it keeps (Firth, J. R. 1957:11)[4] The idea of GloVe is to construct, for each wordi{\displaystyle i}, two vectorswi,w~i{\displaystyle w_{i},{\tilde {w}}_{i}}, such that the relative positions of the vectors capture part of the statistical regularities of the wordi{\displaystyle i}. The statistical regularity is defined as the co-occurrence probabilities. Words that resemble each other in meaning should also resemble each other in co-occurrence probabilities. Let thevocabularybeV{\displaystyle V}, the set of all possible words (aka "tokens"). Punctuation is either ignored, or treated as vocabulary, and similarly for capitalization and other typographical details.[1] If two words occur close to each other, then we say that they occur in thecontextof each other. 
For example, if the context length is 3, then we say that in the following sentence GloVe1, coined2from3Global4Vectors5, is6a7model8for9distributed10word11representation12 the word "model8" is in the context of "word11" but not the context of "representation12". A word is not in the context of itself, so "model8" is not in the context of the word "model8", although, if a word appears again in the same context, then it does count. LetXij{\displaystyle X_{ij}}be the number of times that the wordj{\displaystyle j}appears in the context of the wordi{\displaystyle i}over the entire corpus. For example, if the corpus is just "I don't think that that is a problem." we haveXthat,that=2{\displaystyle X_{{\text{that}},{\text{that}}}=2}since the first "that" appears in the second one's context, and vice versa. LetXi=∑j∈VXij{\displaystyle X_{i}=\sum _{j\in V}X_{ij}}be the number of words in the context of all instances of wordi{\displaystyle i}. By counting, we haveXi=2×(context size)×#(occurrences of wordi){\displaystyle X_{i}=2\times ({\text{context size}})\times \#({\text{occurrences of word }}i)}(except for words occurring right at the start and end of the corpus) LetPik:=P(k|i):=XikXi{\displaystyle P_{ik}:=P(k|i):={\frac {X_{ik}}{X_{i}}}}be theco-occurrence probability. That is, if one samples a random occurrence of the wordi{\displaystyle i}in the entire document, and a random word within its context, that word isk{\displaystyle k}with probabilityPik{\displaystyle P_{ik}}. Note thatPik≠Pki{\displaystyle P_{ik}\neq P_{ki}}in general. For example, in a typical modern English corpus,Pado,much{\displaystyle P_{{\text{ado}},{\text{much}}}}is close to one, butPmuch,ado{\displaystyle P_{{\text{much}},{\text{ado}}}}is close to zero. This is because the word "ado" is almost only used in the context of the archaic phrase "much ado about", but the word "much" occurs in all kinds of contexts. 
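The co-occurrence counts X_ij described above can be computed with a simple windowed scan. A sketch using the article's own example corpus (the final period is treated as a token here for simplicity):

```python
from collections import defaultdict

def cooccurrence(tokens, window):
    """X[i][j]: number of times word j appears within `window` positions
    of an occurrence of word i (the word's own position is excluded)."""
    X = defaultdict(lambda: defaultdict(int))
    for pos, w in enumerate(tokens):
        lo = max(0, pos - window)
        hi = min(len(tokens), pos + window + 1)
        for ctx in range(lo, hi):
            if ctx != pos:
                X[w][tokens[ctx]] += 1
    return X

toks = "I don't think that that is a problem .".split()
X = cooccurrence(toks, window=3)
```

As in the text, the two adjacent occurrences of "that" each count the other, giving X["that"]["that"] = 2.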
For example, in a 6 billion token corpus, we have Inspecting the table, we see that the words "ice" and "steam" are indistinguishable along the "water" (often co-occurring with both) and "fashion" (rarely co-occurring with either), but distinguishable along the "solid" (co-occurring more with ice) and "gas" (co-occurring more with "steam"). The idea is to learn two vectorswi,w~i{\displaystyle w_{i},{\tilde {w}}_{i}}for each wordi{\displaystyle i}, such that we have amultinomial logistic regression:wiTw~j+bi+b~j≈ln⁡Pij{\displaystyle w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}\approx \ln P_{ij}}and the termsbi,b~j{\displaystyle b_{i},{\tilde {b}}_{j}}are unimportant parameters. This means that if the wordsi,j{\displaystyle i,j}have similar co-occurrence probabilities(Pik)k∈V≈(Pjk)k∈V{\displaystyle (P_{ik})_{k\in V}\approx (P_{jk})_{k\in V}}, then their vectors should also be similar:wi≈wj{\displaystyle w_{i}\approx w_{j}}. Naively, logistic regression can be run by minimizing the squared loss:L=∑i,j∈V(wiTw~j+bi+b~j−ln⁡Pij)2{\displaystyle L=\sum _{i,j\in V}(w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}-\ln P_{ij})^{2}}However, this would be noisy for rare co-occurrences. To fix the issue, the squared loss is weighted so that the loss is slowly ramped-up as the absolute number of co-occurrencesXij{\displaystyle X_{ij}}increases:L=∑i,j∈Vf(Xij)(wiTw~j+bi+b~j−ln⁡Pij)2{\displaystyle L=\sum _{i,j\in V}f(X_{ij})(w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}-\ln P_{ij})^{2}}wheref(x)={(x/xmax)αifx<xmax1otherwise{\displaystyle f(x)=\left\{{\begin{array}{cc}\left(x/x_{\max }\right)^{\alpha }&{\text{ if }}x<x_{\max }\\1&{\text{ otherwise }}\end{array}}\right.}andxmax,α{\displaystyle x_{\max },\alpha }arehyperparameters. In the original paper, the authors found thatxmax=100,α=3/4{\displaystyle x_{\max }=100,\alpha =3/4}seem to work well in practice. 
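The weighting function f and the weighted squared loss can be written down directly, following the article's formulation with target ln P_ij (all data values below are illustrative):

```python
import numpy as np

def f_weight(x, x_max=100.0, alpha=0.75):
    """The weighting f: ramps up with the co-occurrence count, capped at 1."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_loss(W, W_t, b, b_t, X):
    """Weighted loss sum f(X_ij) (w_i . w~_j + b_i + b~_j - ln P_ij)^2
    over pairs with X_ij > 0, following the equations in the text."""
    P = X / X.sum(axis=1, keepdims=True)   # co-occurrence probabilities P_ij
    i, j = np.nonzero(X)
    err = np.sum(W[i] * W_t[j], axis=1) + b[i] + b_t[j] - np.log(P[i, j])
    return float(np.sum(f_weight(X[i, j]) * err ** 2))

# Tiny synthetic example: 2 words, 3-dimensional vectors.
X = np.array([[2.0, 1.0], [1.0, 2.0]])
rng = np.random.default_rng(0)
W, W_t = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
b, b_t = np.zeros(2), np.zeros(2)
loss = glove_loss(W, W_t, b, b_t, X)
```

Restricting the sum to pairs with X_ij > 0 keeps ln P_ij finite and down-weights rare co-occurrences exactly as the ramped f is intended to do.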
Once a model is trained, we have 4 trained parameters for each word: w_i, w~_i, b_i, b~_i. The parameters b_i, b~_i are irrelevant; only w_i, w~_i are relevant. The authors recommended using w_i + w~_i as the final representation vector for word i, because empirically it worked better than w_i or w~_i alone. GloVe can be used to find relations between words like synonyms, company-product relations, zip codes and cities, etc. However, the unsupervised learning algorithm is not effective in identifying homographs, i.e., words with the same spelling and different meanings. This is because the algorithm calculates a single set of vectors for words with the same morphological structure.[5] The algorithm is also used by the spaCy library to build semantic word-embedding features, computing the top matching words with distance measures such as cosine similarity and Euclidean distance.[6] GloVe was also used as the word representation framework for the online and offline systems designed to detect psychological distress in patient interviews.[7]
https://en.wikipedia.org/wiki/GloVe_(machine_learning)
ELMo (embeddings from language model) is a word embedding method for representing a sequence of words as a corresponding sequence of vectors.[1] It was created by researchers at the Allen Institute for Artificial Intelligence[2] and the University of Washington and first released in February 2018. It is a bidirectional LSTM which takes character-level inputs and produces word-level embeddings, trained on a corpus of about 30 million sentences and 1 billion words. The architecture of ELMo accomplishes a contextual understanding of tokens. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. ELMo was historically important as a pioneer of self-supervised generative pretraining followed by fine-tuning, where a large model is trained to reproduce a large corpus, then augmented with additional task-specific weights and fine-tuned on supervised task data. It was an instrumental step in the evolution towards transformer-based language modelling. ELMo is a multilayered bidirectional LSTM on top of a token embedding layer. The input text sequence is first mapped by an embedding layer into a sequence of vectors. Then two parts are run in parallel over it. The forward part is a 2-layered LSTM with 4096 units and 512-dimension projections, and a residual connection from the first to the second layer. The backward part has the same architecture, but processes the sequence back-to-front. The outputs from all 5 components (embedding layer, two forward LSTM layers, and two backward LSTM layers) are concatenated and multiplied by a linear matrix ("projection matrix") to produce a 512-dimensional representation per input token.
ELMo was pretrained on a text corpus of 1 billion words.[3]The forward part is trained by repeatedly predicting the next token, and the backward part is trained by repeatedly predicting the previous token. After the ELMo model is pretrained, its parameters are frozen, except for the projection matrix, which can be fine-tuned to minimize loss on specific language tasks. This is an early example of thepretraining-fine-tune paradigm. The original paper demonstrated this by improving state of the art on six benchmark NLP tasks. The architecture of ELMo accomplishes a contextual understanding of tokens. For example, the first forward LSTM of ELMo would process each input token in the context of all previous tokens, and the first backward LSTM would process each token in the context of all subsequent tokens. The second forward LSTM would then incorporate those to further contextualize each token. Deep contextualized word representation is useful for many natural language processing tasks, such as coreference resolution and polysemy resolution. For example, consider the sentence She went to thebankto withdraw money. In order to represent the token "bank", the model must resolve its polysemy in context. ELMo is one link in a historical evolution of language modelling. Consider a simple problem ofdocument classification, where we want to assign a label (e.g., "spam", "not spam", "politics", "sports") to a given piece of text. The simplest approach is the "bag of words" approach, where each word in the document is treated independently, and its frequency is used as a feature for classification. This was computationally cheap but ignored the order of words and their context within the sentence.GloVeandWord2Vecbuilt upon this by learning fixed vector representations (embeddings) for words based on their co-occurrence patterns in large text corpora. 
LikeBERT(but unlike "bag of words" such as Word2Vec and GloVe), ELMo word embeddings are context-sensitive, producing different representations for words that share the same spelling. It was trained on a corpus of about 30 million sentences and 1 billion words.[3]Previously, bidirectional LSTM was used for contextualized word representation.[4]ELMo applied the idea to a large scale, achieving state of the art performance. After the 2017 publication ofTransformer architecture, the architecture of ELMo was changed from a multilayered bidirectional LSTM to a Transformer encoder, giving rise toBERT. BERT has the same pretrain-fine-tune workflow, but uses a Transformer for parallelizable training.
https://en.wikipedia.org/wiki/ELMo
Normalized compression distance(NCD) is a way of measuring thesimilaritybetween two objects, be it two documents, two letters, two emails, two music scores, two languages, two programs, two pictures, two systems, two genomes, to name a few. Such a measurement should not be application dependent or arbitrary. A reasonable definition for the similarity between two objects is how difficult it is to transform them into each other. It can be used ininformation retrievalanddata miningforcluster analysis. We assume that the objects one talks about are finitestrings of 0s and 1s. Thus we meanstring similarity. Every computer file is of this form, that is, if an object is a file in a computer it is of this form. One can define theinformation distancebetween stringsx{\displaystyle x}andy{\displaystyle y}as the length of the shortest programp{\displaystyle p}that computesx{\displaystyle x}fromy{\displaystyle y}and vice versa. This shortest program is in a fixed programming language. For technical reasons one uses the theoretical notion ofTuring machines. Moreover, to express the length ofp{\displaystyle p}one uses the notion ofKolmogorov complexity. Then, it has been shown[1] up to logarithmic additive terms which can be ignored. This information distance is shown to be ametric(it satisfies the metric inequalities up to a logarithmic additive term), is universal (it minorizes every computable distance as computed for example from features up to a constant additive term).[1] The information distance is absolute, but if we want to express similarity, then we are more interested in relative ones. For example, if two strings of length 1,000,000 differ by 1000 bits, then we consider that those strings are relatively more similar than two strings of 1000 bits that differ by 1000 bits. Hence we need to normalize to obtain a similarity metric. 
This way one obtains the normalized information distance (NID), where K(x∣y) is the algorithmic information of x given y as input. The NID is called the similarity metric, since the function NID(x, y) has been shown to satisfy the basic requirements for a metric distance measure.[2][3] However, it is not computable or even semicomputable.[4] While the NID metric is not computable, it has an abundance of applications: K can simply be approximated by real-world compressors, with Z(x) denoting the binary length of the file x compressed with compressor Z (for example "gzip", "bzip2", "PPMZ"), which makes the NID easy to apply.[2] Vitanyi and Cilibrasi rewrote the NID in this way to obtain the normalized compression distance (NCD). The NCD is actually a family of distances parametrized by the compressor Z: the better Z is, the closer the NCD approaches the NID, and the better the results are.[3] The normalized compression distance has been used to fully automatically reconstruct language and phylogenetic trees.[2][3] It can also be used for new applications of general clustering and classification of natural data in arbitrary domains,[3] for clustering of heterogeneous data,[3] and for anomaly detection across domains.[5] The NID and NCD have been applied to numerous subjects, including music classification,[3] analysis of network traffic and clustering of computer worms and viruses,[6] authorship attribution,[7] gene expression dynamics,[8] predicting useful versus useless stem cells,[9] critical networks,[10] image registration,[11] and question-answer systems.[12] Researchers from the data mining community use NCD and variants as "parameter-free, feature-free" data-mining tools.[5] One group has experimentally tested a closely related metric on a large variety of sequence benchmarks.
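The compressor-based distance takes the standard form NCD_Z(x, y) = (Z(xy) − min(Z(x), Z(y))) / max(Z(x), Z(y)), where xy is the concatenation of the two strings. A minimal sketch using Python's built-in gzip as the compressor Z (illustrative only; a stronger compressor would approximate the NID better):

```python
import gzip

def Z(data: bytes) -> int:
    """Compressed length of `data`, using gzip as the real-world compressor."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """NCD_Z(x, y) = (Z(xy) - min(Z(x), Z(y))) / max(Z(x), Z(y))."""
    zx, zy, zxy = Z(x), Z(y), Z(x + y)
    return (zxy - min(zx, zy)) / max(zx, zy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"entirely unrelated text with different statistics " * 20
print(ncd(a, a))   # small: a string is highly similar to itself
print(ncd(a, b_))  # larger: little shared structure between the two
```

Because gzip compresses the concatenation of a string with itself almost as well as the string alone, ncd(a, a) comes out near zero, while dissimilar strings score closer to one.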
Comparing their compression method with 51 major methods found in 7 major data-mining conferences over the past decade, they established superiority of the compression method for clustering heterogeneous data and for anomaly detection, and competitiveness in clustering domain data. NCD has the advantage of being robust to noise.[13] However, although NCD appears "parameter-free", practical questions include which compressor to use in computing the NCD, among other possible problems.[14] To measure the information in one string relative to another, one needs to rely on relative semi-distances (NRC).[15] These are measures that do not need to respect the symmetry and triangle inequality properties of a distance. Although the NCD and the NRC seem very similar, they address different questions. The NCD measures how similar both strings are, mostly using the information content, while the NRC indicates the fraction of a target string that cannot be constructed using information from another string. For a comparison, with application to the evolution of primate genomes, see [16]. Objects can be given literally, like the literal four-letter genome of a mouse, or the literal text of War and Peace by Tolstoy. For simplicity we take it that all meaning of the object is represented by the literal object itself. Objects can also be given by name, like "the four-letter genome of a mouse" or "the text of 'War and Peace' by Tolstoy". There are also objects that cannot be given literally, but only by name, and that acquire their meaning from their contexts in the background common knowledge of humankind, like "home" or "red". We are interested in semantic similarity. Using code-word lengths obtained from the page-hit counts returned by Google from the web, we obtain a semantic distance using the NCD formula and viewing Google as a compressor useful for data mining, text comprehension, classification, and translation.
The associated NCD, called the normalized Google distance (NGD), can be rewritten in terms of page counts, where f(x) denotes the number of pages containing the search term x, and f(x, y) denotes the number of pages containing both x and y, as returned by Google or any search engine capable of returning an aggregate page count. The number N can be set to the number of pages indexed, although it is more proper to count each page according to the number of search terms or phrases it contains; as a rule of thumb one can multiply the number of pages by, say, a thousand.[17]
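In its usual form the NGD reads NGD(x, y) = (max(log f(x), log f(y)) − log f(x, y)) / (log N − min(log f(x), log f(y))). A hedged sketch with made-up page counts (the function and numbers are illustrative, not real search-engine data):

```python
import math

def ngd(fx, fy, fxy, N):
    """Normalized Google distance from page counts f(x), f(y), f(x,y) and index size N."""
    lfx, lfy, lfxy, lN = math.log(fx), math.log(fy), math.log(fxy), math.log(N)
    return (max(lfx, lfy) - lfxy) / (lN - min(lfx, lfy))

# Hypothetical counts: terms that almost always co-occur give a small NGD,
# terms that rarely co-occur give a larger one.
print(ngd(fx=1000, fy=1200, fxy=900, N=10**10))
print(ngd(fx=1000, fy=1200, fxy=5, N=10**10))
```

The first pair of terms co-occurs on most of their pages, so the distance is close to zero; the second pair shares almost no pages and scores much higher.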
https://en.wikipedia.org/wiki/Normalized_compression_distance
TheNetflix Prizewas an open competition for the bestcollaborative filteringalgorithmto predict user ratings forfilms, based on previous ratings without any other information about the users or films, i.e. without the users being identified except by numbers assigned for the contest. The competition was held byNetflix, a video streaming service, and was open to anyone who was neither connected with Netflix (current and former employees, agents, close relatives of Netflix employees, etc.) nor a resident of certain blocked countries (such as Cuba or North Korea).[1]On September 21, 2009, the grand prize ofUS$1,000,000was given to the BellKor's Pragmatic Chaos team which bested Netflix's own algorithm for predicting ratings by 10.06%.[2] Netflix provided atrainingdata set of 100,480,507 ratings that 480,189 users gave to 17,770 movies. Each training rating is a quadruplet of the form<user, movie, date of grade, grade>. The user and movie fields areintegerIDs, while grades are from 1 to 5 (integer) stars.[3] Thequalifyingdata set contains over 2,817,131tripletsof the form<user, movie, date of grade>, with grades known only to the jury. A participating team's algorithm must predict grades on the entire qualifying set, but they are informed of the score for only half of the data: aquizset of 1,408,342 ratings. The other half is thetestset of 1,408,789, and performance on this is used by the jury to determine potential prize winners. Only the judges know which ratings are in the quiz set, and which are in the test set—this arrangement is intended to make it difficult tohill climbon the test set. Submitted predictions are scored against the true grades in the form ofroot mean squared error(RMSE), and the goal is to reduce this error as much as possible. Note that, while the actual grades are integers in the range 1 to 5, submitted predictions need not be. Netflix also identified aprobesubset of 1,408,395 ratings within thetrainingdata set. 
Theprobe,quiz, andtestdata sets were chosen to have similar statistical properties. In summary, the data used in the Netflix Prize looks as follows: For each movie, the title and year of release are provided in a separate dataset. No information at all is provided about users. In order to protect the privacy of the customers, "some of the rating data for some customers in the training and qualifying sets have been deliberately perturbed in one or more of the following ways: deleting ratings; inserting alternative ratings and dates; and modifying rating dates."[2] The training set is constructed such that the average user rated over 200 movies, and the average movie was rated by over 5000 users. But there is widevariancein the data—some movies in the training set have as few as 3 ratings,[4]while one user rated over 17,000 movies.[5] There was some controversy as to the choice ofRMSEas the defining metric. It has been claimed that even as small an improvement as 1% RMSE results in a significant difference in the ranking of the "top-10" most recommended movies for a user.[6] Prizes were based on improvement over Netflix's own algorithm, calledCinematch, or the previous year's score if a team has made improvement beyond a certain threshold. A trivial algorithm that predicts for each movie in the quiz set its average grade from the training data produces an RMSE of 1.0540. Cinematch uses "straightforward statisticallinear modelswith a lot of data conditioning."[7]The performance of Cinematch had plateaued by 2006.[8] Using only the training data, Cinematch scores an RMSE of 0.9514 on the quiz data, roughly a 10% improvement over the trivial algorithm. Cinematch has a similar performance on the test set, 0.9525. In order to win the grand prize of $1,000,000, a participating team had to improve this by another 10%, to achieve 0.8572 on the test set.[2]Such an improvement on the quiz set corresponds to an RMSE of 0.8563. 
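The scoring metric itself is simple to compute. A minimal sketch with made-up numbers (illustrative, not Netflix's scoring code):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and true grades."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predicted, actual)) / len(actual))

# Predictions need not be integers, even though true grades are 1-5 stars.
print(rmse([3.8, 2.1, 4.6], [4, 2, 5]))
```

A perfect submission scores 0; the trivial movie-average predictor described above scores 1.0540 on the quiz set, and the grand prize required 0.8572 on the test set.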
As long as no team won the grand prize, a progress prize of $50,000 was awarded every year for the best result thus far. However, in order to win this prize, an algorithm had to improve the RMSE on the quiz set by at least 1% over the previous progress prize winner (or over Cinematch, the first year). If no submission succeeded, the progress prize was not to be awarded for that year. To win a progress or grand prize, a participant had to provide source code and a description of the algorithm to the jury within one week after being contacted by them. Following verification, the winner also had to provide a non-exclusive license to Netflix. Netflix would publish only the description, not the source code, of the system. (To keep their algorithm and source code secret, a team could choose not to claim a prize.) The jury also kept their predictions secret from other participants. A team could send as many attempts to predict grades as they wished. Originally submissions were limited to once a week, but the interval was quickly modified to once a day. A team's best submission so far counted as their current submission. Once one of the teams succeeded in improving the RMSE by 10% or more, the jury would issue a last call, giving all teams 30 days to send their submissions. Only then was the team with the best submission asked for the algorithm description, source code, and non-exclusive license and, after successful verification, declared a grand prize winner. The contest would last until the grand prize winner was declared. Had no one received the grand prize, it would have lasted for at least five years (until October 2, 2011). After that date, the contest could have been terminated at any time at Netflix's sole discretion. The competition began on October 2, 2006.
By October 8, a team called WXYZConsulting had already beaten Cinematch's results.[9] By October 15, there were three teams who had beaten Cinematch, one of them by 1.06%, enough to qualify for the annual progress prize.[10]By June 2007 over 20,000 teams had registered for the competition from over 150 countries. 2,000 teams had submitted over 13,000 prediction sets.[3] Over the first year of the competition, a handful of front-runners traded first place. The more prominent ones were:[11] The algorithms used by the leading teams were usually anensembleofsingular value decomposition,k-nearest neighbor,neural networks, and so on.[12][13] On August 12, 2007, many contestants gathered at the KDD Cup and Workshop 2007, held atSan Jose, California.[14]During the workshop all four of the top teams on the leaderboard at that time presented their techniques. The team from IBM Research—Yan Liu, Saharon Rosset, Claudia Perlich, and Zhenzhen Kou—won the third place in Task 1 and first place in Task 2. Over the second year of the competition, only three teams reached the leading position: On September 2, 2007, the competition entered the "last call" period for the 2007 Progress Prize. Over 40,000 teams from 186 countries had entered the contest. They had thirty days to tender submissions for consideration. At the beginning of this period the leading team was BellKor, with an RMSE of 0.8728 (8.26% improvement), followed by Dinosaur Planet (RMSE = 0.8769; 7.83% improvement),[15]and Gravity (RMSE = 0.8785; 7.66% improvement). In the last hour of the last call period, an entry by "KorBell" took first place. 
This turned out to be an alternate name for Team BellKor.[16] On November 13, 2007, team KorBell (formerly BellKor) was declared the winner of the $50,000 Progress Prize with an RMSE of 0.8712 (8.43% improvement).[17] The team consisted of three researchers from AT&T Labs: Yehuda Koren, Robert Bell, and Chris Volinsky.[18] As required, they published a description of their algorithm.[12] The 2008 Progress Prize was awarded to the team BellKor. Their submission, combined with that of a different team, BigChaos, achieved an RMSE of 0.8616 with 207 predictor sets.[19] The joint team consisted of two researchers from Commendo Research & Consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), and three researchers from AT&T Labs, Yehuda Koren, Robert Bell, and Chris Volinsky (originally team BellKor).[20] As required, they published a description of their algorithm.[21][22] This was the final Progress Prize, because obtaining the required 1% improvement over the 2008 Progress Prize would be sufficient to qualify for the Grand Prize. The prize money was donated to charities chosen by the winners. On July 25, 2009, the team "The Ensemble", a merger of the teams "Grand Prize Team" and "Opera Solutions and Vandelay United", achieved a 10.09% improvement over Cinematch (a quiz RMSE of 0.8554).[23][24] On June 26, 2009, the team "BellKor's Pragmatic Chaos", a merger of teams "Bellkor in BigChaos" and "Pragmatic Theory", achieved a 10.05% improvement over Cinematch (a quiz RMSE of 0.8558). The Netflix Prize competition then entered the "last call" period for the Grand Prize. In accord with the rules, teams had thirty days, until July 26, 2009, 18:42:37 UTC, to make submissions to be considered for this prize.[25] On July 26, 2009, Netflix stopped gathering submissions for the Netflix Prize contest.[26] The final standing of the leaderboard at that time showed that two teams met the minimum requirements for the Grand Prize.
"The Ensemble" with a 10.10% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8553), and "BellKor's Pragmatic Chaos" with a 10.09% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8554).[27][28]The Grand Prize winner was to be the one with the better performance on the Test set. On September 18, 2009, Netflix announced team "BellKor's Pragmatic Chaos" as the prize winner (a Test RMSE of 0.8567), and the prize was awarded to the team in a ceremony on September 21, 2009.[29]"The Ensemble" team had matched BellKor's result, but since BellKor submitted their results 20 minutes earlier, the rules award the prize to BellKor.[24][30] The joint-team "BellKor's Pragmatic Chaos" consisted of two Austrian researchers from Commendo Research & Consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), two researchers fromAT&T Labs, Robert Bell, and Chris Volinsky, Yehuda Koren fromYahoo!(originally team BellKor) and two researchers from Pragmatic Theory, Martin Piotte and Martin Chabbert.[31]As required, they published a description of their algorithm.[32] The team reported to have achieved the "dubious honors" (sicNetflix) of the worst RMSEs on theQuizandTestdata sets from among the 44,014 submissions made by 5,169 teams was "Lanterne Rouge", led by J.M. Linacre, who was also a member of "The Ensemble" team. Linacre claimed it was made deliberately bad, as befitting the name of "Lanterne rouge".[33] At the conclusion of the competition, Netflix announced a planned sequel. It would present contestants with demographic and behavioral data, including renters' ages, gender, ZIP codes, genre ratings and previously chosen movies, but not ratings. The task is to predict which movies those people will like. There would be no specific accuracy target for winning the prize. 
Instead, $500,000 would be awarded to the team in the lead after 6 months, and another $500,000 to the leader after 18 months.[30] On March 12, 2010, Netflix announced that it would not pursue a second Prize competition that it had announced the previous August. The decision was in response to a lawsuit and Federal Trade Commission privacy concerns.[34]Some participants, such as Volinsky, expressed disappointment about the cancellation.[13] Although the data sets were constructed to preserve customer privacy, the Prize has been criticized by privacy advocates. In 2007 two researchers fromThe University of Texas at Austin(Vitaly Shmatikovand Arvind Narayanan) were able toidentify individual usersby matching the data sets with film ratings on theInternet Movie Database.[35][36] On December 17, 2009, four Netflix users filed aclass action lawsuitagainst Netflix, alleging that Netflix had violated U.S.fair tradelaws and theVideo Privacy Protection Actby releasing the datasets.[37]There was public debate aboutprivacy for research participants. On March 19, 2010, Netflix reached a settlement with the plaintiffs, after which they voluntarily dismissed the lawsuit.
https://en.wikipedia.org/wiki/Netflix_Prize
The field ofsystem identificationusesstatistical methodsto buildmathematical modelsofdynamical systemsfrom measured data.[1]System identification also includes theoptimaldesign of experimentsfor efficiently generating informative data forfittingsuch models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is calledblack boxsystem identification. A dynamic mathematical model in this context is a mathematical description of the dynamic behavior of asystemor process in either the time or frequency domain. Examples include: One of the many possible applications of system identification is incontrol systems. For example, it is the basis for moderndata-driven control systems, in which concepts of system identification are integrated into the controller design, and lay the foundations for formal controller optimality proofs. System identification techniques can utilize both input and output data (e.g.eigensystem realization algorithm) or can include only the output data (e.g.frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available. In addition, the final estimated responses from arbitrary inputs can be analyzed by investigating their correlation and spectral properties.[2] The quality of system identification depends on the quality of the inputs, which are under the control of the systems engineer. Therefore, systems engineers have long used the principles of thedesign of experiments.[3]In recent decades, engineers have increasingly used the theory ofoptimal experimental designto specify inputs that yieldmaximally preciseestimators.[4][5] One could build awhite-boxmodel based onfirst principles, e.g. 
a model for a physical process from theNewton equations, but in many cases, such models will be overly complex and possibly even impossible to obtain in reasonable time due to the complex nature of many systems and processes. A more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. This approach is called system identification. Two types of models are common in the field of system identification: In the context ofnonlinear system identificationJin et al.[10]describe grey-box modeling by assuming a model structure a priori and then estimating the model parameters. Parameter estimation is relatively easy if the model form is known but this is rarely the case. Alternatively, the structure or model terms for both linear and highly complex nonlinear models can be identified usingNARMAXmethods.[11]This approach is completely flexible and can be used with grey box models where the algorithms are primed with the known terms, or with completely black-box models where the model terms are selected as part of the identification procedure. Another advantage of this approach is that the algorithms will just select linear terms if the system under study is linear, and nonlinear terms if the system is nonlinear, which allows a great deal of flexibility in the identification. Incontrol systemsapplications, the objective of engineers is to obtain agood performanceof theclosed-loopsystem, which is the one comprising the physical system, the feedback loop and the controller. This performance is typically achieved by designing the control law relying on a model of the system, which needs to be identified starting from experimental data. 
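As a toy illustration of black-box identification, consider a hypothetical first-order discrete-time system (not from the article): one fits a model y[k] = a·y[k−1] + b·u[k−1] to measured input-output data by ordinary least squares:

```python
import numpy as np

# Hypothetical "true" system: y[k] = 0.7*y[k-1] + 0.3*u[k-1] + small noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)          # input signal chosen by the experimenter
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.7 * y[k - 1] + 0.3 * u[k - 1] + 0.01 * rng.standard_normal()

# Black-box ARX fit: regress y[k] on (y[k-1], u[k-1]) with least squares.
Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # estimated (a, b), close to the true (0.7, 0.3)
```

The quality of the estimate depends on how informative the input u is, which is exactly the concern addressed by the design of experiments mentioned above.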
If the model identification procedure is aimed at control purposes, what really matters is not to obtain the best possible model that fits the data, as in the classical system identification approach, but to obtain a model satisfying enough for the closed-loop performance. This more recent approach is calledidentification for control, orI4Cin short. The idea behind I4C can be better understood by considering the following simple example.[12]Consider a system withtruetransfer functionG0(s){\displaystyle G_{0}(s)}: and an identified modelG^(s){\displaystyle {\hat {G}}(s)}: From a classical system identification perspective,G^(s){\displaystyle {\hat {G}}(s)}isnot, in general, agoodmodel forG0(s){\displaystyle G_{0}(s)}. In fact, modulus and phase ofG^(s){\displaystyle {\hat {G}}(s)}are different from those ofG0(s){\displaystyle G_{0}(s)}at low frequency. What is more, whileG0(s){\displaystyle G_{0}(s)}is anasymptotically stablesystem,G^(s){\displaystyle {\hat {G}}(s)}is a simply stable system. However,G^(s){\displaystyle {\hat {G}}(s)}may still be a model good enough for control purposes. In fact, if one wants to apply apurely proportionalnegative feedbackcontroller with high gainK{\displaystyle K}, the closed-loop transfer function from the reference to the output is, forG0(s){\displaystyle G_{0}(s)} and forG^(s){\displaystyle {\hat {G}}(s)} SinceK{\displaystyle K}is very large, one has that1+K≈K{\displaystyle 1+K\approx K}. Thus, the two closed-loop transfer functions are indistinguishable. In conclusion,G^(s){\displaystyle {\hat {G}}(s)}is aperfectly acceptableidentified model for thetruesystem if such feedback control law has to be applied. Whether or not a model isappropriatefor control design depends not only on the plant/model mismatch but also on the controller that will be implemented. 
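The effect can be checked numerically. The sketch below uses hypothetical stand-ins for the article's transfer functions (which are not reproduced here): an asymptotically stable G0 and a simply stable Ĝ that differ at low frequency, yet give nearly identical closed loops under high-gain proportional feedback T(s) = K·G(s) / (1 + K·G(s)):

```python
def G0(s):
    return 1.0 / (s + 1.0)   # hypothetical "true" plant (asymptotically stable)

def Ghat(s):
    return 1.0 / s           # hypothetical identified model (pole at the origin)

def closed_loop(G, s, K=1000.0):
    """Closed-loop transfer function K*G/(1 + K*G) for proportional gain K."""
    return K * G(s) / (1.0 + K * G(s))

s = 2j  # a point on the imaginary axis (omega = 2 rad/s)
print(abs(closed_loop(G0, s)), abs(closed_loop(Ghat, s)))  # both close to 1
```

With K large, 1 + K·G ≈ K·G wherever |K·G| ≫ 1, so both closed loops are close to 1 despite the open-loop mismatch, which is the point of the I4C example.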
As such, in the I4C framework, given a control performance objective, the control engineer has to design the identification phase in such a way that the performance achieved by the model-based controller on the true system is as high as possible. Sometimes, it is even more convenient to design a controller without explicitly identifying a model of the system, working directly on experimental data instead. This is the case of direct data-driven control systems. A common understanding in artificial intelligence is that the controller has to generate the next move for a robot. For example, the robot starts in the maze and then decides to move forward. Model predictive control determines the next action indirectly. The term "model" refers to a forward model, which doesn't provide the correct action but simulates a scenario.[13] A forward model is equivalent to a physics engine used in game programming. The model takes an input and calculates the future state of the system. The reason why dedicated forward models are constructed is that they allow the overall control process to be divided. The first question is how to predict the future states of the system, that is, how to simulate a plant over a timespan for different input values. The second task is to search for a sequence of input values which brings the plant into a goal state. This is called predictive control. The forward model is the most important aspect of an MPC controller. It has to be created before the solver can be realized. If it's unclear what the behavior of a system is, it's not possible to search for meaningful actions. The workflow for creating a forward model is called system identification. The idea is to formalize a system in a set of equations which will behave like the original system.[14] The error between the real system and the forward model can be measured.
There are many techniques available for creating a forward model: ordinary differential equations are the classical one, used in physics engines like Box2D. A more recent technique is a neural network for creating the forward model.[15]
https://en.wikipedia.org/wiki/System_identification
Inmathematics, anasymmetric normon avector spaceis a generalization of the concept of anorm. Anasymmetric normon arealvector spaceX{\displaystyle X}is afunctionp:X→[0,+∞){\displaystyle p:X\to [0,+\infty )}that has the following properties: Asymmetric norms differ fromnormsin that they need not satisfy the equalityp(−x)=p(x).{\displaystyle p(-x)=p(x).} If the condition of positive definiteness is omitted, thenp{\displaystyle p}is anasymmetric seminorm. A weaker condition than positive definiteness isnon-degeneracy: that forx≠0,{\displaystyle x\neq 0,}at least one of the two numbersp(x){\displaystyle p(x)}andp(−x){\displaystyle p(-x)}is not zero. On thereal lineR,{\displaystyle \mathbb {R} ,}the functionp{\displaystyle p}given byp(x)={|x|,x≤0;2|x|,x≥0;{\displaystyle p(x)={\begin{cases}|x|,&x\leq 0;\\2|x|,&x\geq 0;\end{cases}}}is an asymmetric norm but not a norm. In a real vector spaceX,{\displaystyle X,}theMinkowski functionalpB{\displaystyle p_{B}}of a convex subsetB⊆X{\displaystyle B\subseteq X}that contains the origin is defined by the formulapB(x)=inf{r≥0:x∈rB}{\displaystyle p_{B}(x)=\inf \left\{r\geq 0:x\in rB\right\}\,}forx∈X{\displaystyle x\in X}. 
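The piecewise function in the example above is easy to check numerically. The sketch below (plain Python; the grid of test points is chosen arbitrarily) verifies subadditivity and nonnegative homogeneity while confirming that p(−x) = p(x) fails, so p is an asymmetric norm but not a norm.

```python
def p(x):
    """The asymmetric norm from the example: |x| for x <= 0 and 2|x| for x >= 0."""
    return 2 * abs(x) if x >= 0 else abs(x)

grid = [i / 2 for i in range(-10, 11)]

# Triangle inequality and homogeneity for nonnegative scalars both hold.
subadditive = all(p(a + b) <= p(a) + p(b) + 1e-12 for a in grid for b in grid)
homogeneous = all(abs(p(r * x) - r * p(x)) < 1e-12 for r in [0, 0.5, 2] for x in grid)

# Symmetry fails: p(1) = 2 but p(-1) = 1, so p is not a norm.
symmetric = all(p(x) == p(-x) for x in grid)
```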
This functional is an asymmetric seminorm ifB{\displaystyle B}is an absorbing set, which means that⋃r≥0rB=X,{\displaystyle \bigcup _{r\geq 0}rB=X,}and ensures thatp(x){\displaystyle p(x)}is finite for eachx∈X.{\displaystyle x\in X.} IfB∗⊆Rn{\displaystyle B^{*}\subseteq \mathbb {R} ^{n}}is aconvex setthat contains the origin, then an asymmetric seminormp{\displaystyle p}can be defined onRn{\displaystyle \mathbb {R} ^{n}}by the formulap(x)=maxφ∈B∗⟨φ,x⟩.{\displaystyle p(x)=\max _{\varphi \in B^{*}}\langle \varphi ,x\rangle .}For instance, ifB∗⊆R2{\displaystyle B^{*}\subseteq \mathbb {R} ^{2}}is the square with vertices(±1,±1),{\displaystyle (\pm 1,\pm 1),}thenp{\displaystyle p}is thetaxicab normx=(x0,x1)↦|x0|+|x1|.{\displaystyle x=\left(x_{0},x_{1}\right)\mapsto \left|x_{0}\right|+\left|x_{1}\right|.}Different convex sets yield different seminorms, and every asymmetric seminorm onRn{\displaystyle \mathbb {R} ^{n}}can be obtained from some convex set, called itsdual unit ball. Therefore, asymmetric seminorms are inone-to-one correspondencewith convex sets that contain the origin. More generally, ifX{\displaystyle X}is afinite-dimensionalreal vector space andB∗⊆X∗{\displaystyle B^{*}\subseteq X^{*}}is a compact convex subset of thedual spaceX∗{\displaystyle X^{*}}that contains the origin, thenp(x)=maxφ∈B∗φ(x){\displaystyle p(x)=\max _{\varphi \in B^{*}}\varphi (x)}is an asymmetric seminorm onX.{\displaystyle X.}
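The dual-ball construction can be checked concretely for the square with vertices (±1, ±1). A linear functional attains its maximum over a polytope at a vertex, so scanning the four vertices suffices; the helper names below are illustrative.

```python
# p(x) = max over phi in B* of <phi, x>, with B* given by its vertices.
SQUARE = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def dual_ball_seminorm(x, vertices):
    """Maximum of the inner product <v, x> over the vertices of the dual set."""
    return max(v0 * x[0] + v1 * x[1] for (v0, v1) in vertices)

def taxicab(x):
    return abs(x[0]) + abs(x[1])

points = [(3, -2), (0, 0), (-1, -4), (2.5, 7)]
agrees = all(dual_ball_seminorm(x, SQUARE) == taxicab(x) for x in points)
```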
https://en.wikipedia.org/wiki/Asymmetric_norm
Infunctional analysisand related areas ofmathematics, ametrizable(resp.pseudometrizable)topological vector space(TVS) is a TVS whose topology is induced by a metric (resp.pseudometric). AnLM-spaceis aninductive limitof a sequence oflocally convexmetrizable TVS. Apseudometricon a setX{\displaystyle X}is a mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }satisfying the following properties: A pseudometric is called ametricif it satisfies: Ultrapseudometric A pseudometricd{\displaystyle d}onX{\displaystyle X}is called anultrapseudometricor astrong pseudometricif it satisfies: Pseudometric space Apseudometric spaceis a pair(X,d){\displaystyle (X,d)}consisting of a setX{\displaystyle X}and a pseudometricd{\displaystyle d}onX{\displaystyle X}such thatX{\displaystyle X}'s topology is identical to the topology onX{\displaystyle X}induced byd.{\displaystyle d.}We call a pseudometric space(X,d){\displaystyle (X,d)}ametric space(resp.ultrapseudometric space) whend{\displaystyle d}is a metric (resp. ultrapseudometric). Ifd{\displaystyle d}is a pseudometric on a setX{\displaystyle X}then the collection ofopen balls:Br(z):={x∈X:d(x,z)<r}{\displaystyle B_{r}(z):=\{x\in X:d(x,z)<r\}}asz{\displaystyle z}ranges overX{\displaystyle X}andr>0{\displaystyle r>0}ranges over the positive real numbers, forms a basis for a topology onX{\displaystyle X}that is called thed{\displaystyle d}-topologyor thepseudometric topologyonX{\displaystyle X}induced byd.{\displaystyle d.} Pseudometrizable space A topological space(X,τ){\displaystyle (X,\tau )}is calledpseudometrizable(resp.metrizable,ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric)d{\displaystyle d}onX{\displaystyle X}such thatτ{\displaystyle \tau }is equal to the topology induced byd.{\displaystyle d.}[1] An additivetopological groupis an additive group endowed with a topology, called agroup topology, under which addition and negation become continuous operators. 
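A small example may make the pseudometric/metric distinction concrete. The map below on ℝ² (an illustrative choice) ignores the second coordinate, so it satisfies the pseudometric axioms while distinct points can sit at distance zero; its open balls are vertical strips rather than disks.

```python
def d(x, y):
    """Pseudometric on R^2 that only sees the first coordinate."""
    return abs(x[0] - y[0])

# Distinct points at distance zero: d is a pseudometric but not a metric.
a, b = (0.0, 1.0), (0.0, 2.0)
zero_distance_but_distinct = d(a, b) == 0.0 and a != b

# Symmetry and the triangle inequality still hold.
pts = [(0.0, 1.0), (2.0, -1.0), (-3.0, 5.0)]
symmetric = all(d(p, q) == d(q, p) for p in pts for q in pts)
triangle = all(d(p, r) <= d(p, q) + d(q, r) for p in pts for q in pts for r in pts)
```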
A topologyτ{\displaystyle \tau }on a real or complex vector spaceX{\displaystyle X}is called avector topologyor aTVS topologyif it makes the operations of vector addition and scalar multiplication continuous (that is, if it makesX{\displaystyle X}into atopological vector space). Everytopological vector space(TVS)X{\displaystyle X}is an additive commutative topological group but not all group topologies onX{\displaystyle X}are vector topologies. This is because, although a group topology on a vector spaceX{\displaystyle X}makes addition and negation continuous, it may fail to make scalar multiplication continuous. For instance, thediscrete topologyon any non-trivial vector space makes addition and negation continuous but does not make scalar multiplication continuous. IfX{\displaystyle X}is an additive group then we say that a pseudometricd{\displaystyle d}onX{\displaystyle X}istranslation invariantor justinvariantif it satisfies any of the following equivalent conditions: IfX{\displaystyle X}is atopological groupthen avalueorG-seminormonX{\displaystyle X}(theGstands for Group) is a real-valued mapp:X→R{\displaystyle p:X\rightarrow \mathbb {R} }with the following properties:[2] where we call a G-seminorm aG-normif it satisfies the additional condition: Ifp{\displaystyle p}is a value on a vector spaceX{\displaystyle X}then: Theorem[2]—Suppose thatX{\displaystyle X}is an additive commutative group. Ifd{\displaystyle d}is a translation invariant pseudometric onX{\displaystyle X}then the mapp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a value onX{\displaystyle X}calledthe value associated withd{\displaystyle d}, and moreover,d{\displaystyle d}generates a group topology onX{\displaystyle X}(i.e. thed{\displaystyle d}-topology onX{\displaystyle X}makesX{\displaystyle X}into a topological group). 
Conversely, ifp{\displaystyle p}is a value onX{\displaystyle X}then the mapd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}and the value associated withd{\displaystyle d}is justp.{\displaystyle p.} Theorem[2]—If(X,τ){\displaystyle (X,\tau )}is an additive commutativetopological groupthen the following are equivalent: If(X,τ){\displaystyle (X,\tau )}is Hausdorff then the word "pseudometric" in the above statement may be replaced by the word "metric." A commutative topological group is metrizable if and only if it is Hausdorff and pseudometrizable. LetX{\displaystyle X}be a non-trivial (i.e.X≠{0}{\displaystyle X\neq \{0\}}) real or complex vector space and letd{\displaystyle d}be the translation-invarianttrivial metriconX{\displaystyle X}defined byd(x,x)=0{\displaystyle d(x,x)=0}andd(x,y)=1for allx,y∈X{\displaystyle d(x,y)=1{\text{ for all }}x,y\in X}such thatx≠y.{\displaystyle x\neq y.}The topologyτ{\displaystyle \tau }thatd{\displaystyle d}induces onX{\displaystyle X}is thediscrete topology, which makes(X,τ){\displaystyle (X,\tau )}into a commutative topological group under addition but doesnotform a vector topology onX{\displaystyle X}because(X,τ){\displaystyle (X,\tau )}isdisconnectedbut every vector topology is connected. What fails is that scalar multiplication isn't continuous on(X,τ).{\displaystyle (X,\tau ).} This example shows that a translation-invariant (pseudo)metric isnotenough to guarantee a vector topology, which leads us to define paranorms andF-seminorms. 
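The failure of scalar multiplication in the trivial-metric example can be seen numerically: the scalars s_n = 1/n converge to 0, but under the trivial metric the points s_n·x stay at distance 1 from the origin for every nonzero x. A quick sketch (the choice x = 5.0 is arbitrary):

```python
def trivial_metric(x, y):
    """The translation-invariant trivial metric: 0 on the diagonal, 1 elsewhere."""
    return 0.0 if x == y else 1.0

# s_n = 1/n -> 0 in the scalars, yet d(s_n * x, 0) stays 1 for x != 0,
# so s_n * x does not converge to 0 in the induced (discrete) topology.
x = 5.0
distances = [trivial_metric((1 / n) * x, 0.0) for n in range(1, 100)]
```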
A collectionN{\displaystyle {\mathcal {N}}}of subsets of a vector space is calledadditive[5]if for everyN∈N,{\displaystyle N\in {\mathcal {N}},}there exists someU∈N{\displaystyle U\in {\mathcal {N}}}such thatU+U⊆N.{\displaystyle U+U\subseteq N.} Continuity of addition at 0—If(X,+){\displaystyle (X,+)}is agroup(as all vector spaces are),τ{\displaystyle \tau }is a topology onX,{\displaystyle X,}andX×X{\displaystyle X\times X}is endowed with theproduct topology, then the addition mapX×X→X{\displaystyle X\times X\to X}(i.e. the map(x,y)↦x+y{\displaystyle (x,y)\mapsto x+y}) is continuous at the origin ofX×X{\displaystyle X\times X}if and only if the set ofneighborhoodsof the origin in(X,τ){\displaystyle (X,\tau )}is additive. This statement remains true if the word "neighborhood" is replaced by "open neighborhood."[5] All of the above conditions are consequently necessary for a topology to form a vector topology. Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valuedsubadditivefunctions. These functions can then be used to prove many of the basic properties of topological vector spaces and also show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additivetopological groups. 
Theorem—LetU∙=(Ui)i=0∞{\displaystyle U_{\bullet }=\left(U_{i}\right)_{i=0}^{\infty }}be a collection of subsets of a vector space such that0∈Ui{\displaystyle 0\in U_{i}}andUi+1+Ui+1⊆Ui{\displaystyle U_{i+1}+U_{i+1}\subseteq U_{i}}for alli≥0.{\displaystyle i\geq 0.}For allu∈U0,{\displaystyle u\in U_{0},}letS(u):={n∙=(n1,…,nk):k≥1,ni≥0for alli,andu∈Un1+⋯+Unk}.{\displaystyle \mathbb {S} (u):=\left\{n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)~:~k\geq 1,n_{i}\geq 0{\text{ for all }}i,{\text{ and }}u\in U_{n_{1}}+\cdots +U_{n_{k}}\right\}.} Definef:X→[0,1]{\displaystyle f:X\to [0,1]}byf(x)=1{\displaystyle f(x)=1}ifx∉U0{\displaystyle x\not \in U_{0}}and otherwise letf(x):=inf{2−n1+⋯+2−nk:n∙=(n1,…,nk)∈S(x)}.{\displaystyle f(x):=\inf _{}\left\{2^{-n_{1}}+\cdots +2^{-n_{k}}~:~n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)\in \mathbb {S} (x)\right\}.} Thenf{\displaystyle f}issubadditive(meaningf(x+y)≤f(x)+f(y)for allx,y∈X{\displaystyle f(x+y)\leq f(x)+f(y){\text{ for all }}x,y\in X}) andf=0{\displaystyle f=0}on⋂i≥0Ui,{\displaystyle \bigcap _{i\geq 0}U_{i},}so in particularf(0)=0.{\displaystyle f(0)=0.}If allUi{\displaystyle U_{i}}aresymmetric setsthenf(−x)=f(x){\displaystyle f(-x)=f(x)}and if allUi{\displaystyle U_{i}}are balanced thenf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}and allx∈X.{\displaystyle x\in X.}IfX{\displaystyle X}is a topological vector space and if allUi{\displaystyle U_{i}}are neighborhoods of the origin thenf{\displaystyle f}is continuous, where if in additionX{\displaystyle X}is Hausdorff andU∙{\displaystyle U_{\bullet }}forms a basis of balanced neighborhoods of the origin inX{\displaystyle X}thend(x,y):=f(x−y){\displaystyle d(x,y):=f(x-y)}is a metric defining the vector topology onX.{\displaystyle X.} Assume thatn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}always denotes a finite sequence of non-negative integers and use the 
notation:∑2−n∙:=2−n1+⋯+2−nkand∑Un∙:=Un1+⋯+Unk.{\displaystyle \sum 2^{-n_{\bullet }}:=2^{-n_{1}}+\cdots +2^{-n_{k}}\quad {\text{ and }}\quad \sum U_{n_{\bullet }}:=U_{n_{1}}+\cdots +U_{n_{k}}.} For any integersn≥0{\displaystyle n\geq 0}andd>2,{\displaystyle d>2,}Un⊇Un+1+Un+1⊇Un+1+Un+2+Un+2⊇Un+1+Un+2+⋯+Un+d+Un+d+1+Un+d+1.{\displaystyle U_{n}\supseteq U_{n+1}+U_{n+1}\supseteq U_{n+1}+U_{n+2}+U_{n+2}\supseteq U_{n+1}+U_{n+2}+\cdots +U_{n+d}+U_{n+d+1}+U_{n+d+1}.} From this it follows that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of distinct positive integers then∑Un∙⊆U−1+min(n∙).{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{-1+\min \left(n_{\bullet }\right)}.} It will now be shown by induction onk{\displaystyle k}that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of non-negative integers such that∑2−n∙≤2−M{\displaystyle \sum 2^{-n_{\bullet }}\leq 2^{-M}}for some integerM≥0{\displaystyle M\geq 0}then∑Un∙⊆UM.{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{M}.}This is clearly true fork=1{\displaystyle k=1}andk=2{\displaystyle k=2}so assume thatk>2,{\displaystyle k>2,}which implies that allni{\displaystyle n_{i}}are positive. If allni{\displaystyle n_{i}}are distinct then this step is done, and otherwise pick distinct indicesi<j{\displaystyle i<j}such thatni=nj{\displaystyle n_{i}=n_{j}}and constructm∙=(m1,…,mk−1){\displaystyle m_{\bullet }=\left(m_{1},\ldots ,m_{k-1}\right)}fromn∙{\displaystyle n_{\bullet }}by replacing eachni{\displaystyle n_{i}}withni−1{\displaystyle n_{i}-1}and deleting thejth{\displaystyle j^{\text{th}}}element ofn∙{\displaystyle n_{\bullet }}(all other elements ofn∙{\displaystyle n_{\bullet }}are transferred tom∙{\displaystyle m_{\bullet }}unchanged). 
Observe that∑2−n∙=∑2−m∙{\displaystyle \sum 2^{-n_{\bullet }}=\sum 2^{-m_{\bullet }}}and∑Un∙⊆∑Um∙{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}}(becauseUni+Unj⊆Uni−1{\displaystyle U_{n_{i}}+U_{n_{j}}\subseteq U_{n_{i}-1}}) so by appealing to the inductive hypothesis we conclude that∑Un∙⊆∑Um∙⊆UM,{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}\subseteq U_{M},}as desired. It is clear thatf(0)=0{\displaystyle f(0)=0}and that0≤f≤1{\displaystyle 0\leq f\leq 1}so to prove thatf{\displaystyle f}is subadditive, it suffices to prove thatf(x+y)≤f(x)+f(y){\displaystyle f(x+y)\leq f(x)+f(y)}whenx,y∈X{\displaystyle x,y\in X}are such thatf(x)+f(y)<1,{\displaystyle f(x)+f(y)<1,}which implies thatx,y∈U0.{\displaystyle x,y\in U_{0}.}This is an exercise. If allUi{\displaystyle U_{i}}are symmetric thenx∈∑Un∙{\displaystyle x\in \sum U_{n_{\bullet }}}if and only if−x∈∑Un∙{\displaystyle -x\in \sum U_{n_{\bullet }}}from which it follows thatf(−x)≤f(x){\displaystyle f(-x)\leq f(x)}andf(−x)≥f(x).{\displaystyle f(-x)\geq f(x).}If allUi{\displaystyle U_{i}}are balanced then the inequalityf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all unit scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}is proved similarly. Becausef{\displaystyle f}is a nonnegative subadditive function satisfyingf(0)=0,{\displaystyle f(0)=0,}as described in the article onsublinear functionals,f{\displaystyle f}is uniformly continuous onX{\displaystyle X}if and only iff{\displaystyle f}is continuous at the origin. 
If allUi{\displaystyle U_{i}}are neighborhoods of the origin then for any realr>0,{\displaystyle r>0,}pick an integerM>1{\displaystyle M>1}such that2−M<r{\displaystyle 2^{-M}<r}so thatx∈UM{\displaystyle x\in U_{M}}impliesf(x)≤2−M<r.{\displaystyle f(x)\leq 2^{-M}<r.}If the set of allUi{\displaystyle U_{i}}form basis of balanced neighborhoods of the origin then it may be shown that for anyn>1,{\displaystyle n>1,}there exists some0<r≤2−n{\displaystyle 0<r\leq 2^{-n}}such thatf(x)<r{\displaystyle f(x)<r}impliesx∈Un.{\displaystyle x\in U_{n}.}◼{\displaystyle \blacksquare } IfX{\displaystyle X}is a vector space over the real or complex numbers then aparanormonX{\displaystyle X}is a G-seminorm (defined above)p:X→R{\displaystyle p:X\rightarrow \mathbb {R} }onX{\displaystyle X}that satisfies any of the following additional conditions, each of which begins with "for all sequencesx∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}and all convergent sequences of scalarss∙=(si)i=1∞{\displaystyle s_{\bullet }=\left(s_{i}\right)_{i=1}^{\infty }}":[6] A paranorm is calledtotalif in addition it satisfies: Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then the mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}that defines avector topologyonX.{\displaystyle X.}[8] Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then: IfX{\displaystyle X}is a vector space over the real or complex numbers then anF-seminormonX{\displaystyle X}(theF{\displaystyle F}stands forFréchet) is a real-valued mapp:X→R{\displaystyle p:X\to \mathbb {R} }with the following four properties:[11] AnF-seminorm is called anF-normif in addition it satisfies: AnF-seminorm is calledmonotoneif it satisfies: AnF-seminormed space(resp.F-normed space)[12]is a pair(X,p){\displaystyle (X,p)}consisting of a vector 
spaceX{\displaystyle X}and anF-seminorm (resp.F-norm)p{\displaystyle p}onX.{\displaystyle X.} If(X,p){\displaystyle (X,p)}and(Z,q){\displaystyle (Z,q)}areF-seminormed spaces then a mapf:X→Z{\displaystyle f:X\to Z}is called anisometric embedding[12]ifq(f(x)−f(y))=p(x−y)for allx,y∈X.{\displaystyle q(f(x)-f(y))=p(x-y){\text{ for all }}x,y\in X.} Every isometric embedding of oneF-seminormed space into another is atopological embedding, but the converse is not true in general.[12] EveryF-seminorm is a paranorm and every paranorm is equivalent to someF-seminorm.[7]EveryF-seminorm on a vector spaceX{\displaystyle X}is a value onX.{\displaystyle X.}In particular,p(0)=0,{\displaystyle p(0)=0,}andp(x)=p(−x){\displaystyle p(x)=p(-x)}for allx∈X.{\displaystyle x\in X.} Theorem[11]—Letp{\displaystyle p}be anF-seminorm on a vector spaceX.{\displaystyle X.}Then the mapd:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation invariant pseudometric onX{\displaystyle X}that defines a vector topologyτ{\displaystyle \tau }onX.{\displaystyle X.}Ifp{\displaystyle p}is anF-norm thend{\displaystyle d}is a metric. WhenX{\displaystyle X}is endowed with this topology thenp{\displaystyle p}is a continuous map onX.{\displaystyle X.} The balanced sets{x∈X:p(x)≤r},{\displaystyle \{x\in X~:~p(x)\leq r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of closed sets. Similarly, the balanced sets{x∈X:p(x)<r},{\displaystyle \{x\in X~:~p(x)<r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of open sets. 
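For a concrete instance of these definitions, the map p(x) = |x| / (1 + |x|) is an F-norm on ℝ (this particular choice is our illustration, not taken from the text): it is subadditive, satisfies p(sx) ≤ p(x) for |s| ≤ 1, vanishes only at 0, and the induced translation-invariant metric is the familiar bounded metric on ℝ.

```python
def p(x):
    """An F-norm on R: subadditive, balanced, and zero only at the origin."""
    return abs(x) / (1.0 + abs(x))

def d(x, y):
    """The translation-invariant metric d(x, y) = p(x - y) induced by p."""
    return p(x - y)

grid = [i / 2 for i in range(-10, 11)]

# Subadditivity holds because t -> t/(1+t) is increasing, concave, and 0 at 0.
subadditive = all(p(a + b) <= p(a) + p(b) + 1e-12 for a in grid for b in grid)

# |s| <= 1 implies |s*x| <= |x|, hence p(s*x) <= p(x).
balanced = all(p(s * x) <= p(x) + 1e-12 for s in [-1, -0.5, 0, 0.5, 1] for x in grid)
```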
Suppose thatL{\displaystyle {\mathcal {L}}}is a non-empty collection ofF-seminorms on a vector spaceX{\displaystyle X}and for any finite subsetF⊆L{\displaystyle {\mathcal {F}}\subseteq {\mathcal {L}}}and anyr>0,{\displaystyle r>0,}letUF,r:=⋂p∈F{x∈X:p(x)<r}.{\displaystyle U_{{\mathcal {F}},r}:=\bigcap _{p\in {\mathcal {F}}}\{x\in X:p(x)<r\}.} The set{UF,r:r>0,F⊆L,Ffinite}{\displaystyle \left\{U_{{\mathcal {F}},r}~:~r>0,{\mathcal {F}}\subseteq {\mathcal {L}},{\mathcal {F}}{\text{ finite }}\right\}}forms a filter base onX{\displaystyle X}that also forms a neighborhood basis at the origin for a vector topology onX{\displaystyle X}denoted byτL.{\displaystyle \tau _{\mathcal {L}}.}[12]EachUF,r{\displaystyle U_{{\mathcal {F}},r}}is abalancedandabsorbingsubset ofX.{\displaystyle X.}[12]These sets satisfy[12]UF,r/2+UF,r/2⊆UF,r.{\displaystyle U_{{\mathcal {F}},r/2}+U_{{\mathcal {F}},r/2}\subseteq U_{{\mathcal {F}},r}.} Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negative subadditive functions on a vector spaceX.{\displaystyle X.} TheFréchet combination[8]ofp∙{\displaystyle p_{\bullet }}is defined to be the real-valued mapp(x):=∑i=1∞pi(x)2i[1+pi(x)].{\displaystyle p(x):=\sum _{i=1}^{\infty }{\frac {p_{i}(x)}{2^{i}\left[1+p_{i}(x)\right]}}.} Assume thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is an increasing sequence of seminorms onX{\displaystyle X}and letp{\displaystyle p}be the Fréchet combination ofp∙.{\displaystyle p_{\bullet }.}Thenp{\displaystyle p}is anF-seminorm onX{\displaystyle X}that induces the same locally convex topology as the familyp∙{\displaystyle p_{\bullet }}of seminorms.[13] Sincep∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is increasing, a basis of open neighborhoods of the origin consists of all sets of the form{x∈X:pi(x)<r}{\displaystyle \left\{x\in X~:~p_{i}(x)<r\right\}}asi{\displaystyle i}ranges over all positive integers 
andr>0{\displaystyle r>0}ranges over all positive real numbers. Thetranslation invariantpseudometriconX{\displaystyle X}induced by thisF-seminormp{\displaystyle p}isd(x,y)=∑i=1∞12ipi(x−y)1+pi(x−y).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {p_{i}(x-y)}{1+p_{i}(x-y)}}.} This metric was discovered byFréchetin his 1906 thesis for the spaces of real and complex sequences with pointwise operations.[14] If eachpi{\displaystyle p_{i}}is a paranorm then so isp{\displaystyle p}and moreover,p{\displaystyle p}induces the same topology onX{\displaystyle X}as the familyp∙{\displaystyle p_{\bullet }}of paranorms.[8]This is also true of the following paranorms onX{\displaystyle X}: The Fréchet combination can be generalized by use of a bounded remetrization function. Abounded remetrization function[15]is a continuous non-negative non-decreasing mapR:[0,∞)→[0,∞){\displaystyle R:[0,\infty )\to [0,\infty )}that has a bounded range, issubadditive(meaning thatR(s+t)≤R(s)+R(t){\displaystyle R(s+t)\leq R(s)+R(t)}for alls,t≥0{\displaystyle s,t\geq 0}), and satisfiesR(s)=0{\displaystyle R(s)=0}if and only ifs=0.{\displaystyle s=0.} Examples of bounded remetrization functions includearctan⁡t,{\displaystyle \arctan t,}tanh⁡t,{\displaystyle \tanh t,}t↦min{t,1},{\displaystyle t\mapsto \min\{t,1\},}andt↦t1+t.{\displaystyle t\mapsto {\frac {t}{1+t}}.}[15]Ifd{\displaystyle d}is a pseudometric (respectively, metric) onX{\displaystyle X}andR{\displaystyle R}is a bounded remetrization function thenR∘d{\displaystyle R\circ d}is a bounded pseudometric (respectively, bounded metric) onX{\displaystyle X}that is uniformly equivalent tod.{\displaystyle d.}[15] Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negativeF-seminorm on a vector spaceX,{\displaystyle X,}R{\displaystyle R}is a bounded remetrization function, andr∙=(ri)i=1∞{\displaystyle r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }}is a sequence of positive real 
numbers whose sum is finite. Thenp(x):=∑i=1∞riR(pi(x)){\displaystyle p(x):=\sum _{i=1}^{\infty }r_{i}R\left(p_{i}(x)\right)}defines a boundedF-seminorm that is uniformly equivalent to thep∙.{\displaystyle p_{\bullet }.}[16]It has the property that for any netx∙=(xa)a∈A{\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}inX,{\displaystyle X,}p(x∙)→0{\displaystyle p\left(x_{\bullet }\right)\to 0}if and only ifpi(x∙)→0{\displaystyle p_{i}\left(x_{\bullet }\right)\to 0}for alli.{\displaystyle i.}[16]p{\displaystyle p}is anF-norm if and only if thep∙{\displaystyle p_{\bullet }}separate points onX.{\displaystyle X.}[16] A pseudometric (resp. metric)d{\displaystyle d}is induced by a seminorm (resp. norm) on a vector spaceX{\displaystyle X}if and only ifd{\displaystyle d}is translation invariant andabsolutely homogeneous, which means that for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function defined byp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a seminorm (resp. norm) and the pseudometric (resp. metric) induced byp{\displaystyle p}is equal tod.{\displaystyle d.} If(X,τ){\displaystyle (X,\tau )}is atopological vector space(TVS) (where note in particular thatτ{\displaystyle \tau }is assumed to be a vector topology) then the following are equivalent:[11] If(X,τ){\displaystyle (X,\tau )}is a TVS then the following are equivalent: Birkhoff–Kakutani theorem—If(X,τ){\displaystyle (X,\tau )}is a topological vector space then the following three conditions are equivalent:[17][note 1] By the Birkhoff–Kakutani theorem, it follows that there is anequivalent metricthat is translation-invariant. 
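The Fréchet combination above can be made concrete for finite real sequences with the coordinate seminorms p_i(x) = |x_i|. This is an illustrative choice of seminorms, and the truncation at `n_terms` is purely computational; the series converges because each term is at most 2^(−i).

```python
def frechet_metric(x, y, n_terms=50):
    """d(x, y) = sum_i 2^-i * p_i(x-y) / (1 + p_i(x-y)) with p_i(x) = |x_i|."""
    total = 0.0
    for i in range(n_terms):
        xi = x[i] if i < len(x) else 0.0
        yi = y[i] if i < len(y) else 0.0
        t = abs(xi - yi)                 # p_i(x - y)
        total += (t / (1.0 + t)) / 2 ** (i + 1)
    return total
```

The metric is bounded by 1 and translation invariant; the triangle inequality follows because t ↦ t/(1+t) is subadditive and increasing.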
If(X,τ){\displaystyle (X,\tau )}is a TVS then the following are equivalent:[13] LetM{\displaystyle M}be a vector subspace of a topological vector space(X,τ).{\displaystyle (X,\tau ).} IfX{\displaystyle X}is a Hausdorff locally convex TVS thenX{\displaystyle X}with thestrong topology,(X,b(X,X′)),{\displaystyle \left(X,b\left(X,X^{\prime }\right)\right),}is metrizable if and only if there exists a countable setB{\displaystyle {\mathcal {B}}}of bounded subsets ofX{\displaystyle X}such that every bounded subset ofX{\displaystyle X}is contained in some element ofB.{\displaystyle {\mathcal {B}}.}[22] Thestrong dual spaceXb′{\displaystyle X_{b}^{\prime }}of a metrizable locally convex space (such as aFréchet space[23])X{\displaystyle X}is aDF-space.[24]The strong dual of a DF-space is aFréchet space.[25]The strong dual of areflexiveFréchet space is abornological space.[24]The strong bidual (that is, thestrong dual spaceof the strong dual space) of a metrizable locally convex space is a Fréchet space.[26]IfX{\displaystyle X}is a metrizable locally convex space then its strong dualXb′{\displaystyle X_{b}^{\prime }}has one of the following properties, if and only if it has all of these properties: (1)bornological, (2)infrabarreled, (3)barreled.[26] A topological vector space isseminormableif and only if it has aconvexbounded neighborhood of the origin. Moreover, a TVS isnormableif and only if it isHausdorffand seminormable.[14]Every metrizable TVS on a finite-dimensionalvector space is a normablelocally convexcomplete TVS, beingTVS-isomorphictoEuclidean space. Consequently, any metrizable TVS that isnotnormable must be infinite dimensional. 
IfM{\displaystyle M}is a metrizablelocally convex TVSthat possesses acountablefundamental system of bounded sets, thenM{\displaystyle M}is normable.[27] IfX{\displaystyle X}is a Hausdorfflocally convex spacethen the following are equivalent: and if this locally convex spaceX{\displaystyle X}is also metrizable, then the following may be appended to this list: In particular, if a metrizable locally convex spaceX{\displaystyle X}(such as aFréchet space) isnotnormable then itsstrong dual spaceXb′{\displaystyle X_{b}^{\prime }}is not aFréchet–Urysohn spaceand consequently, thiscompleteHausdorff locally convex spaceXb′{\displaystyle X_{b}^{\prime }}is also neither metrizable nor normable. Another consequence of this is that ifX{\displaystyle X}is areflexivelocally convexTVS whose strong dualXb′{\displaystyle X_{b}^{\prime }}is metrizable thenXb′{\displaystyle X_{b}^{\prime }}is necessarily a reflexive Fréchet space,X{\displaystyle X}is aDF-space, bothX{\displaystyle X}andXb′{\displaystyle X_{b}^{\prime }}are necessarilycompleteHausdorffultrabornologicaldistinguishedwebbed spaces, and moreover,Xb′{\displaystyle X_{b}^{\prime }}is normable if and only ifX{\displaystyle X}is normable if and only ifX{\displaystyle X}is Fréchet–Urysohn if and only ifX{\displaystyle X}is metrizable. In particular, such a spaceX{\displaystyle X}is either aBanach spaceor else it is not even a Fréchet–Urysohn space. 
Suppose that(X,d){\displaystyle (X,d)}is a pseudometric space andB⊆X.{\displaystyle B\subseteq X.}The setB{\displaystyle B}ismetrically boundedord{\displaystyle d}-boundedif there exists a real numberR>0{\displaystyle R>0}such thatd(x,y)≤R{\displaystyle d(x,y)\leq R}for allx,y∈B{\displaystyle x,y\in B}; the smallest suchR{\displaystyle R}is then called thediameterord{\displaystyle d}-diameterofB.{\displaystyle B.}[14]IfB{\displaystyle B}isboundedin a pseudometrizable TVSX{\displaystyle X}then it is metrically bounded; the converse is in general false but it is true forlocally convexmetrizable TVSs.[14] Theorem[29]—All infinite-dimensionalseparablecomplete metrizable TVSs arehomeomorphic. Everytopological vector space(and more generally, atopological group) has a canonicaluniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it. IfX{\displaystyle X}is a metrizable TVS andd{\displaystyle d}is a metric that definesX{\displaystyle X}'s topology, then it is possible thatX{\displaystyle X}is complete as a TVS (i.e. relative to its uniformity) but the metricd{\displaystyle d}isnotacomplete metric(such metrics exist even forX=R{\displaystyle X=\mathbb {R} }). Thus, ifX{\displaystyle X}is a TVS whose topology is induced by a pseudometricd,{\displaystyle d,}then the notion of completeness ofX{\displaystyle X}(as a TVS) and the notion of completeness of the pseudometric space(X,d){\displaystyle (X,d)}are not always equivalent. 
The next theorem gives a condition for when they are equivalent: Theorem—IfX{\displaystyle X}is a pseudometrizable TVS whose topology is induced by atranslation invariantpseudometricd,{\displaystyle d,}thend{\displaystyle d}is a complete pseudometric onX{\displaystyle X}if and only ifX{\displaystyle X}is complete as a TVS.[36] Theorem[37][38](Klee)—Letd{\displaystyle d}beany[note 2]metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is a complete TVS. Theorem—IfX{\displaystyle X}is a TVS whose topology is induced by a paranormp,{\displaystyle p,}thenX{\displaystyle X}is complete if and only if for every sequence(xi)i=1∞{\displaystyle \left(x_{i}\right)_{i=1}^{\infty }}inX,{\displaystyle X,}if∑i=1∞p(xi)<∞{\displaystyle \sum _{i=1}^{\infty }p\left(x_{i}\right)<\infty }then∑i=1∞xi{\displaystyle \sum _{i=1}^{\infty }x_{i}}converges inX.{\displaystyle X.}[39] IfM{\displaystyle M}is a closed vector subspace of a complete pseudometrizable TVSX,{\displaystyle X,}then the quotient spaceX/M{\displaystyle X/M}is complete.[40]IfM{\displaystyle M}is acompletevector subspace of a metrizable TVSX{\displaystyle X}and if the quotient spaceX/M{\displaystyle X/M}is complete then so isX.{\displaystyle X.}[40]IfX{\displaystyle X}is not complete thenM:=X{\displaystyle M:=X}is a closed, but not complete, vector subspace ofX.{\displaystyle X.} ABaireseparabletopological groupis metrizable if and only if it is cosmic.[23] Banach-Saks theorem[45]—If(xn)n=1∞{\displaystyle \left(x_{n}\right)_{n=1}^{\infty }}is a sequence in alocally convexmetrizable TVS(X,τ){\displaystyle (X,\tau )}that convergesweaklyto somex∈X,{\displaystyle x\in X,}then there exists a sequencey∙=(yi)i=1∞{\displaystyle y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}such 
thaty∙→x{\displaystyle y_{\bullet }\to x}in(X,τ){\displaystyle (X,\tau )}and eachyi{\displaystyle y_{i}}is a convex combination of finitely manyxn.{\displaystyle x_{n}.} Mackey's countability condition[14]—Suppose thatX{\displaystyle X}is a locally convex metrizable TVS and that(Bi)i=1∞{\displaystyle \left(B_{i}\right)_{i=1}^{\infty }}is a countable sequence of bounded subsets ofX.{\displaystyle X.}Then there exists a bounded subsetB{\displaystyle B}ofX{\displaystyle X}and a sequence(ri)i=1∞{\displaystyle \left(r_{i}\right)_{i=1}^{\infty }}of positive real numbers such thatBi⊆riB{\displaystyle B_{i}\subseteq r_{i}B}for alli.{\displaystyle i.} Generalized series As describedin this article's section on generalized series, for anyI{\displaystyle I}-indexed family(ri)i∈I{\displaystyle \left(r_{i}\right)_{i\in I}}of vectors from a TVSX,{\displaystyle X,}it is possible to define their sum∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}as the limit of thenetof finite partial sumsF∈FiniteSubsets⁡(I)↦∑i∈Fri{\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}}where the domainFiniteSubsets⁡(I){\displaystyle \operatorname {FiniteSubsets} (I)}isdirectedby⊆.{\displaystyle \,\subseteq .\,}IfI=N{\displaystyle I=\mathbb {N} }andX=R,{\displaystyle X=\mathbb {R} ,}for instance, then the generalized series∑i∈Nri{\displaystyle \textstyle \sum \limits _{i\in \mathbb {N} }r_{i}}converges if and only if∑i=1∞ri{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}}converges unconditionallyin the usual sense (which, for real numbers,is equivalenttoabsolute convergence). 
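As a toy illustration of the net of finite partial sums, the following enumerates every ordering of a small finite index family and confirms they all reach the same total; for an absolutely convergent family the generalized sum is independent of the enumeration order. The family below is made up for the example.

```python
from itertools import permutations

# A finite I-indexed family of real numbers (I = {0, 1, 2, 3}).
family = {0: 1.0, 1: 0.5, 2: 0.25, 3: 0.125}

def partial_sum(indices):
    """Finite partial sum over a finite subset (here given as a sequence) of I."""
    return sum(family[i] for i in indices)

# Every enumeration order of the full index set gives the same sum,
# mirroring order-independence of the generalized (unconditional) sum.
sums = {round(partial_sum(order), 12) for order in permutations(family)}
```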
If a generalized series∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}converges in a metrizable TVS, then the set{i∈I:ri≠0}{\displaystyle \left\{i\in I:r_{i}\neq 0\right\}}is necessarilycountable(that is, either finite orcountably infinite);[proof 1]in other words, all but at most countably manyri{\displaystyle r_{i}}will be zero and so this generalized series∑i∈Iri=∑ri≠0i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}}is actually a sum of at most countably many non-zero terms. IfA:X→Y{\displaystyle A:X\to Y}is a linear map from a pseudometrizable TVSX{\displaystyle X}into a TVSY{\displaystyle Y}that maps bounded subsets ofX{\displaystyle X}to bounded subsets ofY,{\displaystyle Y,}thenA{\displaystyle A}is continuous.[14]Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS.[46]Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to itsalgebraic dual space.[46] IfF:X→Y{\displaystyle F:X\to Y}is a linear map between TVSs andX{\displaystyle X}is metrizable then the following are equivalent: Open and almost open maps A vector subspaceM{\displaystyle M}of a TVSX{\displaystyle X}hasthe extension propertyif any continuous linear functional onM{\displaystyle M}can be extended to a continuous linear functional onX.{\displaystyle X.}[22]Say that a TVSX{\displaystyle X}has theHahn-Banachextension property(HBEP) if every vector subspace ofX{\displaystyle X}has the extension property.[22] TheHahn-Banach theoremguarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable TVSs there is a converse: Theorem(Kalton)—Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.[22] If a vector spaceX{\displaystyle X}has uncountable dimension and if we endow it with thefinest vector topologythen this is a TVS with the HBEP that is neither locally convex nor metrizable.[22]
https://en.wikipedia.org/wiki/F-seminorm
Inmathematics, in the field ofadditive combinatorics, aGowers normoruniformity normis a class ofnormsonfunctionson a finitegroupor group-like object which quantify the amount of structure present, or conversely, the amount ofrandomness.[1]They are used in the study ofarithmetic progressionsin the group. They are named afterTimothy Gowers, who introduced them in his work onSzemerédi's theorem.[2] Letf{\displaystyle f}be acomplex-valued function on a finiteabelian groupG{\displaystyle G}and letJ{\displaystyle J}denotecomplex conjugation. The Gowersd{\displaystyle d}-norm is given by‖f‖Ud(G)2d=Ex,h1,…,hd∈G∏ω1,…,ωd∈{0,1}Jω1+⋯+ωdf(x+ω1h1+⋯+ωdhd).{\displaystyle \Vert f\Vert _{U^{d}(G)}^{2^{d}}=\mathbb {E} _{x,h_{1},\ldots ,h_{d}\in G}\prod _{\omega _{1},\ldots ,\omega _{d}\in \{0,1\}}J^{\omega _{1}+\cdots +\omega _{d}}f\left(x+\omega _{1}h_{1}+\cdots +\omega _{d}h_{d}\right).} Gowers norms are also defined for complex-valued functionsfon a segment[N]={0,1,2,...,N−1}{\displaystyle [N]=\{0,1,2,...,N-1\}}, whereNis a positiveinteger. In this context, the uniformity norm is given as‖f‖Ud[N]=‖f~‖Ud(Z/N~Z)/‖1[N]‖Ud(Z/N~Z){\displaystyle \Vert f\Vert _{U^{d}[N]}=\Vert {\tilde {f}}\Vert _{U^{d}(\mathbb {Z} /{\tilde {N}}\mathbb {Z} )}/\Vert 1_{[N]}\Vert _{U^{d}(\mathbb {Z} /{\tilde {N}}\mathbb {Z} )}}, whereN~{\displaystyle {\tilde {N}}}is a large integer,1[N]{\displaystyle 1_{[N]}}denotes theindicator functionof [N], andf~(x){\displaystyle {\tilde {f}}(x)}is equal tof(x){\displaystyle f(x)}forx∈[N]{\displaystyle x\in [N]}and0{\displaystyle 0}for all otherx{\displaystyle x}. This definition does not depend onN~{\displaystyle {\tilde {N}}}, as long asN~>2dN{\displaystyle {\tilde {N}}>2^{d}N}. Aninverse conjecturefor these norms is a statement asserting that if abounded functionfhas a large Gowersd-norm thenfcorrelates with a polynomial phase of degreed− 1 or other object with polynomial behaviour (e.g. a (d− 1)-stepnilsequence). The precise statement depends on the Gowers norm under consideration.
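For concreteness, the d = 2 case on G = Z/NZ can be computed directly. The sketch below assumes the standard expectation-normalized convention (averaging over all parallelepipeds x, x + h₁, x + h₂, x + h₁ + h₂) and cross-checks it against the classical Fourier identity ‖f‖⁴_{U²} = Σ_ξ |f̂(ξ)|⁴; the function names are illustrative.

```python
import numpy as np

def gowers_u2(f):
    """U^2-norm of f : Z/NZ -> C by direct averaging over all
    2-dimensional parallelepipeds (expectation-normalized convention)."""
    N = len(f)
    f = np.asarray(f, dtype=complex)
    total = 0j
    for x in range(N):
        for h1 in range(N):
            for h2 in range(N):
                total += (f[x]
                          * np.conj(f[(x + h1) % N])
                          * np.conj(f[(x + h2) % N])
                          * f[(x + h1 + h2) % N])
    return abs(total / N**3) ** 0.25

def gowers_u2_fourier(f):
    """Same norm via the identity ||f||_{U^2}^4 = sum_k |f_hat(k)|^4,
    with f_hat(k) = E_x f(x) e(-kx/N)."""
    fhat = np.fft.fft(np.asarray(f, dtype=complex)) / len(f)
    return float(np.sum(np.abs(fhat) ** 4)) ** 0.25

# The constant function 1 has U^2-norm exactly 1; a random +-1 function
# typically has a much smaller norm, quantifying its lack of structure.
rng = np.random.default_rng(0)
f_rand = rng.choice([-1.0, 1.0], size=16)
u2_const = gowers_u2(np.ones(16))
u2_rand = gowers_u2(f_rand)
```

Both evaluations agree to machine precision, in line with the heuristic that a small Gowers norm certifies "random-looking" behaviour.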
The Inverse Conjecture forvector spacesover afinite fieldF{\displaystyle \mathbb {F} }asserts that for anyδ>0{\displaystyle \delta >0}there exists a constantc>0{\displaystyle c>0}such that for anyfinite-dimensionalvector spaceVoverF{\displaystyle \mathbb {F} }and any complex-valued functionf{\displaystyle f}onV{\displaystyle V}, bounded by 1, such that‖f‖Ud[V]≥δ{\displaystyle \Vert f\Vert _{U^{d}[V]}\geq \delta }, there exists a polynomial sequenceP:V→R/Z{\displaystyle P\colon V\to \mathbb {R} /\mathbb {Z} }such that|Ex∈Vf(x)e(P(x))|≥c,{\displaystyle \left|\mathbb {E} _{x\in V}f(x)e\left(P(x)\right)\right|\geq c,}wheree(x):=e2πix{\displaystyle e(x):=e^{2\pi ix}}. This conjecture was proved to be true by Bergelson, Tao, and Ziegler.[3][4][5] The Inverse Conjecture for GowersUd[N]{\displaystyle U^{d}[N]}norm asserts that for anyδ>0{\displaystyle \delta >0}, a finite collection of (d− 1)-stepnilmanifoldsMδ{\displaystyle {\mathcal {M}}_{\delta }}and constantsc,C{\displaystyle c,C}can be found, so that the following is true. IfN{\displaystyle N}is a positive integer andf:[N]→C{\displaystyle f\colon [N]\to \mathbb {C} }is bounded in absolute value by 1 and‖f‖Ud[N]≥δ{\displaystyle \Vert f\Vert _{U^{d}[N]}\geq \delta }, then there exists a nilmanifoldG/Γ∈Mδ{\displaystyle G/\Gamma \in {\mathcal {M}}_{\delta }}and anilsequenceF(gnx){\displaystyle F(g^{n}x)}whereg∈G,x∈G/Γ{\displaystyle g\in G,\ x\in G/\Gamma }andF:G/Γ→C{\displaystyle F\colon G/\Gamma \to \mathbb {C} }bounded by 1 in absolute value and with Lipschitz constant bounded byC{\displaystyle C}such that|En∈[N]f(n)F(gnx)¯|≥c.{\displaystyle \left|\mathbb {E} _{n\in [N]}f(n){\overline {F(g^{n}x)}}\right|\geq c.} This conjecture was proved to be true by Green, Tao, and Ziegler.[6][7]It should be stressed that the appearance of nilsequences in the above statement is necessary. The statement is no longer true if we only consider polynomial phases.
https://en.wikipedia.org/wiki/Gowers_norm
Inmathematics, in the areas oftopologyandfunctional analysis, theAnderson–Kadec theoremstates[1]that any twoinfinite-dimensional,separableBanach spaces, or, more generally,Fréchet spaces, arehomeomorphicas topological spaces. The theorem was proved byMikhail Kadec(1966) andRichard Davis Anderson. Every infinite-dimensional, separable Fréchet space is homeomorphic toRN,{\displaystyle \mathbb {R} ^{\mathbb {N} },}theCartesian productofcountably manycopies of the real lineR.{\displaystyle \mathbb {R} .} Kadec norm:Anorm‖⋅‖{\displaystyle \|\,\cdot \,\|}on anormed linearspaceX{\displaystyle X}is called aKadec normwith respect to atotal subsetA⊆X∗{\displaystyle A\subseteq X^{*}}of the dual spaceX∗{\displaystyle X^{*}}if for each sequencexn∈X{\displaystyle x_{n}\in X}the following condition is satisfied: ifx∗(xn)→x∗(x){\displaystyle x^{*}\left(x_{n}\right)\to x^{*}(x)}for everyx∗∈A{\displaystyle x^{*}\in A}and‖xn‖→‖x‖,{\displaystyle \left\|x_{n}\right\|\to \|x\|,}then‖xn−x‖→0.{\displaystyle \left\|x_{n}-x\right\|\to 0.} Eidelheit theorem:A Fréchet spaceE{\displaystyle E}is either isomorphic to a Banach space, or has a quotient space isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} Kadec renorming theorem:Every separable Banach spaceX{\displaystyle X}admits a Kadec norm with respect to a countable total subsetA⊆X∗{\displaystyle A\subseteq X^{*}}ofX∗.{\displaystyle X^{*}.}The new norm is equivalent to the original norm‖⋅‖{\displaystyle \|\,\cdot \,\|}ofX.{\displaystyle X.}The setA{\displaystyle A}can be taken to be any weak-star dense countable subset of the unit ball ofX∗.{\displaystyle X^{*}.} In the argument belowE{\displaystyle E}denotes an infinite-dimensional separable Fréchet space and≃{\displaystyle \simeq }the relation of topological equivalence (existence of homeomorphism). A starting point of the proof of the Anderson–Kadec theorem is Kadec's proof that any infinite-dimensional separable Banach space is homeomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} From the Eidelheit theorem, it is enough to consider Fréchet spaces that are not isomorphic to a Banach space.
In that case the space has a quotient isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.}A result of Bartle-Graves-Michael proves that thenE≃Y×RN{\displaystyle E\simeq Y\times \mathbb {R} ^{\mathbb {N} }}for some Fréchet spaceY.{\displaystyle Y.} On the other hand,E{\displaystyle E}is a closed subspace of a countable infinite productX=∏n=1∞Xn{\textstyle X=\prod _{n=1}^{\infty }X_{n}}of separable Banach spaces. The same result of Bartle-Graves-Michael applied toX{\displaystyle X}gives a homeomorphismX≃E×Z{\displaystyle X\simeq E\times Z}for some Fréchet spaceZ.{\displaystyle Z.}From Kadec's result the countable product of infinite-dimensional separable Banach spacesX{\displaystyle X}is homeomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} The proof of the Anderson–Kadec theorem consists of the sequence of equivalencesRN≃(E×Z)N≃EN×ZN≃E×EN×ZN≃E×RN≃Y×RN×RN≃Y×RN≃E{\displaystyle {\begin{aligned}\mathbb {R} ^{\mathbb {N} }&\simeq (E\times Z)^{\mathbb {N} }\\&\simeq E^{\mathbb {N} }\times Z^{\mathbb {N} }\\&\simeq E\times E^{\mathbb {N} }\times Z^{\mathbb {N} }\\&\simeq E\times \mathbb {R} ^{\mathbb {N} }\\&\simeq Y\times \mathbb {R} ^{\mathbb {N} }\times \mathbb {R} ^{\mathbb {N} }\\&\simeq Y\times \mathbb {R} ^{\mathbb {N} }\\&\simeq E\end{aligned}}}
https://en.wikipedia.org/wiki/Kadec_norm
Inmathematics, themagnitudeorsizeof amathematical objectis a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of anordering(or ranking) of theclassof objects to which it belongs. Magnitude as a concept dates toAncient Greeceand has been applied as ameasureof distance from one object to another. For numbers, theabsolute valueof a number is commonly applied as the measure of units between a number and zero. In vector spaces, theEuclidean normis a measure of magnitude used to define a distance between two points in space. Inphysics, magnitude can be defined as quantity or distance. Anorder of magnitudeis typically defined as a unit of distance between one number and another's numerical places on the decimal scale. Ancient Greeksdistinguished between several types of magnitude,[1]including: positive fractions; line segments (ordered by length); plane figures (ordered by area); solids (ordered by volume); and angles (ordered by angular magnitude). They proved that the first two could not be the same, or evenisomorphicsystems of magnitude.[2]They did not considernegativemagnitudes to be meaningful, andmagnitudeis still primarily used in contexts in whichzerois either the smallest size or less than all possible sizes. The magnitude of anynumberx{\displaystyle x}is usually called itsabsolute valueormodulus, denoted by|x|{\displaystyle |x|}.[3] The absolute value of areal numberris defined by:[4]|r|={r,ifr≥0−r,ifr<0.{\displaystyle |r|={\begin{cases}r,&{\text{if }}r\geq 0\\-r,&{\text{if }}r<0.\end{cases}}} Absolute value may also be thought of as the number'sdistancefrom zero on the realnumber line. For example, the absolute value of both 70 and −70 is 70. Acomplex numberzmay be viewed as the position of a pointPin a2-dimensional space, called thecomplex plane. The absolute value (ormodulus) ofzmay be thought of as the distance ofPfrom the origin of that space. The formula for the absolute value ofz=a+biis similar to that for theEuclidean normof a vector in a 2-dimensionalEuclidean space:[5]|z|=a2+b2,{\displaystyle |z|={\sqrt {a^{2}+b^{2}}},} where the real numbersaandbare thereal partand theimaginary partofz, respectively.
For instance, the modulus of−3 + 4iis(−3)2+42=5{\displaystyle {\sqrt {(-3)^{2}+4^{2}}}=5}. Alternatively, the magnitude of a complex numberzmay be defined as the square root of the product of itself and itscomplex conjugate,z¯{\displaystyle {\bar {z}}}, where for any complex numberz=a+bi{\displaystyle z=a+bi}(withi2=−1{\displaystyle i^{2}=-1}), its complex conjugate isz¯=a−bi{\displaystyle {\bar {z}}=a-bi}. AEuclidean vectorrepresents the position of a pointPin aEuclidean space. Geometrically, it can be described as an arrow from the origin of the space (vector tail) to that point (vector tip). Mathematically, a vectorxin ann-dimensional Euclidean space can be defined as an ordered list ofnreal numbers (theCartesian coordinatesofP):x= [x1,x2, ...,xn]. Itsmagnitudeorlength, denoted by‖x‖{\displaystyle \|x\|},[6]is most commonly defined as itsEuclidean norm(or Euclidean length):[7]‖x‖=x12+x22+⋯+xn2.{\displaystyle \|x\|={\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}.} For instance, in a 3-dimensional space, the magnitude of [3, 4, 12] is 13 because32+42+122=169=13.{\displaystyle {\sqrt {3^{2}+4^{2}+12^{2}}}={\sqrt {169}}=13.}This is equivalent to thesquare rootof thedot productof the vector with itself:‖x‖=x⋅x.{\displaystyle \|x\|={\sqrt {x\cdot x}}.} The Euclidean norm of a vector is just a special case ofEuclidean distance: the distance between its tail and its tip. Two similar notations are used for the Euclidean norm of a vectorx:‖x‖{\displaystyle \|x\|}and|x|.{\displaystyle |x|.} A disadvantage of the second notation is that it can also be used to denote theabsolute valueofscalarsand thedeterminantsofmatrices, which introduces an element of ambiguity. By definition, all Euclidean vectors have a magnitude (see above). However, a vector in an abstractvector spacedoes not possess a magnitude. Avector spaceendowed with anorm, such as the Euclidean space, is called anormed vector space.[8]The norm of a vectorvin a normed vector space can be considered to be the magnitude ofv. In apseudo-Euclidean space, the magnitude of a vector is the value of thequadratic formfor that vector. When comparing magnitudes, alogarithmic scaleis often used.
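These magnitudes are easy to confirm in Python, whose built-in `abs` implements the absolute value for real numbers and the modulus for complex numbers (a quick check of the worked values above):

```python
import math

# Absolute value of a real number: distance from zero.
assert abs(-70) == abs(70) == 70

# Modulus of the complex number -3 + 4i is sqrt((-3)^2 + 4^2) = 5.
z = complex(-3, 4)
assert math.isclose(abs(z), 5.0)
# Equivalently the square root of z times its conjugate (a real number).
assert math.isclose(math.sqrt((z * z.conjugate()).real), 5.0)

# Euclidean norm of the vector [3, 4, 12] is sqrt(169) = 13.
v = [3, 4, 12]
norm_v = math.sqrt(sum(x * x for x in v))
assert math.isclose(norm_v, 13.0)
```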
Examples include theloudnessof asound(measured indecibels), thebrightnessof astar, and theRichter scaleof earthquake intensity. Logarithmic magnitudes can be negative. In thenatural sciences, a logarithmic magnitude is typically referred to as alevel. Orders of magnitude denote differences in numeric quantities, usually measurements, by a factor of 10—that is, a difference of one digit in the location of the decimal point. Inmathematics, the concept of ameasureis a generalization and formalization ofgeometrical measures(length,area,volume) and other common notions, such as magnitude,mass, andprobabilityof events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational inprobability theory,integration theory, and can be generalized to assumenegative values, as withelectrical charge. Far-reaching generalizations (such asspectral measuresandprojection-valued measures) of measure are widely used inquantum physicsand physics in general.
https://en.wikipedia.org/wiki/Magnitude_(mathematics)
In the field ofmathematics,normsare defined for elements within avector space. Specifically, when the vector space comprises matrices, such norms are referred to asmatrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication. Given afieldK{\displaystyle \ K\ }of eitherrealorcomplex numbers(or any complete subset thereof), letKm×n{\displaystyle \ K^{m\times n}\ }be theK-vector spaceof matrices withm{\displaystyle m}rows andn{\displaystyle n}columns and entries in the fieldK.{\displaystyle \ K~.}A matrix norm is anormonKm×n.{\displaystyle \ K^{m\times n}~.} Norms are often expressed withdouble vertical bars(like so:‖A‖{\displaystyle \ \|A\|\ }). Thus, the matrix norm is afunction‖⋅‖:Km×n→R0+{\displaystyle \ \|\cdot \|:K^{m\times n}\to \mathbb {R} ^{0+}\ }that must satisfy the following properties:[1][2] For all scalarsα∈K{\displaystyle \ \alpha \in K\ }and matricesA,B∈Km×n,{\displaystyle \ A,B\in K^{m\times n}\ ,}‖A‖≥0{\displaystyle \ \|A\|\geq 0\ }with equality if and only ifA=0{\displaystyle \ A=0\ }(positive definiteness);‖αA‖=|α|‖A‖{\displaystyle \ \|\alpha A\|=|\alpha |\,\|A\|\ }(absolute homogeneity); and‖A+B‖≤‖A‖+‖B‖{\displaystyle \ \|A+B\|\leq \|A\|+\|B\|\ }(the triangle inequality). The only feature distinguishing matrices from rearranged vectors ismultiplication. Matrix norms are particularly useful if they are alsosub-multiplicative:[1][2][3]‖AB‖≤‖A‖‖B‖{\displaystyle \ \|AB\|\leq \|A\|\,\|B\|\ }whenever the productAB{\displaystyle AB}is defined. Every norm onKn×n{\displaystyle \ K^{n\times n}\ }can be rescaled to be sub-multiplicative; in some books, the terminologymatrix normis reserved for sub-multiplicative norms.[4] Suppose avector norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}onKn{\displaystyle K^{n}}and a vector norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}onKm{\displaystyle K^{m}}are given.
Anym×n{\displaystyle m\times n}matrixAinduces a linear operator fromKn{\displaystyle K^{n}}toKm{\displaystyle K^{m}}with respect to the standard basis, and one defines the correspondinginduced normoroperator normorsubordinate normon the spaceKm×n{\displaystyle K^{m\times n}}of allm×n{\displaystyle m\times n}matrices as follows:‖A‖α,β=sup{‖Ax‖β:x∈Knsuch that‖x‖α≤1}{\displaystyle \|A\|_{\alpha ,\beta }=\sup\{\|Ax\|_{\beta }:x\in K^{n}{\text{ such that }}\|x\|_{\alpha }\leq 1\}}wheresup{\displaystyle \sup }denotes thesupremum. This norm measures how much the mapping induced byA{\displaystyle A}can stretch vectors. Depending on the vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }},‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}used, notation other than‖⋅‖α,β{\displaystyle \|\cdot \|_{\alpha ,\beta }}can be used for the operator norm. If thep-norm for vectors(1≤p≤∞{\displaystyle 1\leq p\leq \infty }) is used for both spacesKn{\displaystyle K^{n}}andKm,{\displaystyle K^{m},}then the corresponding operator norm is:[2]‖A‖p=sup{‖Ax‖p:x∈Knsuch that‖x‖p≤1}.{\displaystyle \|A\|_{p}=\sup\{\|Ax\|_{p}:x\in K^{n}{\text{ such that }}\|x\|_{p}\leq 1\}.}These induced norms are different from the"entry-wise"p-norms and theSchattenp-normsfor matrices treated below, which are also usually denoted by‖A‖p.{\displaystyle \|A\|_{p}.} Geometrically speaking, one can imagine ap-norm unit ballVp,n={x∈Kn:‖x‖p≤1}{\displaystyle V_{p,n}=\{x\in K^{n}:\|x\|_{p}\leq 1\}}inKn{\displaystyle K^{n}}, then apply the linear mapA{\displaystyle A}to the ball. It would end up becoming a distorted convex shapeAVp,n⊂Km{\displaystyle AV_{p,n}\subset K^{m}}, and‖A‖p{\displaystyle \|A\|_{p}}measures the longest "radius" of the distorted convex shape. In other words, we must take ap-norm unit ballVp,m{\displaystyle V_{p,m}}inKm{\displaystyle K^{m}}, then multiply it by at least‖A‖p{\displaystyle \|A\|_{p}}, in order for it to be large enough to containAVp,n{\displaystyle AV_{p,n}}. 
Whenp=1,{\displaystyle \ p=1\ ,}orp=∞,{\displaystyle \ p=\infty \ ,}we have simple formulas:‖A‖1=max1≤j≤n∑i=1m|aij|,{\displaystyle \|A\|_{1}=\max _{1\leq j\leq n}\sum _{i=1}^{m}\left|a_{ij}\right|\ ,}which is simply the maximum absolute column sum of the matrix, and‖A‖∞=max1≤i≤m∑j=1n|aij|,{\displaystyle \|A\|_{\infty }=\max _{1\leq i\leq m}\sum _{j=1}^{n}\left|a_{ij}\right|\ ,}which is simply the maximum absolute row sum of the matrix. For example, forA=[−357264028],{\displaystyle A={\begin{bmatrix}-3&5&7\\~~2&6&4\\~~0&2&8\\\end{bmatrix}}\ ,}we have that‖A‖1=max{|−3|+2+0,5+6+2,7+4+8}=max{5,13,19}=19,{\displaystyle \|A\|_{1}=\max {\bigl \{}\ |{-3}|+2+0\ ,~5+6+2\ ,~7+4+8\ {\bigr \}}=\max {\bigl \{}\ 5\ ,~13\ ,~19\ {\bigr \}}=19\ ,}‖A‖∞=max{|−3|+5+7,2+6+4,0+2+8}=max{15,12,10}=15.{\displaystyle \|A\|_{\infty }=\max {\bigl \{}\ |{-3}|+5+7\ ,~2+6+4\ ,~0+2+8\ {\bigr \}}=\max {\bigl \{}\ 15\ ,~12\ ,~10\ {\bigr \}}=15~.} Whenp=2{\displaystyle p=2}(theEuclidean normorℓ2{\displaystyle \ell _{2}}-norm for vectors), the induced matrix norm is thespectral norm. The spectral norm of a matrixA{\displaystyle A}is the largestsingular valueofA{\displaystyle A}, i.e., the square root of the largesteigenvalueof the matrixA∗A,{\displaystyle A^{*}A,}whereA∗{\displaystyle A^{*}}denotes theconjugate transposeofA{\displaystyle A}:[5]‖A‖2=λmax(A∗A)=σmax(A).{\displaystyle \|A\|_{2}={\sqrt {\lambda _{\max }\left(A^{*}A\right)}}=\sigma _{\max }(A).}whereσmax(A){\displaystyle \sigma _{\max }(A)}represents the largest singular value of matrixA.{\displaystyle A.}The spectral norm should not be confused with thespectral radius; the two values need not coincide — seeSpectral radiusfor further discussion. There are further properties: We can generalize the above definition.
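The worked example can be verified with NumPy, whose `linalg.norm` implements exactly these induced norms for `ord=1`, `ord=inf`, and `ord=2` (the latter via the largest singular value):

```python
import numpy as np

A = np.array([[-3, 5, 7],
              [ 2, 6, 4],
              [ 0, 2, 8]])

# Induced norms from the column-sum and row-sum formulas.
col_sum = np.abs(A).sum(axis=0).max()   # max absolute column sum
row_sum = np.abs(A).sum(axis=1).max()   # max absolute row sum
assert col_sum == np.linalg.norm(A, 1) == 19
assert row_sum == np.linalg.norm(A, np.inf) == 15

# Spectral norm = largest singular value of A.
assert np.isclose(np.linalg.norm(A, 2),
                  np.linalg.svd(A, compute_uv=False)[0])

# Every induced norm dominates ||Ax||_p / ||x||_p for any x != 0.
x = np.array([1.0, -2.0, 3.0])
for p in (1, 2, np.inf):
    assert (np.linalg.norm(A @ x, p)
            <= np.linalg.norm(A, p) * np.linalg.norm(x, p) + 1e-12)
```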
Suppose we have vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}for spacesKn{\displaystyle K^{n}}andKm{\displaystyle K^{m}}respectively; the corresponding operator norm is‖A‖α,β=sup{‖Ax‖β:x∈Knsuch that‖x‖α≤1}{\displaystyle \|A\|_{\alpha ,\beta }=\sup\{\|Ax\|_{\beta }:x\in K^{n}{\text{ such that }}\|x\|_{\alpha }\leq 1\}}In particular, the‖A‖p{\displaystyle \|A\|_{p}}defined previously is the special case of‖A‖p,p{\displaystyle \|A\|_{p,p}}. In the special cases ofα=2{\displaystyle \alpha =2}andβ=∞{\displaystyle \beta =\infty }, the induced matrix norms can be computed by‖A‖2,∞=max1≤i≤m‖Ai:‖2,{\displaystyle \|A\|_{2,\infty }=\max _{1\leq i\leq m}\|A_{i:}\|_{2},}whereAi:{\displaystyle A_{i:}}is the i-th row of matrixA{\displaystyle A}. In the special cases ofα=1{\displaystyle \alpha =1}andβ=2{\displaystyle \beta =2}, the induced matrix norms can be computed by‖A‖1,2=max1≤j≤n‖A:j‖2,{\displaystyle \|A\|_{1,2}=\max _{1\leq j\leq n}\|A_{:j}\|_{2},}whereA:j{\displaystyle A_{:j}}is the j-th column of matrixA{\displaystyle A}. Hence,‖A‖2,∞{\displaystyle \|A\|_{2,\infty }}and‖A‖1,2{\displaystyle \|A\|_{1,2}}are the maximum row and column 2-norm of the matrix, respectively. Any operator norm isconsistentwith the vector norms that induce it, giving‖Ax‖β≤‖A‖α,β‖x‖α.{\displaystyle \|Ax\|_{\beta }\leq \|A\|_{\alpha ,\beta }\|x\|_{\alpha }.} Suppose‖⋅‖α,β{\displaystyle \|\cdot \|_{\alpha ,\beta }};‖⋅‖β,γ{\displaystyle \|\cdot \|_{\beta ,\gamma }}; and‖⋅‖α,γ{\displaystyle \|\cdot \|_{\alpha ,\gamma }}are operator norms induced by the respective pairs of vector norms(‖⋅‖α,‖⋅‖β){\displaystyle (\|\cdot \|_{\alpha },\|\cdot \|_{\beta })};(‖⋅‖β,‖⋅‖γ){\displaystyle (\|\cdot \|_{\beta },\|\cdot \|_{\gamma })}; and(‖⋅‖α,‖⋅‖γ){\displaystyle (\|\cdot \|_{\alpha },\|\cdot \|_{\gamma })}. 
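Both mixed-norm formulas can be checked numerically: ‖A‖_{2,∞} is attained at the normalized maximal row, and ‖A‖_{1,2} at the standard basis vector selecting the maximal column (which has unit 1-norm), while random unit vectors never exceed either supremum. The matrix below is an arbitrary random example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))

norm_2_inf = np.linalg.norm(A, axis=1).max()  # max row 2-norm
norm_1_2 = np.linalg.norm(A, axis=0).max()    # max column 2-norm

# ||A||_{2,inf} is attained at the normalized maximal row.
i = int(np.argmax(np.linalg.norm(A, axis=1)))
x_star = A[i] / np.linalg.norm(A[i])
assert np.isclose(np.linalg.norm(A @ x_star, np.inf), norm_2_inf)

# ||A||_{1,2} is attained at the basis vector picking the maximal column.
j = int(np.argmax(np.linalg.norm(A, axis=0)))
e_j = np.eye(6)[j]
assert np.isclose(np.linalg.norm(A @ e_j), norm_1_2)

# Random unit vectors (in 2-norm and 1-norm respectively) never beat them.
for _ in range(1000):
    x = rng.standard_normal(6)
    assert np.linalg.norm(A @ (x / np.linalg.norm(x)), np.inf) <= norm_2_inf + 1e-12
    assert np.linalg.norm(A @ (x / np.abs(x).sum())) <= norm_1_2 + 1e-12
```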
Then,‖AB‖α,γ≤‖A‖β,γ‖B‖α,β;{\displaystyle \|AB\|_{\alpha ,\gamma }\leq \|A\|_{\beta ,\gamma }\,\|B\|_{\alpha ,\beta };}this follows from‖ABx‖γ≤‖A‖β,γ‖Bx‖β≤‖A‖β,γ‖B‖α,β‖x‖α{\displaystyle \|ABx\|_{\gamma }\leq \|A\|_{\beta ,\gamma }\|Bx\|_{\beta }\leq \|A\|_{\beta ,\gamma }\|B\|_{\alpha ,\beta }\|x\|_{\alpha }}andsup‖x‖α=1‖ABx‖γ=‖AB‖α,γ.{\displaystyle \sup _{\|x\|_{\alpha }=1}\|ABx\|_{\gamma }=\|AB\|_{\alpha ,\gamma }.} Suppose‖⋅‖α,α{\displaystyle \|\cdot \|_{\alpha ,\alpha }}is an operator norm on the space of square matricesKn×n{\displaystyle K^{n\times n}}induced by vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}. Then, the operator norm is a sub-multiplicative matrix norm:‖AB‖α,α≤‖A‖α,α‖B‖α,α.{\displaystyle \|AB\|_{\alpha ,\alpha }\leq \|A\|_{\alpha ,\alpha }\|B\|_{\alpha ,\alpha }.} Moreover, any such norm satisfies the inequality‖Ar‖1/r≥ρ(A)(1){\displaystyle \|A^{r}\|^{1/r}\geq \rho (A)\qquad (1)}for all positive integersr, whereρ(A)is thespectral radiusofA. ForsymmetricorhermitianA, we have equality in (1) for the 2-norm, since in this case the 2-normisprecisely the spectral radius ofA. For an arbitrary matrix, we may not have equality for any norm; a counterexample would beA=[0100],{\displaystyle A={\begin{bmatrix}0&1\\0&0\end{bmatrix}},}which has vanishing spectral radius.
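Both facts (sub-multiplicativity of induced norms and the spectral-radius bound ρ(A) ≤ ‖A^r‖^(1/r)) are easy to check numerically, along with the nilpotent counterexample; the matrices below are arbitrary random examples:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
rho = max(abs(np.linalg.eigvals(A)))          # spectral radius of A

for p in (1, 2, np.inf, 'fro'):
    # Sub-multiplicativity: ||AB|| <= ||A|| ||B||.
    assert (np.linalg.norm(A @ B, p)
            <= np.linalg.norm(A, p) * np.linalg.norm(B, p) + 1e-12)
    # Spectral radius bound: rho(A) <= ||A^r||^(1/r) for every r >= 1.
    for r in (1, 2, 5):
        assert rho <= np.linalg.norm(np.linalg.matrix_power(A, r), p) ** (1.0 / r) + 1e-12

# Nilpotent counterexample: spectral radius 0, yet every norm is positive.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert max(abs(np.linalg.eigvals(N))) < 1e-12
assert np.isclose(np.linalg.norm(N, 2), 1.0)
```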
In any case, for any matrix norm, we have thespectral radius formula:limr→∞‖Ar‖1/r=ρ(A).{\displaystyle \lim _{r\to \infty }\|A^{r}\|^{1/r}=\rho (A).} If the vector norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}are given in terms ofenergy normsbased onsymmetricpositive definitematricesP{\displaystyle P}andQ{\displaystyle Q}respectively, the resulting operator norm is given as‖A‖P,Q=sup{‖Ax‖Q:‖x‖P≤1}.{\displaystyle \|A\|_{P,Q}=\sup\{\|Ax\|_{Q}:\|x\|_{P}\leq 1\}.} Using the symmetricmatrix square rootsofP{\displaystyle P}andQ{\displaystyle Q}respectively, the operator norm can be expressed as the spectral norm of a modified matrix: ‖A‖P,Q=‖Q1/2AP−1/2‖2.{\displaystyle \|A\|_{P,Q}=\|Q^{1/2}AP^{-1/2}\|_{2}.} A matrix norm‖⋅‖{\displaystyle \|\cdot \|}onKm×n{\displaystyle K^{m\times n}}is calledconsistentwith a vector norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}onKn{\displaystyle K^{n}}and a vector norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}onKm{\displaystyle K^{m}}, if:‖Ax‖β≤‖A‖‖x‖α{\displaystyle \left\|Ax\right\|_{\beta }\leq \left\|A\right\|\left\|x\right\|_{\alpha }}for allA∈Km×n{\displaystyle A\in K^{m\times n}}and allx∈Kn{\displaystyle x\in K^{n}}. In the special case ofm=nandα=β{\displaystyle \alpha =\beta },‖⋅‖{\displaystyle \|\cdot \|}is also calledcompatiblewith‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}. All induced norms are consistent by definition. Also, any sub-multiplicative matrix norm onKn×n{\displaystyle K^{n\times n}}induces a compatible vector norm onKn{\displaystyle K^{n}}by defining‖v‖:=‖(v,v,…,v)‖{\displaystyle \left\|v\right\|:=\left\|\left(v,v,\dots ,v\right)\right\|}. These norms treat anm×n{\displaystyle m\times n}matrix as a vector of sizem⋅n{\displaystyle m\cdot n}, and use one of the familiar vector norms. For example, using thep-norm for vectors,p≥ 1, we get:‖A‖p,p=(∑i=1m∑j=1n|aij|p)1/p.{\displaystyle \|A\|_{p,p}=\left(\sum _{i=1}^{m}\sum _{j=1}^{n}|a_{ij}|^{p}\right)^{1/p}.} This is a different norm from the inducedp-norm (see above) and the Schattenp-norm (see below), but the notation is the same.
The special casep= 2 is the Frobenius norm, andp= ∞ yields the maximum norm. Let(a1,…,an){\displaystyle (a_{1},\ldots ,a_{n})}be the columns of matrixA{\displaystyle A}, each a vector of dimensionm. From the original definition, the matrixA{\displaystyle A}presentsndata points in anm-dimensional space. TheL2,1{\displaystyle L_{2,1}}norm[6]is the sum of the Euclidean norms of the columns of the matrix:‖A‖2,1=∑j=1n‖aj‖2=∑j=1n(∑i=1m|aij|2)1/2.{\displaystyle \|A\|_{2,1}=\sum _{j=1}^{n}\left\|a_{j}\right\|_{2}=\sum _{j=1}^{n}\left(\sum _{i=1}^{m}|a_{ij}|^{2}\right)^{1/2}.} TheL2,1{\displaystyle L_{2,1}}norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used inrobust data analysisandsparse coding. Forp,q≥ 1, theL2,1{\displaystyle L_{2,1}}norm can be generalized to theLp,q{\displaystyle L_{p,q}}norm as follows:‖A‖p,q=(∑j=1n(∑i=1m|aij|p)q/p)1/q.{\displaystyle \|A\|_{p,q}=\left(\sum _{j=1}^{n}\left(\sum _{i=1}^{m}|a_{ij}|^{p}\right)^{q/p}\right)^{1/q}.} Whenp=q= 2for theLp,q{\displaystyle L_{p,q}}norm, it is called theFrobenius normor theHilbert–Schmidt norm, though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional)Hilbert space. This norm can be defined in various ways:‖A‖F=∑i=1m∑j=1n|aij|2=trace⁡(A∗A)=∑i=1min{m,n}σi2(A),{\displaystyle \|A\|_{\text{F}}={\sqrt {\sum _{i=1}^{m}\sum _{j=1}^{n}|a_{ij}|^{2}}}={\sqrt {\operatorname {trace} \left(A^{*}A\right)}}={\sqrt {\sum _{i=1}^{\min\{m,n\}}\sigma _{i}^{2}(A)}},} where thetraceis the sum of diagonal entries, andσi(A){\displaystyle \sigma _{i}(A)}are thesingular valuesofA{\displaystyle A}. The second equality is proven by explicit computation oftrace(A∗A){\displaystyle \mathrm {trace} (A^{*}A)}. The third equality is proven bysingular value decompositionofA{\displaystyle A}, and the fact that the trace is invariant under circular shifts. The Frobenius norm is an extension of the Euclidean norm toKm×n{\displaystyle K^{m\times n}}and comes from theFrobenius inner producton the space of all matrices. The Frobenius norm is sub-multiplicative and is very useful fornumerical linear algebra. The sub-multiplicativity of Frobenius norm can be proved using theCauchy–Schwarz inequality. In fact, it is more than sub-multiplicative, as‖AB‖F≤‖A‖op‖B‖F{\displaystyle \|AB\|_{F}\leq \|A\|_{op}\|B\|_{F}}where the operator norm‖⋅‖op≤‖⋅‖F{\displaystyle \|\cdot \|_{op}\leq \|\cdot \|_{F}}.
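The equivalent definitions of the Frobenius norm (entrywise, trace, and singular values), the L_{2,1} column-sum-of-norms, and the mixed bound ‖AB‖_F ≤ ‖A‖_op ‖B‖_F can all be confirmed directly; the matrices are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# Frobenius norm: the three definitions agree.
fro_entry = np.sqrt(np.sum(np.abs(A) ** 2))
fro_trace = np.sqrt(np.trace(A.conj().T @ A).real)
fro_sv = np.sqrt(np.sum(np.linalg.svd(A, compute_uv=False) ** 2))
assert np.allclose([fro_entry, fro_trace], fro_sv)
assert np.isclose(fro_entry, np.linalg.norm(A, 'fro'))

# L_{2,1} norm: sum of the Euclidean norms of the columns.
l21 = np.sum(np.linalg.norm(A, axis=0))
assert np.isclose(l21, np.sum(np.sqrt(np.sum(np.abs(A) ** 2, axis=0))))

# Mixed sub-multiplicativity: ||AB||_F <= ||A||_2 ||B||_F.
B = rng.standard_normal((4, 2))
assert (np.linalg.norm(A @ B, 'fro')
        <= np.linalg.norm(A, 2) * np.linalg.norm(B, 'fro') + 1e-12)
```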
Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant underrotations(andunitaryoperations in general). That is,‖A‖F=‖AU‖F=‖UA‖F{\displaystyle \|A\|_{\text{F}}=\|AU\|_{\text{F}}=\|UA\|_{\text{F}}}for any unitary matrixU{\displaystyle U}. This property follows from the cyclic nature of the trace (trace⁡(XYZ)=trace⁡(YZX)=trace⁡(ZXY){\displaystyle \operatorname {trace} (XYZ)=\operatorname {trace} (YZX)=\operatorname {trace} (ZXY)}):‖AU‖F2=trace⁡(U∗A∗AU)=trace⁡(A∗AUU∗)=trace⁡(A∗A)=‖A‖F2,{\displaystyle \|AU\|_{\text{F}}^{2}=\operatorname {trace} \left(U^{*}A^{*}AU\right)=\operatorname {trace} \left(A^{*}AUU^{*}\right)=\operatorname {trace} \left(A^{*}A\right)=\|A\|_{\text{F}}^{2},}and analogously:‖UA‖F2=trace⁡(A∗U∗UA)=trace⁡(A∗A)=‖A‖F2,{\displaystyle \|UA\|_{\text{F}}^{2}=\operatorname {trace} \left(A^{*}U^{*}UA\right)=\operatorname {trace} \left(A^{*}A\right)=\|A\|_{\text{F}}^{2},}where we have used the unitary nature ofU{\displaystyle U}(that is,U∗U=UU∗=I{\displaystyle U^{*}U=UU^{*}=\mathbf {I} }). It also satisfies‖A∗A‖F=‖AA∗‖F≤‖A‖F2{\displaystyle \|A^{*}A\|_{\text{F}}=\|AA^{*}\|_{\text{F}}\leq \|A\|_{\text{F}}^{2}}and‖A+B‖F2=‖A‖F2+‖B‖F2+2Re⁡⟨A,B⟩F,{\displaystyle \|A+B\|_{\text{F}}^{2}=\|A\|_{\text{F}}^{2}+\|B\|_{\text{F}}^{2}+2\operatorname {Re} \langle A,B\rangle _{\text{F}},}where⟨A,B⟩F{\displaystyle \langle A,B\rangle _{\text{F}}}is theFrobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices). Themax normis the elementwise norm in the limit asp=qgoes to infinity:‖A‖max=maxi,j|aij|.{\displaystyle \|A\|_{\max }=\max _{i,j}|a_{ij}|.} This norm is notsub-multiplicative; but modifying the right-hand side tomnmaxi,j|aij|{\displaystyle {\sqrt {mn}}\max _{i,j}\vert a_{ij}\vert }makes it so. Note that in some literature (such asCommunication complexity), an alternative definition of max-norm, also called theγ2{\displaystyle \gamma _{2}}-norm, refers to the factorization norm. The Schattenp-norms arise when applying thep-norm to the vector ofsingular valuesof a matrix.[2]If the singular values of them×n{\displaystyle m\times n}matrixA{\displaystyle A}are denoted byσi, then the Schattenp-norm is defined by‖A‖p=(∑i=1min{m,n}σip(A))1/p.{\displaystyle \|A\|_{p}=\left(\sum _{i=1}^{\min\{m,n\}}\sigma _{i}^{p}(A)\right)^{1/p}.} These norms again share the notation with the induced and entry-wisep-norms, but they are different. All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that‖A‖=‖UAV‖{\displaystyle \|A\|=\|UAV\|}for all matricesA{\displaystyle A}and allunitary matricesU{\displaystyle U}andV{\displaystyle V}. The most familiar cases arep= 1, 2, ∞. The casep= 2 yields the Frobenius norm, introduced before. The casep= ∞ yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above).
Finally,p= 1 yields thenuclear norm(also known as thetrace norm, or theKy Fann-norm[7]), defined as:‖A‖∗=trace⁡(A∗A)=∑i=1min{m,n}σi(A),{\displaystyle \|A\|_{*}=\operatorname {trace} \left({\sqrt {A^{*}A}}\right)=\sum _{i=1}^{\min\{m,n\}}\sigma _{i}(A),} whereA∗A{\displaystyle {\sqrt {A^{*}A}}}denotes a positive semidefinite matrixB{\displaystyle B}such thatBB=A∗A{\displaystyle BB=A^{*}A}. More precisely, sinceA∗A{\displaystyle A^{*}A}is apositive semidefinite matrix, itssquare rootis well defined. The nuclear norm‖A‖∗{\displaystyle \|A\|_{*}}is aconvex envelopeof the rank functionrank(A){\displaystyle {\text{rank}}(A)}, so it is often used inmathematical optimizationto search for low-rank matrices. Combiningvon Neumann's trace inequalitywithHölder's inequalityfor Euclidean space yields a version ofHölder's inequalityfor Schatten norms for1/p+1/q=1{\displaystyle 1/p+1/q=1}:|trace⁡(A∗B)|≤‖A‖p‖B‖q.{\displaystyle \left|\operatorname {trace} \left(A^{*}B\right)\right|\leq \|A\|_{p}\,\|B\|_{q}.} In particular, this implies the Schatten norm inequality‖A‖F2≤‖A‖p‖A‖q.{\displaystyle \|A\|_{\text{F}}^{2}\leq \|A\|_{p}\,\|A\|_{q}.} A matrix norm‖⋅‖{\displaystyle \|\cdot \|}is calledmonotoneif it is monotonic with respect to theLoewner order. Thus, a matrix norm is increasing if0⪯A⪯B⟹‖A‖≤‖B‖.{\displaystyle 0\preceq A\preceq B\implies \|A\|\leq \|B\|.} The Frobenius norm and spectral norm are examples of monotone norms.[8] Another source of inspiration for matrix norms arises from considering a matrix as theadjacency matrixof aweighted,directed graph.[9]The so-called "cut norm" measures how close the associated graph is to beingbipartite:‖A‖◻=maxS⊆[n],T⊆[m]|∑s∈S,t∈TAt,s|{\displaystyle \|A\|_{\Box }=\max _{S\subseteq [n],T\subseteq [m]}{\left|\sum _{s\in S,t\in T}{A_{t,s}}\right|}}whereA∈Km×n.[9][10][11]Equivalent definitions (up to a constant factor) impose the conditions2|S| >n& 2|T| >m;S=T; orS∩T= ∅.[10] The cut-norm is equivalent to the induced operator norm‖·‖∞→1, which is itself equivalent to another norm, called theGrothendiecknorm.[11] To define the Grothendieck norm, first note that a linear operatorK1→K1is just a scalar, and thus extends to a linear operator on anyKk→Kk. Moreover, given any choice of basis forKnandKm, any linear operatorKn→Kmextends to a linear operator(Kk)n→ (Kk)m, by letting each matrix element act on elements ofKkvia scalar multiplication.
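Returning to the Schatten norms: a small numerical check confirms that p = 1, 2, ∞ recover the nuclear, Frobenius, and spectral norms, and that all of them are unitarily invariant (the matrix is an arbitrary example and `schatten` is a helper defined here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
sv = np.linalg.svd(A, compute_uv=False)

def schatten(M, p):
    """Schatten p-norm: the vector p-norm of the singular values."""
    return np.linalg.norm(np.linalg.svd(M, compute_uv=False), p)

assert np.isclose(schatten(A, 1), sv.sum())                    # nuclear norm
assert np.isclose(schatten(A, 2), np.linalg.norm(A, 'fro'))    # Frobenius
assert np.isclose(schatten(A, np.inf), np.linalg.norm(A, 2))   # spectral

# Unitary invariance: ||U A V|| = ||A|| for orthogonal U, V.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
for p in (1, 2, np.inf):
    assert np.isclose(schatten(U @ A @ V, p), schatten(A, p))

# Nuclear norm via the matrix square root: trace(sqrt(A^T A)).
w, Q = np.linalg.eigh(A.T @ A)
sqrt_AtA = (Q * np.sqrt(np.clip(w, 0, None))) @ Q.T
assert np.isclose(np.trace(sqrt_AtA), sv.sum())
```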
The Grothendieck norm is the norm of that extended operator; in symbols:[11]‖A‖G,k=supeachuj,vℓ∈Kk;‖uj‖=‖vℓ‖=1∑j∈[n],ℓ∈[m](uj⋅vℓ)Aℓ,j{\displaystyle \|A\|_{G,k}=\sup _{{\text{each }}u_{j},v_{\ell }\in K^{k};\|u_{j}\|=\|v_{\ell }\|=1}{\sum _{j\in [n],\ell \in [m]}{(u_{j}\cdot v_{\ell })A_{\ell ,j}}}} The Grothendieck norm depends on choice of basis (usually taken to be thestandard basis) andk. For any two matrix norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}, we have that:r‖A‖α≤‖A‖β≤s‖A‖α{\displaystyle r\,\|A\|_{\alpha }\leq \|A\|_{\beta }\leq s\,\|A\|_{\alpha }}for some positive numbersrands, for all matricesA∈Km×n{\displaystyle A\in K^{m\times n}}. In other words, all norms onKm×n{\displaystyle K^{m\times n}}areequivalent; they induce the sametopologyonKm×n{\displaystyle K^{m\times n}}. This is true because the vector spaceKm×n{\displaystyle K^{m\times n}}has the finitedimensionm×n{\displaystyle m\times n}. Moreover, for every matrix norm‖⋅‖{\displaystyle \|\cdot \|}onRn×n{\displaystyle \mathbb {R} ^{n\times n}}there exists a unique positive real numberk{\displaystyle k}such thatℓ‖⋅‖{\displaystyle \ell \|\cdot \|}is a sub-multiplicative matrix norm for everyℓ≥k.{\displaystyle \ell \geq k.} A sub-multiplicative matrix norm‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}is said to beminimalif there exists no other sub-multiplicative matrix norm‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}satisfying‖⋅‖β<‖⋅‖α{\displaystyle \|\cdot \|_{\beta }<\|\cdot \|_{\alpha }}. Let‖A‖p{\displaystyle \|A\|_{p}}once again refer to the norm induced by the vectorp-norm (as above in the Induced norm section). For matrixA∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}}ofrankr{\displaystyle r}, the following inequalities hold:[12][13]
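Typical members of this family of inequalities, for instance ‖A‖₂ ≤ ‖A‖_F ≤ √r ‖A‖₂ for a rank-r matrix, can be spot-checked numerically; the constants used below are the standard ones for an m×n matrix and are stated here as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
r = np.linalg.matrix_rank(A)

n1 = np.linalg.norm(A, 1)         # max absolute column sum
n2 = np.linalg.norm(A, 2)         # spectral norm
ninf = np.linalg.norm(A, np.inf)  # max absolute row sum
nfro = np.linalg.norm(A, 'fro')

eps = 1e-12
# Spectral vs Frobenius: ||A||_2 <= ||A||_F <= sqrt(rank) ||A||_2.
assert n2 <= nfro + eps and nfro <= np.sqrt(r) * n2 + eps
# Spectral vs max row sum: ||A||_inf / sqrt(n) <= ||A||_2 <= sqrt(m) ||A||_inf.
assert ninf / np.sqrt(n) <= n2 + eps and n2 <= np.sqrt(m) * ninf + eps
# Spectral vs max column sum: ||A||_1 / sqrt(m) <= ||A||_2 <= sqrt(n) ||A||_1.
assert n1 / np.sqrt(m) <= n2 + eps and n2 <= np.sqrt(n) * n1 + eps
```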
https://en.wikipedia.org/wiki/Matrix_norm
Inmathematics, in the field offunctional analysis, aMinkowski functional(afterHermann Minkowski) orgauge functionis a function that recovers a notion of distance on a linear space. IfK{\textstyle K}is a subset of arealorcomplexvector spaceX,{\textstyle X,}then theMinkowski functionalorgaugeofK{\textstyle K}is defined to be thefunctionpK:X→[0,∞],{\textstyle p_{K}:X\to [0,\infty ],}valued in theextended real numbers, defined bypK(x):=inf{r∈R:r>0andx∈rK}for everyx∈X,{\displaystyle p_{K}(x):=\inf\{r\in \mathbb {R} :r>0{\text{ and }}x\in rK\}\quad {\text{ for every }}x\in X,}where theinfimumof the empty set is defined to bepositive infinity∞{\textstyle \,\infty \,}(which isnota real number so thatpK(x){\textstyle p_{K}(x)}would thennotbe real-valued). The setK{\textstyle K}is often assumed/picked to have properties, such as being an absorbingdiskinX{\textstyle X}, that guarantee thatpK{\textstyle p_{K}}will be a real-valuedseminormonX.{\textstyle X.}In fact, every seminormp{\textstyle p}onX{\textstyle X}is equal to the Minkowski functional (that is,p=pK{\textstyle p=p_{K}}) of any subsetK{\textstyle K}ofX{\textstyle X}satisfying {x∈X:p(x)<1}⊆K⊆{x∈X:p(x)≤1}{\displaystyle \{x\in X:p(x)<1\}\subseteq K\subseteq \{x\in X:p(x)\leq 1\}} (where all three of these sets are necessarily absorbing inX{\textstyle X}and the first and last are also disks). Thus every seminorm (which is afunctiondefined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is asetwith certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks are a major reason why Minkowski functionals are studied and used in functional analysis.
In particular, through these relationships, Minkowski functionals allow one to "translate" certain geometric properties of a subset of X into certain algebraic properties of a function on X. The Minkowski functional is always non-negative (meaning p_K ≥ 0). This property of being non-negative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, p_K might not be real-valued, since for any given x ∈ X, the value p_K(x) is a real number if and only if {r > 0 : x ∈ rK} is not empty. Consequently, K is usually assumed to have properties (such as being absorbing in X, for instance) that will guarantee that p_K is real-valued.

Let K be a subset of a real or complex vector space X. Define the gauge of K, or the Minkowski functional associated with or induced by K, as the function p_K : X → [0, ∞], valued in the extended real numbers, defined by

p_K(x) := inf{ r > 0 : x ∈ rK }

(recall that the infimum of the empty set is ∞, that is, inf ∅ = ∞). Here, {r > 0 : x ∈ rK} is shorthand for {r ∈ ℝ : r > 0 and x ∈ rK}. For any x ∈ X, p_K(x) ≠ ∞ if and only if {r > 0 : x ∈ rK} is not empty. The arithmetic operations on ℝ can be extended to operate on ±∞, where r/(±∞) := 0 for all non-zero real r. The products 0 · ∞ and 0 · (−∞) remain undefined.
In the field of convex analysis, the map p_K taking on the value ∞ is not necessarily an issue. However, in functional analysis p_K is almost always real-valued (that is, it never takes on the value ∞), which happens if and only if the set {r > 0 : x ∈ rK} is non-empty for every x ∈ X. In order for p_K to be real-valued, it suffices for the origin of X to belong to the algebraic interior, or core, of K in X.[1] If K is absorbing in X (recall that this implies 0 ∈ K), then the origin belongs to the algebraic interior of K in X and thus p_K is real-valued. Characterizations of when p_K is real-valued are given below.

Consider a normed vector space (X, ‖·‖) and let U := {x ∈ X : ‖x‖ ≤ 1} be the unit ball in X. Then for every x ∈ X, ‖x‖ = p_U(x). Thus the Minkowski functional p_U is just the norm on X.

Let X be a vector space without topology with underlying scalar field 𝕂. Let f : X → 𝕂 be any linear functional on X (not necessarily continuous). Fix a > 0, let K be the set K := {x ∈ X : |f(x)| ≤ a}, and let p_K be the Minkowski functional of K. Then

p_K(x) = (1/a)|f(x)|  for all x ∈ X.

The function p_K is nonnegative, subadditive, and absolutely homogeneous; therefore, p_K is a seminorm on X, with an induced topology. This is characteristic of Minkowski functionals defined via "nice" sets.
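The unit-ball example can be checked numerically: approximating p_K(x) = inf{r > 0 : x ∈ rK} by bisection along the ray through x recovers the norm when K is the unit ball. A minimal sketch assuming NumPy; `gauge` and `ball` are illustrative names, not library functions:

```python
import numpy as np

def gauge(x, in_K, hi=1e6, tol=1e-9):
    """Approximate p_K(x) = inf{r > 0 : x in r*K} by bisection, assuming
    membership along the ray r -> (x/r in K) is monotone (true for a
    convex K containing the origin)."""
    if not in_K(x / hi):
        return float('inf')   # x is not absorbed within the search range
    lo, r = 0.0, hi
    while r - lo > tol:
        mid = (lo + r) / 2
        if in_K(x / mid):
            r = mid           # x/mid in K, so p_K(x) <= mid
        else:
            lo = mid          # x/mid outside K, so p_K(x) > mid
    return r

# K = closed Euclidean unit ball; its Minkowski functional is the norm itself.
ball = lambda v: np.linalg.norm(v) <= 1.0
x = np.array([3.0, 4.0])
assert abs(gauge(x, ball) - np.linalg.norm(x)) < 1e-6   # p_U(x) = ||x||
```

The same routine applied to other convex sets containing the origin (a box, a simplex, an ellipsoid) produces the corresponding gauges, which need not be norms.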
There is a one-to-one correspondence between seminorms and the Minkowski functionals given by such sets. What is meant precisely by "nice" is discussed in the section below. Notice that, in contrast to the stronger requirement for a norm, p_K(x) = 0 need not imply x = 0. In the above example, one can take a nonzero x from the kernel of f. Consequently, the resulting topology need not be Hausdorff. To guarantee that p_K(0) = 0, it will henceforth be assumed that 0 ∈ K. In order for p_K to be a seminorm, it suffices for K to be a disk (that is, convex and balanced) and absorbing in X; these are the most common assumptions placed on K.

Theorem[2] — If K is an absorbing disk in a vector space X, then the Minkowski functional of K, which is the map p_K : X → [0, ∞) defined by p_K(x) := inf{ r > 0 : x ∈ rK }, is a seminorm on X. Moreover,

p_K(x) = 1 / sup{ r > 0 : rx ∈ K }.

More generally, if K is convex and the origin belongs to the algebraic interior of K, then p_K is a nonnegative sublinear functional on X, which implies in particular that it is subadditive and positive homogeneous.
If K is absorbing in X, then p_{[0,1]K} is positive homogeneous, meaning that p_{[0,1]K}(sx) = s p_{[0,1]K}(x) for all real s ≥ 0, where [0,1]K = {tk : t ∈ [0,1], k ∈ K}.[3] If q is a nonnegative real-valued function on X that is positive homogeneous, then the sets U := {x ∈ X : q(x) < 1} and D := {x ∈ X : q(x) ≤ 1} satisfy [0,1]U = U and [0,1]D = D; if in addition q is absolutely homogeneous, then both U and D are balanced.[3]

Arguably the most common requirements placed on a set K to guarantee that p_K is a seminorm are that K be an absorbing disk in X. Due to how common these assumptions are, the properties of a Minkowski functional p_K when K is an absorbing disk will now be investigated. Since all of the results mentioned above made few (if any) assumptions on K, they can be applied in this special case.

Theorem — Assume that K is an absorbing subset of X. It is shown that:

Convexity and subadditivity

A simple geometric argument that shows convexity of K implies subadditivity is as follows. Suppose for the moment that p_K(x) = p_K(y) = r. Then for all e > 0, x, y ∈ K_e := (r + e)K. Since K is convex and r + e ≠ 0, K_e is also convex.
Therefore,12x+12y∈Ke.{\textstyle {\frac {1}{2}}x+{\frac {1}{2}}y\in K_{e}.}By definition of the Minkowski functionalpK,{\textstyle p_{K},}pK(12x+12y)≤r+e=12pK(x)+12pK(y)+e.{\displaystyle p_{K}\left({\frac {1}{2}}x+{\frac {1}{2}}y\right)\leq r+e={\frac {1}{2}}p_{K}(x)+{\frac {1}{2}}p_{K}(y)+e.} But the left hand side is12pK(x+y),{\textstyle {\frac {1}{2}}p_{K}(x+y),}so thatpK(x+y)≤pK(x)+pK(y)+2e.{\displaystyle p_{K}(x+y)\leq p_{K}(x)+p_{K}(y)+2e.} Sincee>0{\textstyle e>0}was arbitrary, it follows thatpK(x+y)≤pK(x)+pK(y),{\textstyle p_{K}(x+y)\leq p_{K}(x)+p_{K}(y),}which is the desired inequality. The general casepK(x)>pK(y){\textstyle p_{K}(x)>p_{K}(y)}is obtained after the obvious modification. Convexity ofK,{\textstyle K,}together with the initial assumption that the set{r>0:x∈rK}{\textstyle \{r>0:x\in rK\}}is nonempty, implies thatK{\textstyle K}isabsorbing. Balancedness and absolute homogeneity Notice thatK{\textstyle K}being balanced implies thatλx∈rKif and only ifx∈r|λ|K.{\displaystyle \lambda x\in rK\quad {\mbox{if and only if}}\quad x\in {\frac {r}{|\lambda |}}K.} ThereforepK(λx)=inf{r>0:λx∈rK}=inf{r>0:x∈r|λ|K}=inf{|λ|r|λ|>0:x∈r|λ|K}=|λ|pK(x).{\displaystyle p_{K}(\lambda x)=\inf \left\{r>0:\lambda x\in rK\right\}=\inf \left\{r>0:x\in {\frac {r}{|\lambda |}}K\right\}=\inf \left\{|\lambda |{\frac {r}{|\lambda |}}>0:x\in {\frac {r}{|\lambda |}}K\right\}=|\lambda |p_{K}(x).} LetX{\textstyle X}be a real or complex vector space and letK{\textstyle K}be an absorbing disk inX.{\textstyle X.} Assume thatX{\textstyle X}is a (real or complex)topological vector space(TVS) (not necessarilyHausdorfforlocally convex) and letK{\textstyle K}be an absorbing disk inX.{\textstyle X.}Then IntX⁡K⊆{x∈X:pK(x)<1}⊆K⊆{x∈X:pK(x)≤1}⊆ClX⁡K,{\displaystyle \operatorname {Int} _{X}K\;\subseteq \;\{x\in X:p_{K}(x)<1\}\;\subseteq \;K\;\subseteq \;\{x\in X:p_{K}(x)\leq 1\}\;\subseteq \;\operatorname {Cl} _{X}K,} whereIntX⁡K{\textstyle \operatorname {Int} _{X}K}is thetopological 
interiorandClX⁡K{\textstyle \operatorname {Cl} _{X}K}is thetopological closureofK{\textstyle K}inX.{\textstyle X.}[6]Importantly, it wasnotassumed thatpK{\textstyle p_{K}}was continuous nor was it assumed thatK{\textstyle K}had any topological properties. Moreover, the Minkowski functionalpK{\textstyle p_{K}}is continuous if and only ifK{\textstyle K}is a neighborhood of the origin inX.{\textstyle X.}[6]IfpK{\textstyle p_{K}}is continuous then[6]IntX⁡K={x∈X:pK(x)<1}andClX⁡K={x∈X:pK(x)≤1}.{\displaystyle \operatorname {Int} _{X}K=\{x\in X:p_{K}(x)<1\}\quad {\text{ and }}\quad \operatorname {Cl} _{X}K=\{x\in X:p_{K}(x)\leq 1\}.} This section will investigate the most general case of the gauge ofanysubsetK{\textstyle K}ofX.{\textstyle X.}The more common special case whereK{\textstyle K}is assumed to be anabsorbingdiskinX{\textstyle X}was discussed above. All results in this section may be applied to the case whereK{\textstyle K}is an absorbing disk. Throughout,K{\textstyle K}is any subset ofX.{\textstyle X.} Summary—Suppose thatK{\textstyle K}is a subset of a real or complex vector spaceX.{\textstyle X.} The proofs of these basic properties are straightforward exercises so only the proofs of the most important statements are given. The proof that a convex subsetA⊆X{\textstyle A\subseteq X}that satisfies(0,∞)A=X{\textstyle (0,\infty )A=X}is necessarilyabsorbinginX{\textstyle X}is straightforward and can be found in the article onabsorbing sets. For any realt>0,{\textstyle t>0,} {r>0:tx∈rK}={t(r/t):x∈(r/t)K}=t{s>0:x∈sK}{\displaystyle \{r>0:tx\in rK\}=\{t(r/t):x\in (r/t)K\}=t\{s>0:x\in sK\}} so that taking the infimum of both sides shows that pK(tx)=inf{r>0:tx∈rK}=tinf{s>0:x∈sK}=tpK(x).{\displaystyle p_{K}(tx)=\inf\{r>0:tx\in rK\}=t\inf\{s>0:x\in sK\}=tp_{K}(x).} This proves that Minkowski functionals are strictly positive homogeneous. 
For0⋅pK(x){\textstyle 0\cdot p_{K}(x)}to be well-defined, it is necessary and sufficient thatpK(x)≠∞;{\textstyle p_{K}(x)\neq \infty ;}thuspK(tx)=tpK(x){\textstyle p_{K}(tx)=tp_{K}(x)}for allx∈X{\textstyle x\in X}and allnon-negativerealt≥0{\textstyle t\geq 0}if and only ifpK{\textstyle p_{K}}is real-valued. The hypothesis of statement (7) allows us to conclude thatpK(sx)=pK(x){\textstyle p_{K}(sx)=p_{K}(x)}for allx∈X{\textstyle x\in X}and all scalarss{\textstyle s}satisfying|s|=1.{\textstyle |s|=1.}Every scalars{\textstyle s}is of the formreit{\textstyle re^{it}}for some realt{\textstyle t}wherer:=|s|≥0{\textstyle r:=|s|\geq 0}andeit{\textstyle e^{it}}is real if and only ifs{\textstyle s}is real. The results in the statement about absolute homogeneity follow immediately from the aforementioned conclusion, from the strict positive homogeneity ofpK,{\textstyle p_{K},}and from the positive homogeneity ofpK{\textstyle p_{K}}whenpK{\textstyle p_{K}}is real-valued.◼{\textstyle \blacksquare } {x∈X:pL(x)<1for allL∈L}⊆I⊆{x∈X:pL(x)≤1for allL∈L}{\displaystyle \left\{x\in X:p_{L}(x)<1{\text{ for all }}L\in {\mathcal {L}}\right\}\quad \subseteq \quad I\quad \subseteq \quad \left\{x\in X:p_{L}(x)\leq 1{\text{ for all }}L\in {\mathcal {L}}\right\}}thenpI(x)=sup{pL(x):L∈L}{\textstyle p_{I}(x)=\sup \left\{p_{L}(x):L\in {\mathcal {L}}\right\}}for allx∈X.{\textstyle x\in X.} The following examples show that the containment(0,R]K⊆⋂e>0(0,R+e)K{\textstyle (0,R]K\;\subseteq \;{\textstyle \bigcap \limits _{e>0}}(0,R+e)K}could be proper. 
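The strict positive homogeneity just proved, together with the subadditivity that convexity of K guarantees, can be verified numerically for a concrete absorbing disk. A small NumPy sketch using K = [-1, 1]² in ℝ², whose Minkowski functional is the max-norm (an illustrative choice):

```python
import numpy as np

def p_K(x):
    # For K = [-1, 1]^2, x is in r*K iff max|x_i| <= r, so the Minkowski
    # functional is the max-norm p_K(x) = max(|x_1|, |x_2|).
    return np.max(np.abs(x))

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    t = rng.uniform(0.1, 10.0)
    # Strict positive homogeneity: p_K(t x) = t p_K(x) for t > 0.
    assert np.isclose(p_K(t * x), t * p_K(x))
    # Subadditivity (from convexity of K): p_K(x + y) <= p_K(x) + p_K(y).
    assert p_K(x + y) <= p_K(x) + p_K(y) + 1e-12
```

Since K here is also balanced, p_K is in fact absolutely homogeneous and hence a seminorm (indeed a norm, as p_K vanishes only at the origin).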
Example: If R = 0 and K = X, then (0, R]K = (0, 0]X = ∅X = ∅, but ⋂_{e>0}(0, e)K = ⋂_{e>0} X = X, which shows that it is possible for (0, R]K to be a proper subset of ⋂_{e>0}(0, R+e)K when R = 0. ◼

The next example shows that the containment can be proper when R = 1; the example may be generalized to any real R > 0. Assuming that [0,1]K ⊆ K, the following example is representative of how it happens that x ∈ X satisfies p_K(x) = 1 but x ∉ (0,1]K.

Example: Let x ∈ X be non-zero and let K = [0,1)x, so that [0,1]K = K and x ∉ K. From x ∉ (0,1)K = K it follows that p_K(x) ≥ 1. That p_K(x) ≤ 1 follows from observing that for every e > 0, (0, 1+e)K = [0, 1+e)([0,1)x) = [0, 1+e)x, which contains x. Thus p_K(x) = 1 and x ∈ ⋂_{e>0}(0, 1+e)K. However, (0,1]K = (0,1]([0,1)x) = [0,1)x = K, so that x ∉ (0,1]K, as desired. ◼

The next theorem shows that Minkowski functionals are exactly those functions f : X → [0, ∞] that have a certain purely algebraic property that is commonly encountered.

Theorem — Let f : X → [0, ∞] be any function.
The following statements are equivalent:

Moreover, if f never takes on the value ∞ (so that the product 0 · f(x) is always well-defined), then this list may be extended to include:

If f(tx) ≤ t f(x) holds for all x ∈ X and real t > 0, then t f(x) = t f((1/t)(tx)) ≤ t · (1/t) f(tx) = f(tx) ≤ t f(x), so that t f(x) = f(tx).

Only (1) implies (3) will be proven, because afterwards the rest of the theorem follows immediately from the basic properties of Minkowski functionals described earlier, properties that will henceforth be used without comment. So assume that f : X → [0, ∞] is a function such that f(tx) = t f(x) for all x ∈ X and all real t > 0, and let K := {y ∈ X : f(y) ≤ 1}. For all real t > 0, f(0) = f(t0) = t f(0), so by taking t = 2 for instance, it follows that either f(0) = 0 or f(0) = ∞. Let x ∈ X. It remains to show that f(x) = p_K(x). It will now be shown that if f(x) = 0 or f(x) = ∞, then f(x) = p_K(x), so that in particular it will follow that f(0) = p_K(0). So suppose that f(x) = 0 or f(x) = ∞; in either case f(tx) = t f(x) = f(x) for all real t > 0. Now if f(x) = 0, then tx ∈ K for all real t > 0 (since f(tx) = 0 ≤ 1), which implies that p_K(x) = 0, as desired.
Similarly, if f(x) = ∞, then tx ∉ K for all real t > 0, which implies that p_K(x) = ∞, as desired. Thus, it will henceforth be assumed that R := f(x) is a positive real number and that x ≠ 0 (importantly, however, the possibility that p_K(x) is 0 or ∞ has not yet been ruled out). Recall that, just like f, the function p_K satisfies p_K(tx) = t p_K(x) for all real t > 0. Since 0 < 1/R < ∞, p_K(x) = R = f(x) if and only if p_K((1/R)x) = 1 = f((1/R)x), so assume without loss of generality that R = 1; it remains to show that p_K(x) = 1. Since f(x) = 1, x ∈ K ⊆ (0,1]K, which implies that p_K(x) ≤ 1 (so in particular, p_K(x) ≠ ∞ is guaranteed). It remains to show that p_K(x) ≥ 1, which recall happens if and only if x ∉ (0,1)K. So assume for the sake of contradiction that x ∈ (0,1)K, and let 0 < r < 1 and k ∈ K be such that x = rk, where note that k ∈ K implies that f(k) ≤ 1. Then 1 = f(x) = f(rk) = r f(k) ≤ r < 1, a contradiction. ◼

This theorem can be extended to characterize certain classes of [−∞, ∞]-valued maps (for example, real-valued sublinear functions) in terms of Minkowski functionals.
For instance, it can be used to describe how every real homogeneous functionf:X→R{\textstyle f:X\to \mathbb {R} }(such as linear functionals) can be written in terms of a unique Minkowski functional having a certain property. Proposition[10]—Letf:X→[0,∞]{\textstyle f:X\to [0,\infty ]}be any function andK⊆X{\textstyle K\subseteq X}be any subset. The following statements are equivalent: {x∈X:f(x)<1}⊆K⊆{x∈X:f(x)≤1}.{\displaystyle \{x\in X:f(x)<1\}\;\subseteq \;K\;\subseteq \;\{x\in X:f(x)\leq 1\}.} In this next theorem, which follows immediately from the statements above,K{\textstyle K}isnotassumed to be absorbing inX{\textstyle X}and instead, it is deduced that(0,1)K{\textstyle (0,1)K}is absorbing whenpK{\textstyle p_{K}}is a seminorm. It is also not assumed thatK{\textstyle K}isbalanced(which is a property thatK{\textstyle K}is often required to have); in its place is the weaker condition that(0,1)sK⊆(0,1)K{\textstyle (0,1)sK\subseteq (0,1)K}for all scalarss{\textstyle s}satisfying|s|=1.{\textstyle |s|=1.}The common requirement thatK{\textstyle K}be convex is also weakened to only requiring that(0,1)K{\textstyle (0,1)K}be convex. 
Theorem—LetK{\textstyle K}be a subset of a real or complex vector spaceX.{\textstyle X.}ThenpK{\textstyle p_{K}}is aseminormonX{\textstyle X}if and only if all of the following conditions hold: in which case0∈K{\textstyle 0\in K}and both(0,1)K={x∈X:p(x)<1}{\textstyle (0,1)K=\{x\in X:p(x)<1\}}and⋂e>0(0,1+e)K={x∈X:pK(x)≤1}{\textstyle \bigcap _{e>0}(0,1+e)K=\left\{x\in X:p_{K}(x)\leq 1\right\}}will be convex, balanced, andabsorbingsubsets ofX.{\textstyle X.} Conversely, iff{\textstyle f}is a seminorm onX{\textstyle X}then the setV:={x∈X:f(x)<1}{\textstyle V:=\{x\in X:f(x)<1\}}satisfies all three of the above conditions (and thus also the conclusions) and alsof=pV;{\textstyle f=p_{V};}moreover,V{\textstyle V}is necessarily convex, balanced, absorbing, and satisfies(0,1)V=V=[0,1]V.{\textstyle (0,1)V=V=[0,1]V.} Corollary—IfK{\textstyle K}is a convex, balanced, and absorbing subset of a real or complex vector spaceX,{\textstyle X,}thenpK{\textstyle p_{K}}is aseminormonX.{\textstyle X.} It may be shown that a real-valuedsubadditive functionf:X→R{\textstyle f:X\to \mathbb {R} }on an arbitrarytopological vector spaceX{\textstyle X}is continuous at the origin if and only if it is uniformly continuous, where if in additionf{\textstyle f}is nonnegative, thenf{\textstyle f}is continuous if and only ifV:={x∈X:f(x)<1}{\textstyle V:=\{x\in X:f(x)<1\}}is an open neighborhood inX.{\textstyle X.}[11]Iff:X→R{\textstyle f:X\to \mathbb {R} }is subadditive and satisfiesf(0)=0,{\textstyle f(0)=0,}thenf{\textstyle f}is continuous if and only if its absolute value|f|:X→[0,∞){\textstyle |f|:X\to [0,\infty )}is continuous. Anonnegativesublinear functionis anonnegative homogeneousfunctionf:X→[0,∞){\textstyle f:X\to [0,\infty )}that satisfies the triangle inequality. 
It follows immediately from the results below that for such a function f, if V := {x ∈ X : f(x) < 1}, then f = p_V. Given K ⊆ X, the Minkowski functional p_K is a sublinear function if and only if it is real-valued and subadditive, which happens if and only if (0, ∞)K = X and (0,1)K is convex.

Theorem[11] — Suppose that X is a topological vector space (not necessarily locally convex or Hausdorff) over the real or complex numbers. Then the non-empty open convex subsets of X are exactly those sets that are of the form z + {x ∈ X : p(x) < 1} = {x ∈ X : p(x − z) < 1} for some z ∈ X and some positive continuous sublinear function p on X.

Let V ≠ ∅ be an open convex subset of X. If 0 ∈ V, then let z := 0; otherwise, let z ∈ V be arbitrary. Let p = p_K : X → [0, ∞) be the Minkowski functional of K := V − z, where this convex open neighborhood of the origin satisfies (0,1)K = K. Then p is a continuous sublinear function on X, since V − z is convex, absorbing, and open (however, p is not necessarily a seminorm, since it is not necessarily absolutely homogeneous). From the properties of Minkowski functionals, we have p_K⁻¹([0,1)) = (0,1)K, from which it follows that V − z = {x ∈ X : p(x) < 1} and so V = z + {x ∈ X : p(x) < 1}. Since z + {x ∈ X : p(x) < 1} = {x ∈ X : p(x − z) < 1}, this completes the proof. ◼
https://en.wikipedia.org/wiki/Minkowski_functional
Inmathematics, theoperator normmeasures the "size" of certainlinear operatorsby assigning each areal numbercalled itsoperator norm. Formally, it is anormdefined on the space ofbounded linear operatorsbetween two givennormed vector spaces. Informally, the operator norm‖T‖{\displaystyle \|T\|}of a linear mapT:X→Y{\displaystyle T:X\to Y}is the maximum factor by which it "lengthens" vectors. Given two normed vector spacesV{\displaystyle V}andW{\displaystyle W}(over the same basefield, either thereal numbersR{\displaystyle \mathbb {R} }or thecomplex numbersC{\displaystyle \mathbb {C} }), alinear mapA:V→W{\displaystyle A:V\to W}is continuousif and only ifthere exists a real numberc{\displaystyle c}such that[1]‖Av‖≤c‖v‖for allv∈V.{\displaystyle \|Av\|\leq c\|v\|\quad {\text{ for all }}v\in V.} The norm on the left is the one inW{\displaystyle W}and the norm on the right is the one inV{\displaystyle V}. Intuitively, the continuous operatorA{\displaystyle A}never increases the length of any vector by more than a factor ofc.{\displaystyle c.}Thus theimageof a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known asbounded operators. In order to "measure the size" ofA,{\displaystyle A,}one can take theinfimumof the numbersc{\displaystyle c}such that the above inequality holds for allv∈V.{\displaystyle v\in V.}This number represents the maximum scalar factor by whichA{\displaystyle A}"lengthens" vectors. In other words, the "size" ofA{\displaystyle A}is measured by how much it "lengthens" vectors in the "biggest" case. 
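For matrices with Euclidean norms on domain and codomain, this "maximum stretch factor" can be computed and verified directly; the sketch below (NumPy, illustrative) uses the fact, stated later in this article, that the induced norm equals the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

op = np.linalg.norm(A, 2)   # induced (Euclidean) operator norm of A

# No vector is stretched by more than a factor of op ...
for _ in range(200):
    v = rng.standard_normal(5)
    assert np.linalg.norm(A @ v) <= op * np.linalg.norm(v) + 1e-12

# ... and the first right-singular vector attains the bound exactly.
u = np.linalg.svd(A)[2][0]   # unit vector maximizing ||A v||
assert np.isclose(np.linalg.norm(A @ u), op)
```

In finite dimensions the supremum over the unit ball is always attained, as above; the article notes below that this can fail for operators on infinite-dimensional spaces.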
So we define the operator norm ofA{\displaystyle A}as‖A‖op=inf{c≥0:‖Av‖≤c‖v‖for allv∈V}.{\displaystyle \|A\|_{\text{op}}=\inf\{c\geq 0:\|Av\|\leq c\|v\|{\text{ for all }}v\in V\}.} The infimum is attained as the set of all suchc{\displaystyle c}isclosed,nonempty, andboundedfrom below.[2] It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spacesV{\displaystyle V}andW{\displaystyle W}. Every realm{\displaystyle m}-by-n{\displaystyle n}matrixcorresponds to a linear map fromRn{\displaystyle \mathbb {R} ^{n}}toRm.{\displaystyle \mathbb {R} ^{m}.}Each pair of the plethora of (vector)normsapplicable to real vector spaces induces an operator norm for allm{\displaystyle m}-by-n{\displaystyle n}matrices of real numbers; these induced norms form a subset ofmatrix norms. If we specifically choose theEuclidean normon bothRn{\displaystyle \mathbb {R} ^{n}}andRm,{\displaystyle \mathbb {R} ^{m},}then the matrix norm given to a matrixA{\displaystyle A}is thesquare rootof the largesteigenvalueof the matrixA∗A{\displaystyle A^{*}A}(whereA∗{\displaystyle A^{*}}denotes theconjugate transposeofA{\displaystyle A}).[3]This is equivalent to assigning the largestsingular valueofA.{\displaystyle A.} Passing to a typical infinite-dimensional example, consider thesequence spaceℓ2,{\displaystyle \ell ^{2},}which is anLpspace, defined byℓ2={(an)n≥1:an∈C,∑n|an|2<∞}.{\displaystyle \ell ^{2}=\left\{(a_{n})_{n\geq 1}:\;a_{n}\in \mathbb {C} ,\;\sum _{n}|a_{n}|^{2}<\infty \right\}.} This can be viewed as an infinite-dimensional analogue of theEuclidean spaceCn.{\displaystyle \mathbb {C} ^{n}.}Now consider a bounded sequences∙=(sn)n=1∞.{\displaystyle s_{\bullet }=\left(s_{n}\right)_{n=1}^{\infty }.}The sequences∙{\displaystyle s_{\bullet }}is an element of the spaceℓ∞,{\displaystyle \ell ^{\infty },}with a norm given by‖s∙‖∞=supn|sn|.{\displaystyle \left\|s_{\bullet }\right\|_{\infty }=\sup _{n}\left|s_{n}\right|.} Define an 
operatorTs{\displaystyle T_{s}}by pointwise multiplication:(an)n=1∞↦Ts(sn⋅an)n=1∞.{\displaystyle \left(a_{n}\right)_{n=1}^{\infty }\;{\stackrel {T_{s}}{\mapsto }}\;\ \left(s_{n}\cdot a_{n}\right)_{n=1}^{\infty }.} The operatorTs{\displaystyle T_{s}}is bounded with operator norm‖Ts‖op=‖s∙‖∞.{\displaystyle \left\|T_{s}\right\|_{\text{op}}=\left\|s_{\bullet }\right\|_{\infty }.} This discussion extends directly to the case whereℓ2{\displaystyle \ell ^{2}}is replaced by a generalLp{\displaystyle L^{p}}space withp>1{\displaystyle p>1}andℓ∞{\displaystyle \ell ^{\infty }}replaced byL∞.{\displaystyle L^{\infty }.} LetA:V→W{\displaystyle A:V\to W}be a linear operator between normed spaces. The first four definitions are always equivalent, and if in additionV≠{0}{\displaystyle V\neq \{0\}}then they are all equivalent: IfV={0}{\displaystyle V=\{0\}}then the sets in the last two rows will be empty, and consequently theirsupremumsover the set[−∞,∞]{\displaystyle [-\infty ,\infty ]}will equal−∞{\displaystyle -\infty }instead of the correct value of0.{\displaystyle 0.}If the supremum is taken over the set[0,∞]{\displaystyle [0,\infty ]}instead, then the supremum of the empty set is0{\displaystyle 0}and the formulas hold for anyV.{\displaystyle V.} Importantly, a linear operatorA:V→W{\displaystyle A:V\to W}is not, in general, guaranteed to achieve its norm‖A‖op=sup{‖Av‖:‖v‖≤1,v∈V}{\displaystyle \|A\|_{\text{op}}=\sup\{\|Av\|:\|v\|\leq 1,v\in V\}}on the closed unit ball{v∈V:‖v‖≤1},{\displaystyle \{v\in V:\|v\|\leq 1\},}meaning that there might not exist any vectoru∈V{\displaystyle u\in V}of norm‖u‖≤1{\displaystyle \|u\|\leq 1}such that‖A‖op=‖Au‖{\displaystyle \|A\|_{\text{op}}=\|Au\|}(if such a vector does exist and ifA≠0,{\displaystyle A\neq 0,}thenu{\displaystyle u}would necessarily have unit norm‖u‖=1{\displaystyle \|u\|=1}). R.C. 
James provedJames's theoremin 1964, which states that aBanach spaceV{\displaystyle V}isreflexiveif and only if everybounded linear functionalf∈V∗{\displaystyle f\in V^{*}}achieves itsnormon the closed unit ball.[4]It follows, in particular, that every non-reflexive Banach space has some bounded linear functional (a type of bounded linear operator) that does not achieve its norm on the closed unit ball. IfA:V→W{\displaystyle A:V\to W}is bounded then[5]‖A‖op=sup{|w∗(Av)|:‖v‖≤1,‖w∗‖≤1wherev∈V,w∗∈W∗}{\displaystyle \|A\|_{\text{op}}=\sup \left\{\left|w^{*}(Av)\right|:\|v\|\leq 1,\left\|w^{*}\right\|\leq 1{\text{ where }}v\in V,w^{*}\in W^{*}\right\}}and[5]‖A‖op=‖tA‖op{\displaystyle \|A\|_{\text{op}}=\left\|{}^{t}A\right\|_{\text{op}}}wheretA:W∗→V∗{\displaystyle {}^{t}A:W^{*}\to V^{*}}is thetransposeofA:V→W,{\displaystyle A:V\to W,}which is the linear operator defined byw∗↦w∗∘A.{\displaystyle w^{*}\,\mapsto \,w^{*}\circ A.} The operator norm is indeed a norm on the space of allbounded operatorsbetweenV{\displaystyle V}andW{\displaystyle W}. 
This means‖A‖op≥0and‖A‖op=0if and only ifA=0,{\displaystyle \|A\|_{\text{op}}\geq 0{\mbox{ and }}\|A\|_{\text{op}}=0{\mbox{ if and only if }}A=0,}‖aA‖op=|a|‖A‖opfor every scalara,{\displaystyle \|aA\|_{\text{op}}=|a|\|A\|_{\text{op}}{\mbox{ for every scalar }}a,}‖A+B‖op≤‖A‖op+‖B‖op.{\displaystyle \|A+B\|_{\text{op}}\leq \|A\|_{\text{op}}+\|B\|_{\text{op}}.} The following inequality is an immediate consequence of the definition:‖Av‖≤‖A‖op‖v‖for everyv∈V.{\displaystyle \|Av\|\leq \|A\|_{\text{op}}\|v\|\ {\mbox{ for every }}\ v\in V.} The operator norm is also compatible with the composition, or multiplication, of operators: ifV{\displaystyle V},W{\displaystyle W}andX{\displaystyle X}are three normed spaces over the same base field, andA:V→W{\displaystyle A:V\to W}andB:W→X{\displaystyle B:W\to X}are two bounded operators, then it is asub-multiplicative norm, that is:‖BA‖op≤‖B‖op‖A‖op.{\displaystyle \|BA\|_{\text{op}}\leq \|B\|_{\text{op}}\|A\|_{\text{op}}.} For bounded operators onV{\displaystyle V}, this implies that operator multiplication is jointly continuous. It follows from the definition that if a sequence of operators converges in operator norm, itconverges uniformlyon bounded sets. By choosing different norms for the codomain, used in computing‖Av‖{\displaystyle \|Av\|}, and the domain, used in computing‖v‖{\displaystyle \|v\|}, we obtain different values for the operator norm. Some common operator norms are easy to calculate, and others areNP-hard. Except for the NP-hard norms, all these norms can be calculated inN2{\displaystyle N^{2}}operations (for anN×N{\displaystyle N\times N}matrix), with the exception of theℓ2−ℓ2{\displaystyle \ell _{2}-\ell _{2}}norm (which requiresN3{\displaystyle N^{3}}operations for the exact answer, or fewer if you approximate it with thepower methodorLanczos iterations). The norm of theadjointor transpose can be computed as follows. 
For any p, q, we have ‖A‖_{p→q} = ‖A*‖_{q′→p′}, where p′, q′ are Hölder conjugate to p, q, that is, 1/p + 1/p′ = 1 and 1/q + 1/q′ = 1.

Suppose H is a real or complex Hilbert space. If A : H → H is a bounded linear operator, then we have ‖A‖_op = ‖A*‖_op and ‖A*A‖_op = ‖A‖²_op, where A* denotes the adjoint operator of A (which in Euclidean spaces with the standard inner product corresponds to the conjugate transpose of the matrix A). In general, the spectral radius of A is bounded above by the operator norm of A:

ρ(A) ≤ ‖A‖_op.

To see why equality may not always hold, consider the Jordan canonical form of a matrix in the finite-dimensional case. Because there are non-zero entries on the superdiagonal, equality may be violated. Quasinilpotent operators form one class of such examples: a nonzero quasinilpotent operator A has spectrum {0}, so ρ(A) = 0 while ‖A‖_op > 0. However, when a matrix N is normal, its Jordan canonical form is diagonal (up to unitary equivalence); this is the spectral theorem.
In that case it is easy to see thatρ(N)=‖N‖op.{\displaystyle \rho (N)=\|N\|_{\text{op}}.} This formula can sometimes be used to compute the operator norm of a given bounded operatorA{\displaystyle A}: define theHermitian operatorB=A∗A,{\displaystyle B=A^{*}A,}determine its spectral radius, and take thesquare rootto obtain the operator norm ofA.{\displaystyle A.} The space of bounded operators onH,{\displaystyle H,}with thetopologyinduced by operator norm, is notseparable. For example, consider theLp spaceL2[0,1],{\displaystyle L^{2}[0,1],}which is a Hilbert space. For0<t≤1,{\displaystyle 0<t\leq 1,}letΩt{\displaystyle \Omega _{t}}be thecharacteristic functionof[0,t],{\displaystyle [0,t],}andPt{\displaystyle P_{t}}be themultiplication operatorgiven byΩt,{\displaystyle \Omega _{t},}that is,Pt(f)=f⋅Ωt.{\displaystyle P_{t}(f)=f\cdot \Omega _{t}.} Then eachPt{\displaystyle P_{t}}is a bounded operator with operator norm 1 and‖Pt−Ps‖op=1for allt≠s.{\displaystyle \left\|P_{t}-P_{s}\right\|_{\text{op}}=1\quad {\mbox{ for all }}\quad t\neq s.} But{Pt:0<t≤1}{\displaystyle \{P_{t}:0<t\leq 1\}}is anuncountable set. This implies the space of bounded operators onL2([0,1]){\displaystyle L^{2}([0,1])}is not separable, in operator norm. One can compare this with the fact that the sequence spaceℓ∞{\displaystyle \ell ^{\infty }}is not separable. Theassociative algebraof all bounded operators on a Hilbert space, together with the operator norm and the adjoint operation, yields aC*-algebra.
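The recipe just described, form the Hermitian operator B = A*A, estimate its spectral radius, and take the square root, can be sketched in a few lines. The pure-Python power iteration and the 2×2 example matrix are illustrative assumptions, not from the article; the power method here is the approximation scheme mentioned earlier for the ℓ2→ℓ2 norm.

```python
# A sketch of computing ||A||_op as sqrt of the spectral radius of B = A^T A,
# with the spectral radius estimated by power iteration (2x2 for brevity).

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def power_method_spectral_radius(B, iters=200):
    """Largest eigenvalue of a symmetric PSD 2x2 matrix via power iteration."""
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        w = matvec(B, v)
        lam = max(abs(w[0]), abs(w[1]))  # eigenvalue estimate
        v = [w[0] / lam, w[1] / lam]     # renormalize the iterate
    return lam

A = [[1.0, 2.0], [3.0, 4.0]]
# B = A^T A is Hermitian (symmetric) with spectral radius ||A||^2.
B = [[10.0, 14.0], [14.0, 20.0]]
op_norm = power_method_spectral_radius(B) ** 0.5
print(op_norm)  # about 5.46499, the largest singular value of A
```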
https://en.wikipedia.org/wiki/Operator_norm
In functional analysis and related areas of mathematics, a metrizable (resp. pseudometrizable) topological vector space (TVS) is a TVS whose topology is induced by a metric (resp. pseudometric). An LM-space is an inductive limit of a sequence of locally convex metrizable TVSs. A pseudometric on a set X{\displaystyle X} is a map d:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} } satisfying the following properties: d(x,x)=0{\displaystyle d(x,x)=0} for all x∈X;{\displaystyle x\in X;} symmetry: d(x,y)=d(y,x){\displaystyle d(x,y)=d(y,x)} for all x,y∈X;{\displaystyle x,y\in X;} and the triangle inequality: d(x,z)≤d(x,y)+d(y,z){\displaystyle d(x,z)\leq d(x,y)+d(y,z)} for all x,y,z∈X.{\displaystyle x,y,z\in X.} A pseudometric is called a metric if it satisfies: d(x,y)=0{\displaystyle d(x,y)=0} implies x=y{\displaystyle x=y} (identity of indiscernibles). Ultrapseudometric A pseudometric d{\displaystyle d} on X{\displaystyle X} is called an ultrapseudometric or a strong pseudometric if it satisfies the strong triangle inequality: d(x,z)≤max{d(x,y),d(y,z)}{\displaystyle d(x,z)\leq \max\{d(x,y),d(y,z)\}} for all x,y,z∈X.{\displaystyle x,y,z\in X.} Pseudometric space A pseudometric space is a pair (X,d){\displaystyle (X,d)} consisting of a set X{\displaystyle X} and a pseudometric d{\displaystyle d} on X{\displaystyle X} such that X{\displaystyle X}'s topology is identical to the topology on X{\displaystyle X} induced by d.{\displaystyle d.} We call a pseudometric space (X,d){\displaystyle (X,d)} a metric space (resp. ultrapseudometric space) when d{\displaystyle d} is a metric (resp. ultrapseudometric). If d{\displaystyle d} is a pseudometric on a set X{\displaystyle X} then the collection of open balls: Br(z):={x∈X:d(x,z)<r}{\displaystyle B_{r}(z):=\{x\in X:d(x,z)<r\}} as z{\displaystyle z} ranges over X{\displaystyle X} and r>0{\displaystyle r>0} ranges over the positive real numbers, forms a basis for a topology on X{\displaystyle X} that is called the d{\displaystyle d}-topology or the pseudometric topology on X{\displaystyle X} induced by d.{\displaystyle d.} Pseudometrizable space A topological space (X,τ){\displaystyle (X,\tau )} is called pseudometrizable (resp. metrizable, ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric) d{\displaystyle d} on X{\displaystyle X} such that τ{\displaystyle \tau } is equal to the topology induced by d.{\displaystyle d.}[1] An additive topological group is an additive group endowed with a topology, called a group topology, under which addition and negation become continuous operators.
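A concrete illustration of the pseudometric/metric distinction, assumed here for exposition: on R², the map d(x, y) = |x₁ − y₁| satisfies all the pseudometric axioms but distinct points sharing a first coordinate are at distance 0, so it is a pseudometric that is not a metric.

```python
# d((x1, x2), (y1, y2)) = |x1 - y1| on R^2: a pseudometric, not a metric.

def d(x, y):
    return abs(x[0] - y[0])

pts = [(0.0, 0.0), (0.0, 1.0), (2.0, -1.0), (-3.0, 5.0)]
for x in pts:
    assert d(x, x) == 0                          # vanishes on the diagonal
    for y in pts:
        assert d(x, y) == d(y, x)                # symmetry
        for z in pts:
            assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality

print(d((0.0, 0.0), (0.0, 1.0)))  # 0.0 for distinct points: not a metric
```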
A topology τ{\displaystyle \tau } on a real or complex vector space X{\displaystyle X} is called a vector topology or a TVS topology if it makes the operations of vector addition and scalar multiplication continuous (that is, if it makes X{\displaystyle X} into a topological vector space). Every topological vector space (TVS) X{\displaystyle X} is an additive commutative topological group but not all group topologies on X{\displaystyle X} are vector topologies. This is because despite making addition and negation continuous, a group topology on a vector space X{\displaystyle X} may fail to make scalar multiplication continuous. For instance, the discrete topology on any non-trivial vector space makes addition and negation continuous but does not make scalar multiplication continuous. If X{\displaystyle X} is an additive group then we say that a pseudometric d{\displaystyle d} on X{\displaystyle X} is translation invariant or just invariant if it satisfies any of the following equivalent conditions: d(x+z,y+z)=d(x,y){\displaystyle d(x+z,y+z)=d(x,y)} for all x,y,z∈X;{\displaystyle x,y,z\in X;} equivalently, d(x,y)=d(x−y,0){\displaystyle d(x,y)=d(x-y,0)} for all x,y∈X.{\displaystyle x,y\in X.} If X{\displaystyle X} is a topological group then a value or G-seminorm on X{\displaystyle X} (the G stands for Group) is a real-valued map p:X→R{\displaystyle p:X\rightarrow \mathbb {R} } with the following properties:[2] non-negativity, subadditivity (p(x+y)≤p(x)+p(y){\displaystyle p(x+y)\leq p(x)+p(y)}), p(0)=0,{\displaystyle p(0)=0,} and symmetry (p(−x)=p(x){\displaystyle p(-x)=p(x)}), where we call a G-seminorm a G-norm if it satisfies the additional condition: p(x)=0{\displaystyle p(x)=0} implies x=0.{\displaystyle x=0.} If p{\displaystyle p} is a value on a vector space X{\displaystyle X} then: Theorem[2]—Suppose that X{\displaystyle X} is an additive commutative group. If d{\displaystyle d} is a translation invariant pseudometric on X{\displaystyle X} then the map p(x):=d(x,0){\displaystyle p(x):=d(x,0)} is a value on X{\displaystyle X} called the value associated with d{\displaystyle d}, and moreover, d{\displaystyle d} generates a group topology on X{\displaystyle X} (i.e. the d{\displaystyle d}-topology on X{\displaystyle X} makes X{\displaystyle X} into a topological group).
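The correspondence between translation-invariant pseudometrics and values can be spot-checked numerically. A minimal sketch, assuming the translation-invariant metric d(x, y) = |x − y| on the additive group R; its associated value is p(x) := d(x, 0) = |x|.

```python
# The value p associated with a translation-invariant pseudometric d,
# checked on the additive group R with d(x, y) = |x - y|.

def d(x, y):
    return abs(x - y)

def p(x):
    return d(x, 0.0)

xs = [-3.5, -1.0, 0.0, 0.5, 2.0]
assert p(0.0) == 0.0
for x in xs:
    assert p(-x) == p(x)                         # symmetry of the value
    for y in xs:
        assert p(x + y) <= p(x) + p(y) + 1e-12   # subadditivity
        assert d(x, y) == p(x - y)               # d is recovered from p

print("value axioms verified")
```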
Conversely, ifp{\displaystyle p}is a value onX{\displaystyle X}then the mapd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}and the value associated withd{\displaystyle d}is justp.{\displaystyle p.} Theorem[2]—If(X,τ){\displaystyle (X,\tau )}is an additive commutativetopological groupthen the following are equivalent: If(X,τ){\displaystyle (X,\tau )}is Hausdorff then the word "pseudometric" in the above statement may be replaced by the word "metric." A commutative topological group is metrizable if and only if it is Hausdorff and pseudometrizable. LetX{\displaystyle X}be a non-trivial (i.e.X≠{0}{\displaystyle X\neq \{0\}}) real or complex vector space and letd{\displaystyle d}be the translation-invarianttrivial metriconX{\displaystyle X}defined byd(x,x)=0{\displaystyle d(x,x)=0}andd(x,y)=1for allx,y∈X{\displaystyle d(x,y)=1{\text{ for all }}x,y\in X}such thatx≠y.{\displaystyle x\neq y.}The topologyτ{\displaystyle \tau }thatd{\displaystyle d}induces onX{\displaystyle X}is thediscrete topology, which makes(X,τ){\displaystyle (X,\tau )}into a commutative topological group under addition but doesnotform a vector topology onX{\displaystyle X}because(X,τ){\displaystyle (X,\tau )}isdisconnectedbut every vector topology is connected. What fails is that scalar multiplication isn't continuous on(X,τ).{\displaystyle (X,\tau ).} This example shows that a translation-invariant (pseudo)metric isnotenough to guarantee a vector topology, which leads us to define paranorms andF-seminorms. 
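The failure described above can be made concrete: with the trivial metric on a real vector space (here R itself), the scalars sₙ = 1/n tend to 0, yet d(sₙ·x, 0) stays equal to 1 for any fixed x ≠ 0, so scalar multiplication is not continuous.

```python
# The trivial metric on R induces the discrete topology; scalar
# multiplication fails to be continuous at (0, x) for x != 0.

def d(x, y):
    return 0.0 if x == y else 1.0

x = 1.0
distances = [d((1.0 / n) * x, 0.0) for n in range(1, 6)]
print(distances)  # [1.0, 1.0, 1.0, 1.0, 1.0] -- never approaches 0
```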
A collection N{\displaystyle {\mathcal {N}}} of subsets of a vector space is called additive[5] if for every N∈N,{\displaystyle N\in {\mathcal {N}},} there exists some U∈N{\displaystyle U\in {\mathcal {N}}} such that U+U⊆N.{\displaystyle U+U\subseteq N.} Continuity of addition at 0—If (X,+){\displaystyle (X,+)} is a group (as all vector spaces are), τ{\displaystyle \tau } is a topology on X,{\displaystyle X,} and X×X{\displaystyle X\times X} is endowed with the product topology, then the addition map X×X→X{\displaystyle X\times X\to X} (i.e. the map (x,y)↦x+y{\displaystyle (x,y)\mapsto x+y}) is continuous at the origin of X×X{\displaystyle X\times X} if and only if the set of neighborhoods of the origin in (X,τ){\displaystyle (X,\tau )} is additive. This statement remains true if the word "neighborhood" is replaced by "open neighborhood."[5] All of the above conditions are consequently necessary for a topology to form a vector topology. Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions. These functions can then be used to prove many of the basic properties of topological vector spaces and also to show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additive topological groups.
Theorem—LetU∙=(Ui)i=0∞{\displaystyle U_{\bullet }=\left(U_{i}\right)_{i=0}^{\infty }}be a collection of subsets of a vector space such that0∈Ui{\displaystyle 0\in U_{i}}andUi+1+Ui+1⊆Ui{\displaystyle U_{i+1}+U_{i+1}\subseteq U_{i}}for alli≥0.{\displaystyle i\geq 0.}For allu∈U0,{\displaystyle u\in U_{0},}letS(u):={n∙=(n1,…,nk):k≥1,ni≥0for alli,andu∈Un1+⋯+Unk}.{\displaystyle \mathbb {S} (u):=\left\{n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)~:~k\geq 1,n_{i}\geq 0{\text{ for all }}i,{\text{ and }}u\in U_{n_{1}}+\cdots +U_{n_{k}}\right\}.} Definef:X→[0,1]{\displaystyle f:X\to [0,1]}byf(x)=1{\displaystyle f(x)=1}ifx∉U0{\displaystyle x\not \in U_{0}}and otherwise letf(x):=inf{2−n1+⋯2−nk:n∙=(n1,…,nk)∈S(x)}.{\displaystyle f(x):=\inf _{}\left\{2^{-n_{1}}+\cdots 2^{-n_{k}}~:~n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)\in \mathbb {S} (x)\right\}.} Thenf{\displaystyle f}issubadditive(meaningf(x+y)≤f(x)+f(y)for allx,y∈X{\displaystyle f(x+y)\leq f(x)+f(y){\text{ for all }}x,y\in X}) andf=0{\displaystyle f=0}on⋂i≥0Ui,{\displaystyle \bigcap _{i\geq 0}U_{i},}so in particularf(0)=0.{\displaystyle f(0)=0.}If allUi{\displaystyle U_{i}}aresymmetric setsthenf(−x)=f(x){\displaystyle f(-x)=f(x)}and if allUi{\displaystyle U_{i}}are balanced thenf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}and allx∈X.{\displaystyle x\in X.}IfX{\displaystyle X}is a topological vector space and if allUi{\displaystyle U_{i}}are neighborhoods of the origin thenf{\displaystyle f}is continuous, where if in additionX{\displaystyle X}is Hausdorff andU∙{\displaystyle U_{\bullet }}forms a basis of balanced neighborhoods of the origin inX{\displaystyle X}thend(x,y):=f(x−y){\displaystyle d(x,y):=f(x-y)}is a metric defining the vector topology onX.{\displaystyle X.} Assume thatn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}always denotes a finite sequence of non-negative integers and use the 
notation:∑2−n∙:=2−n1+⋯+2−nkand∑Un∙:=Un1+⋯+Unk.{\displaystyle \sum 2^{-n_{\bullet }}:=2^{-n_{1}}+\cdots +2^{-n_{k}}\quad {\text{ and }}\quad \sum U_{n_{\bullet }}:=U_{n_{1}}+\cdots +U_{n_{k}}.} For any integersn≥0{\displaystyle n\geq 0}andd>2,{\displaystyle d>2,}Un⊇Un+1+Un+1⊇Un+1+Un+2+Un+2⊇Un+1+Un+2+⋯+Un+d+Un+d+1+Un+d+1.{\displaystyle U_{n}\supseteq U_{n+1}+U_{n+1}\supseteq U_{n+1}+U_{n+2}+U_{n+2}\supseteq U_{n+1}+U_{n+2}+\cdots +U_{n+d}+U_{n+d+1}+U_{n+d+1}.} From this it follows that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of distinct positive integers then∑Un∙⊆U−1+min(n∙).{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{-1+\min \left(n_{\bullet }\right)}.} It will now be shown by induction onk{\displaystyle k}that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of non-negative integers such that∑2−n∙≤2−M{\displaystyle \sum 2^{-n_{\bullet }}\leq 2^{-M}}for some integerM≥0{\displaystyle M\geq 0}then∑Un∙⊆UM.{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{M}.}This is clearly true fork=1{\displaystyle k=1}andk=2{\displaystyle k=2}so assume thatk>2,{\displaystyle k>2,}which implies that allni{\displaystyle n_{i}}are positive. If allni{\displaystyle n_{i}}are distinct then this step is done, and otherwise pick distinct indicesi<j{\displaystyle i<j}such thatni=nj{\displaystyle n_{i}=n_{j}}and constructm∙=(m1,…,mk−1){\displaystyle m_{\bullet }=\left(m_{1},\ldots ,m_{k-1}\right)}fromn∙{\displaystyle n_{\bullet }}by replacing eachni{\displaystyle n_{i}}withni−1{\displaystyle n_{i}-1}and deleting thejth{\displaystyle j^{\text{th}}}element ofn∙{\displaystyle n_{\bullet }}(all other elements ofn∙{\displaystyle n_{\bullet }}are transferred tom∙{\displaystyle m_{\bullet }}unchanged). 
Observe that∑2−n∙=∑2−m∙{\displaystyle \sum 2^{-n_{\bullet }}=\sum 2^{-m_{\bullet }}}and∑Un∙⊆∑Um∙{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}}(becauseUni+Unj⊆Uni−1{\displaystyle U_{n_{i}}+U_{n_{j}}\subseteq U_{n_{i}-1}}) so by appealing to the inductive hypothesis we conclude that∑Un∙⊆∑Um∙⊆UM,{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}\subseteq U_{M},}as desired. It is clear thatf(0)=0{\displaystyle f(0)=0}and that0≤f≤1{\displaystyle 0\leq f\leq 1}so to prove thatf{\displaystyle f}is subadditive, it suffices to prove thatf(x+y)≤f(x)+f(y){\displaystyle f(x+y)\leq f(x)+f(y)}whenx,y∈X{\displaystyle x,y\in X}are such thatf(x)+f(y)<1,{\displaystyle f(x)+f(y)<1,}which implies thatx,y∈U0.{\displaystyle x,y\in U_{0}.}This is an exercise. If allUi{\displaystyle U_{i}}are symmetric thenx∈∑Un∙{\displaystyle x\in \sum U_{n_{\bullet }}}if and only if−x∈∑Un∙{\displaystyle -x\in \sum U_{n_{\bullet }}}from which it follows thatf(−x)≤f(x){\displaystyle f(-x)\leq f(x)}andf(−x)≥f(x).{\displaystyle f(-x)\geq f(x).}If allUi{\displaystyle U_{i}}are balanced then the inequalityf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all unit scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}is proved similarly. Becausef{\displaystyle f}is a nonnegative subadditive function satisfyingf(0)=0,{\displaystyle f(0)=0,}as described in the article onsublinear functionals,f{\displaystyle f}is uniformly continuous onX{\displaystyle X}if and only iff{\displaystyle f}is continuous at the origin. 
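A concrete instance of this construction can be spot-checked. Assume, for illustration only, X = R with Uᵢ = (−2⁻ⁱ, 2⁻ⁱ), so that Uᵢ₊₁ + Uᵢ₊₁ = Uᵢ; the infimum defining f then works out to f(x) = min(|x|, 1), and the properties promised by the theorem hold.

```python
# f(x) = min(|x|, 1): the function produced by the theorem's construction
# for the additive sequence U_i = (-2^{-i}, 2^{-i}) in R.

def f(x):
    return min(abs(x), 1.0)

xs = [-2.0, -0.75, -0.25, 0.0, 0.1, 0.5, 1.5]
assert f(0.0) == 0.0
for x in xs:
    assert f(-x) == f(x)                             # the U_i are symmetric
    for y in xs:
        assert f(x + y) <= f(x) + f(y) + 1e-12       # subadditivity
    for s in [-1.0, -0.5, 0.0, 0.25, 1.0]:           # the U_i are balanced
        assert f(s * x) <= f(x)                      # so |s| <= 1 cannot increase f

print("subadditivity, symmetry and balancedness verified")
```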
If allUi{\displaystyle U_{i}}are neighborhoods of the origin then for any realr>0,{\displaystyle r>0,}pick an integerM>1{\displaystyle M>1}such that2−M<r{\displaystyle 2^{-M}<r}so thatx∈UM{\displaystyle x\in U_{M}}impliesf(x)≤2−M<r.{\displaystyle f(x)\leq 2^{-M}<r.}If the set of allUi{\displaystyle U_{i}}form basis of balanced neighborhoods of the origin then it may be shown that for anyn>1,{\displaystyle n>1,}there exists some0<r≤2−n{\displaystyle 0<r\leq 2^{-n}}such thatf(x)<r{\displaystyle f(x)<r}impliesx∈Un.{\displaystyle x\in U_{n}.}◼{\displaystyle \blacksquare } IfX{\displaystyle X}is a vector space over the real or complex numbers then aparanormonX{\displaystyle X}is a G-seminorm (defined above)p:X→R{\displaystyle p:X\rightarrow \mathbb {R} }onX{\displaystyle X}that satisfies any of the following additional conditions, each of which begins with "for all sequencesx∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}and all convergent sequences of scalarss∙=(si)i=1∞{\displaystyle s_{\bullet }=\left(s_{i}\right)_{i=1}^{\infty }}":[6] A paranorm is calledtotalif in addition it satisfies: Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then the mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}that defines avector topologyonX.{\displaystyle X.}[8] Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then: IfX{\displaystyle X}is a vector space over the real or complex numbers then anF-seminormonX{\displaystyle X}(theF{\displaystyle F}stands forFréchet) is a real-valued mapp:X→R{\displaystyle p:X\to \mathbb {R} }with the following four properties:[11] AnF-seminorm is called anF-normif in addition it satisfies: AnF-seminorm is calledmonotoneif it satisfies: AnF-seminormed space(resp.F-normed space)[12]is a pair(X,p){\displaystyle (X,p)}consisting of a vector 
space X{\displaystyle X} and an F-seminorm (resp. F-norm) p{\displaystyle p} on X.{\displaystyle X.} If (X,p){\displaystyle (X,p)} and (Z,q){\displaystyle (Z,q)} are F-seminormed spaces then a map f:X→Z{\displaystyle f:X\to Z} is called an isometric embedding[12] if q(f(x)−f(y))=p(x−y)for allx,y∈X.{\displaystyle q(f(x)-f(y))=p(x-y){\text{ for all }}x,y\in X.} Every isometric embedding of one F-seminormed space into another is a topological embedding, but the converse is not true in general.[12] Every F-seminorm is a paranorm and every paranorm is equivalent to some F-seminorm.[7] Every F-seminorm on a vector space X{\displaystyle X} is a value on X.{\displaystyle X.} In particular, p(0)=0,{\displaystyle p(0)=0,} and p(x)=p(−x){\displaystyle p(x)=p(-x)} for all x∈X.{\displaystyle x\in X.} Theorem[11]—Let p{\displaystyle p} be an F-seminorm on a vector space X.{\displaystyle X.} Then the map d:X×X→R{\displaystyle d:X\times X\to \mathbb {R} } defined by d(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)} is a translation invariant pseudometric on X{\displaystyle X} that defines a vector topology τ{\displaystyle \tau } on X.{\displaystyle X.} If p{\displaystyle p} is an F-norm then d{\displaystyle d} is a metric. When X{\displaystyle X} is endowed with this topology then p{\displaystyle p} is a continuous map on X.{\displaystyle X.} The balanced sets {x∈X:p(x)≤r},{\displaystyle \{x\in X~:~p(x)\leq r\},} as r{\displaystyle r} ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of closed sets. Similarly, the balanced sets {x∈X:p(x)<r},{\displaystyle \{x\in X~:~p(x)<r\},} as r{\displaystyle r} ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of open sets.
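A standard example, assumed here as a runnable sketch: on X = R the map p(x) = |x| / (1 + |x|) is an F-norm, and d(x, y) = p(x − y) is the translation-invariant metric it induces (it defines the usual topology on R, although p is not a norm, since it is not homogeneous).

```python
# p(x) = |x| / (1 + |x|): an F-norm on R, with its induced invariant metric.

def p(x):
    return abs(x) / (1.0 + abs(x))

def d(x, y):
    return p(x - y)

xs = [-4.0, -1.0, 0.0, 0.5, 3.0]
for x in xs:
    for y in xs:
        assert d(x, y) == d(x + 1.0, y + 1.0)        # translation invariance
        assert p(x + y) <= p(x) + p(y) + 1e-12       # subadditivity
    for s in [-1.0, -0.5, 0.25, 1.0]:
        assert p(s * x) <= p(x) + 1e-12              # |s| <= 1 does not increase p

# scalar multiplication is continuous: p(x / n) -> 0 as n grows
print([round(p(3.0 / n), 4) for n in (1, 10, 100, 1000)])  # values shrink toward 0
```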
Suppose thatL{\displaystyle {\mathcal {L}}}is a non-empty collection ofF-seminorms on a vector spaceX{\displaystyle X}and for any finite subsetF⊆L{\displaystyle {\mathcal {F}}\subseteq {\mathcal {L}}}and anyr>0,{\displaystyle r>0,}letUF,r:=⋂p∈F{x∈X:p(x)<r}.{\displaystyle U_{{\mathcal {F}},r}:=\bigcap _{p\in {\mathcal {F}}}\{x\in X:p(x)<r\}.} The set{UF,r:r>0,F⊆L,Ffinite}{\displaystyle \left\{U_{{\mathcal {F}},r}~:~r>0,{\mathcal {F}}\subseteq {\mathcal {L}},{\mathcal {F}}{\text{ finite }}\right\}}forms a filter base onX{\displaystyle X}that also forms a neighborhood basis at the origin for a vector topology onX{\displaystyle X}denoted byτL.{\displaystyle \tau _{\mathcal {L}}.}[12]EachUF,r{\displaystyle U_{{\mathcal {F}},r}}is abalancedandabsorbingsubset ofX.{\displaystyle X.}[12]These sets satisfy[12]UF,r/2+UF,r/2⊆UF,r.{\displaystyle U_{{\mathcal {F}},r/2}+U_{{\mathcal {F}},r/2}\subseteq U_{{\mathcal {F}},r}.} Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negative subadditive functions on a vector spaceX.{\displaystyle X.} TheFréchet combination[8]ofp∙{\displaystyle p_{\bullet }}is defined to be the real-valued mapp(x):=∑i=1∞pi(x)2i[1+pi(x)].{\displaystyle p(x):=\sum _{i=1}^{\infty }{\frac {p_{i}(x)}{2^{i}\left[1+p_{i}(x)\right]}}.} Assume thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is an increasing sequence of seminorms onX{\displaystyle X}and letp{\displaystyle p}be the Fréchet combination ofp∙.{\displaystyle p_{\bullet }.}Thenp{\displaystyle p}is anF-seminorm onX{\displaystyle X}that induces the same locally convex topology as the familyp∙{\displaystyle p_{\bullet }}of seminorms.[13] Sincep∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is increasing, a basis of open neighborhoods of the origin consists of all sets of the form{x∈X:pi(x)<r}{\displaystyle \left\{x\in X~:~p_{i}(x)<r\right\}}asi{\displaystyle i}ranges over all positive integers 
andr>0{\displaystyle r>0}ranges over all positive real numbers. Thetranslation invariantpseudometriconX{\displaystyle X}induced by thisF-seminormp{\displaystyle p}isd(x,y)=∑i=1∞12ipi(x−y)1+pi(x−y).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {p_{i}(x-y)}{1+p_{i}(x-y)}}.} This metric was discovered byFréchetin his 1906 thesis for the spaces of real and complex sequences with pointwise operations.[14] If eachpi{\displaystyle p_{i}}is a paranorm then so isp{\displaystyle p}and moreover,p{\displaystyle p}induces the same topology onX{\displaystyle X}as the familyp∙{\displaystyle p_{\bullet }}of paranorms.[8]This is also true of the following paranorms onX{\displaystyle X}: The Fréchet combination can be generalized by use of a bounded remetrization function. Abounded remetrization function[15]is a continuous non-negative non-decreasing mapR:[0,∞)→[0,∞){\displaystyle R:[0,\infty )\to [0,\infty )}that has a bounded range, issubadditive(meaning thatR(s+t)≤R(s)+R(t){\displaystyle R(s+t)\leq R(s)+R(t)}for alls,t≥0{\displaystyle s,t\geq 0}), and satisfiesR(s)=0{\displaystyle R(s)=0}if and only ifs=0.{\displaystyle s=0.} Examples of bounded remetrization functions includearctan⁡t,{\displaystyle \arctan t,}tanh⁡t,{\displaystyle \tanh t,}t↦min{t,1},{\displaystyle t\mapsto \min\{t,1\},}andt↦t1+t.{\displaystyle t\mapsto {\frac {t}{1+t}}.}[15]Ifd{\displaystyle d}is a pseudometric (respectively, metric) onX{\displaystyle X}andR{\displaystyle R}is a bounded remetrization function thenR∘d{\displaystyle R\circ d}is a bounded pseudometric (respectively, bounded metric) onX{\displaystyle X}that is uniformly equivalent tod.{\displaystyle d.}[15] Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negativeF-seminorm on a vector spaceX,{\displaystyle X,}R{\displaystyle R}is a bounded remetrization function, andr∙=(ri)i=1∞{\displaystyle r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }}is a sequence of positive real 
numbers whose sum is finite. Thenp(x):=∑i=1∞riR(pi(x)){\displaystyle p(x):=\sum _{i=1}^{\infty }r_{i}R\left(p_{i}(x)\right)}defines a boundedF-seminorm that is uniformly equivalent to thep∙.{\displaystyle p_{\bullet }.}[16]It has the property that for any netx∙=(xa)a∈A{\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}inX,{\displaystyle X,}p(x∙)→0{\displaystyle p\left(x_{\bullet }\right)\to 0}if and only ifpi(x∙)→0{\displaystyle p_{i}\left(x_{\bullet }\right)\to 0}for alli.{\displaystyle i.}[16]p{\displaystyle p}is anF-norm if and only if thep∙{\displaystyle p_{\bullet }}separate points onX.{\displaystyle X.}[16] A pseudometric (resp. metric)d{\displaystyle d}is induced by a seminorm (resp. norm) on a vector spaceX{\displaystyle X}if and only ifd{\displaystyle d}is translation invariant andabsolutely homogeneous, which means that for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function defined byp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a seminorm (resp. norm) and the pseudometric (resp. metric) induced byp{\displaystyle p}is equal tod.{\displaystyle d.} If(X,τ){\displaystyle (X,\tau )}is atopological vector space(TVS) (where note in particular thatτ{\displaystyle \tau }is assumed to be a vector topology) then the following are equivalent:[11] If(X,τ){\displaystyle (X,\tau )}is a TVS then the following are equivalent: Birkhoff–Kakutani theorem—If(X,τ){\displaystyle (X,\tau )}is a topological vector space then the following three conditions are equivalent:[17][note 1] By the Birkhoff–Kakutani theorem, it follows that there is anequivalent metricthat is translation-invariant. 
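The Fréchet combination introduced above can be spot-checked numerically. A sketch, assuming the increasing seminorms pᵢ(x) = max(|x₁|, …, |xᵢ|) on finitely supported real sequences (chosen so the infinite sum has a closed-form tail, since pᵢ is constant once i exceeds the support).

```python
# Frechet combination p(x) = sum_i 2^{-i} p_i(x) / (1 + p_i(x)) for the
# increasing seminorms p_i(x) = max(|x_1|, ..., |x_i|), evaluated on a
# finitely supported sequence represented as a tuple of length k.

def frechet_combination(x):
    k = len(x)
    total, running_max = 0.0, 0.0
    for i in range(1, k + 1):
        running_max = max(running_max, abs(x[i - 1]))       # p_i(x)
        total += (running_max / (1.0 + running_max)) / 2 ** i
    # tail: p_i(x) = p_k(x) for all i > k, and sum_{i > k} 2^{-i} = 2^{-k}
    total += (running_max / (1.0 + running_max)) / 2 ** k
    return total

x, y = (1.0, -2.0, 0.5), (0.0, 1.0, 1.0)
p = frechet_combination
assert p((0.0,)) == 0.0
assert p(tuple(-t for t in x)) == p(x)            # symmetry
xy = tuple(a + b for a, b in zip(x, y))
assert p(xy) <= p(x) + p(y) + 1e-12               # subadditivity
print(round(p(x), 6))  # 0.583333
```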
If(X,τ){\displaystyle (X,\tau )}is TVS then the following are equivalent:[13] LetM{\displaystyle M}be a vector subspace of a topological vector space(X,τ).{\displaystyle (X,\tau ).} IfX{\displaystyle X}is Hausdorff locally convex TVS thenX{\displaystyle X}with thestrong topology,(X,b(X,X′)),{\displaystyle \left(X,b\left(X,X^{\prime }\right)\right),}is metrizable if and only if there exists a countable setB{\displaystyle {\mathcal {B}}}of bounded subsets ofX{\displaystyle X}such that every bounded subset ofX{\displaystyle X}is contained in some element ofB.{\displaystyle {\mathcal {B}}.}[22] Thestrong dual spaceXb′{\displaystyle X_{b}^{\prime }}of a metrizable locally convex space (such as aFréchet space[23])X{\displaystyle X}is aDF-space.[24]The strong dual of a DF-space is aFréchet space.[25]The strong dual of areflexiveFréchet space is abornological space.[24]The strong bidual (that is, thestrong dual spaceof the strong dual space) of a metrizable locally convex space is a Fréchet space.[26]IfX{\displaystyle X}is a metrizable locally convex space then its strong dualXb′{\displaystyle X_{b}^{\prime }}has one of the following properties, if and only if it has all of these properties: (1)bornological, (2)infrabarreled, (3)barreled.[26] A topological vector space isseminormableif and only if it has aconvexbounded neighborhood of the origin. Moreover, a TVS isnormableif and only if it isHausdorffand seminormable.[14]Every metrizable TVS on a finite-dimensionalvector space is a normablelocally convexcomplete TVS, beingTVS-isomorphictoEuclidean space. Consequently, any metrizable TVS that isnotnormable must be infinite dimensional. 
IfM{\displaystyle M}is a metrizablelocally convex TVSthat possess acountablefundamental system of bounded sets, thenM{\displaystyle M}is normable.[27] IfX{\displaystyle X}is a Hausdorfflocally convex spacethen the following are equivalent: and if this locally convex spaceX{\displaystyle X}is also metrizable, then the following may be appended to this list: In particular, if a metrizable locally convex spaceX{\displaystyle X}(such as aFréchet space) isnotnormable then itsstrong dual spaceXb′{\displaystyle X_{b}^{\prime }}is not aFréchet–Urysohn spaceand consequently, thiscompleteHausdorff locally convex spaceXb′{\displaystyle X_{b}^{\prime }}is also neither metrizable nor normable. Another consequence of this is that ifX{\displaystyle X}is areflexivelocally convexTVS whose strong dualXb′{\displaystyle X_{b}^{\prime }}is metrizable thenXb′{\displaystyle X_{b}^{\prime }}is necessarily a reflexive Fréchet space,X{\displaystyle X}is aDF-space, bothX{\displaystyle X}andXb′{\displaystyle X_{b}^{\prime }}are necessarilycompleteHausdorffultrabornologicaldistinguishedwebbed spaces, and moreover,Xb′{\displaystyle X_{b}^{\prime }}is normable if and only ifX{\displaystyle X}is normable if and only ifX{\displaystyle X}is Fréchet–Urysohn if and only ifX{\displaystyle X}is metrizable. In particular, such a spaceX{\displaystyle X}is either aBanach spaceor else it is not even a Fréchet–Urysohn space. 
Suppose that (X,d){\displaystyle (X,d)} is a pseudometric space and B⊆X.{\displaystyle B\subseteq X.} The set B{\displaystyle B} is metrically bounded or d{\displaystyle d}-bounded if there exists a real number R>0{\displaystyle R>0} such that d(x,y)≤R{\displaystyle d(x,y)\leq R} for all x,y∈B{\displaystyle x,y\in B}; the smallest such R{\displaystyle R} is then called the diameter or d{\displaystyle d}-diameter of B.{\displaystyle B.}[14] If B{\displaystyle B} is bounded in a pseudometrizable TVS X{\displaystyle X} then it is metrically bounded; the converse is in general false, but it is true for locally convex metrizable TVSs.[14] Theorem[29]—All infinite-dimensional separable complete metrizable TVSs are homeomorphic. Every topological vector space (and more generally, a topological group) has a canonical uniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it. If X{\displaystyle X} is a metrizable TVS and d{\displaystyle d} is a metric that defines X{\displaystyle X}'s topology, then it is possible that X{\displaystyle X} is complete as a TVS (i.e. relative to its uniformity) but the metric d{\displaystyle d} is not a complete metric (such metrics exist even for X=R{\displaystyle X=\mathbb {R} }). Thus, if X{\displaystyle X} is a TVS whose topology is induced by a pseudometric d,{\displaystyle d,} then the notion of completeness of X{\displaystyle X} (as a TVS) and the notion of completeness of the pseudometric space (X,d){\displaystyle (X,d)} are not always equivalent.
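The caveat for X = R can be made concrete. A standard example, stated here as an assumption: d(x, y) = |arctan(x) − arctan(y)| is a metric on R inducing the usual topology, and R is complete as a TVS; yet xₙ = n is d-Cauchy with no limit in R, so (R, d) is not a complete metric space. Note that this d is not translation invariant, which is exactly the hypothesis the theorem below imposes.

```python
import math

# d(x, y) = |arctan x - arctan y|: induces the usual topology on R, but the
# sequence x_n = n is d-Cauchy (all tail terms crowd toward pi/2) without
# converging, so d is not a complete metric even though R is a complete TVS.

def d(x, y):
    return abs(math.atan(x) - math.atan(y))

gaps = [d(10 ** k, 10 ** (k + 1)) for k in range(1, 5)]
print(gaps)  # shrinking toward 0, yet x_n = n has no limit in R
```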
The next theorem gives a condition for when they are equivalent: Theorem—If X{\displaystyle X} is a pseudometrizable TVS whose topology is induced by a translation invariant pseudometric d,{\displaystyle d,} then d{\displaystyle d} is a complete pseudometric on X{\displaystyle X} if and only if X{\displaystyle X} is complete as a TVS.[36] Theorem[37][38](Klee)—Let d{\displaystyle d} be any[note 2] metric on a vector space X{\displaystyle X} such that the topology τ{\displaystyle \tau } induced by d{\displaystyle d} on X{\displaystyle X} makes (X,τ){\displaystyle (X,\tau )} into a topological vector space. If (X,d){\displaystyle (X,d)} is a complete metric space then (X,τ){\displaystyle (X,\tau )} is a complete TVS. Theorem—If X{\displaystyle X} is a TVS whose topology is induced by a paranorm p,{\displaystyle p,} then X{\displaystyle X} is complete if and only if for every sequence (xi)i=1∞{\displaystyle \left(x_{i}\right)_{i=1}^{\infty }} in X,{\displaystyle X,} if ∑i=1∞p(xi)<∞{\displaystyle \sum _{i=1}^{\infty }p\left(x_{i}\right)<\infty } then ∑i=1∞xi{\displaystyle \sum _{i=1}^{\infty }x_{i}} converges in X.{\displaystyle X.}[39] If M{\displaystyle M} is a closed vector subspace of a complete pseudometrizable TVS X,{\displaystyle X,} then the quotient space X/M{\displaystyle X/M} is complete.[40] If M{\displaystyle M} is a complete vector subspace of a metrizable TVS X{\displaystyle X} and if the quotient space X/M{\displaystyle X/M} is complete then so is X.{\displaystyle X.}[40] If X{\displaystyle X} is not complete then M:=X{\displaystyle M:=X} is a closed, but not complete, vector subspace of X.{\displaystyle X.} A Baire separable topological group is metrizable if and only if it is cosmic.[23] Banach–Saks theorem[45]—If (xn)n=1∞{\displaystyle \left(x_{n}\right)_{n=1}^{\infty }} is a sequence in a locally convex metrizable TVS (X,τ){\displaystyle (X,\tau )} that converges weakly to some x∈X,{\displaystyle x\in X,} then there exists a sequence y∙=(yi)i=1∞{\displaystyle y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }} in X{\displaystyle X} such
thaty∙→x{\displaystyle y_{\bullet }\to x}in(X,τ){\displaystyle (X,\tau )}and eachyi{\displaystyle y_{i}}is a convex combination of finitely manyxn.{\displaystyle x_{n}.} Mackey's countability condition[14]—Suppose thatX{\displaystyle X}is a locally convex metrizable TVS and that(Bi)i=1∞{\displaystyle \left(B_{i}\right)_{i=1}^{\infty }}is a countable sequence of bounded subsets ofX.{\displaystyle X.}Then there exists a bounded subsetB{\displaystyle B}ofX{\displaystyle X}and a sequence(ri)i=1∞{\displaystyle \left(r_{i}\right)_{i=1}^{\infty }}of positive real numbers such thatBi⊆riB{\displaystyle B_{i}\subseteq r_{i}B}for alli.{\displaystyle i.} Generalized series As describedin this article's section on generalized series, for anyI{\displaystyle I}-indexed familyfamily(ri)i∈I{\displaystyle \left(r_{i}\right)_{i\in I}}of vectors from a TVSX,{\displaystyle X,}it is possible to define their sum∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}as the limit of thenetof finite partial sumsF∈FiniteSubsets⁡(I)↦∑i∈Fri{\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}}where the domainFiniteSubsets⁡(I){\displaystyle \operatorname {FiniteSubsets} (I)}isdirectedby⊆.{\displaystyle \,\subseteq .\,}IfI=N{\displaystyle I=\mathbb {N} }andX=R,{\displaystyle X=\mathbb {R} ,}for instance, then the generalized series∑i∈Nri{\displaystyle \textstyle \sum \limits _{i\in \mathbb {N} }r_{i}}converges if and only if∑i=1∞ri{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}}converges unconditionallyin the usual sense (which for real numbers,is equivalenttoabsolute convergence). 
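The unconditional-convergence equivalence just described can be illustrated numerically. A sketch, not a proof, using rᵢ = 2⁻ⁱ over I = N: the net of finite partial sums converges to 1, and every enumeration of the index set yields the same limit.

```python
# Order-independence of an unconditionally convergent generalized series:
# finite partial sums of r_i = 2^{-i} agree regardless of enumeration.

def partial_sum(indices):
    return sum(2.0 ** -i for i in indices)

forward = partial_sum(range(1, 40))
shuffled = partial_sum(reversed(range(1, 40)))                       # reversed enumeration
interleaved = partial_sum(list(range(2, 40, 2)) + list(range(1, 40, 2)))  # evens then odds

print(forward, shuffled, interleaved)  # all approximately 1.0
```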
If a generalized series ∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} converges in a metrizable TVS, then the set {i∈I:ri≠0}{\displaystyle \left\{i\in I:r_{i}\neq 0\right\}} is necessarily countable (that is, either finite or countably infinite);[proof 1] in other words, all but at most countably many ri{\displaystyle r_{i}} will be zero and so this generalized series ∑i∈Iri=∑ri≠0i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}} is actually a sum of at most countably many non-zero terms. If X{\displaystyle X} is a pseudometrizable TVS and the linear map A:X→Y{\displaystyle A:X\to Y} maps bounded subsets of X{\displaystyle X} to bounded subsets of Y,{\displaystyle Y,} then A{\displaystyle A} is continuous.[14] Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS.[46] Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to its algebraic dual space.[46] If F:X→Y{\displaystyle F:X\to Y} is a linear map between TVSs and X{\displaystyle X} is metrizable then the following are equivalent: Open and almost open maps A vector subspace M{\displaystyle M} of a TVS X{\displaystyle X} has the extension property if any continuous linear functional on M{\displaystyle M} can be extended to a continuous linear functional on X.{\displaystyle X.}[22] Say that a TVS X{\displaystyle X} has the Hahn–Banach extension property (HBEP) if every vector subspace of X{\displaystyle X} has the extension property.[22] The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable TVSs there is a converse: Theorem (Kalton)—Every complete metrizable TVS with the Hahn–Banach extension property is locally convex.[22] If a vector space X{\displaystyle X} has uncountable dimension and if we endow it with the finest vector topology then this is a TVS with the HBEP that is neither locally convex nor metrizable.[22]
https://en.wikipedia.org/wiki/Paranorm
Inmathematics, ametric spaceis asettogether with a notion ofdistancebetween itselements, usually calledpoints. The distance is measured by afunctioncalled ametricordistance function.[1]Metric spaces are a general setting for studying many of the concepts ofmathematical analysisandgeometry. The most familiar example of a metric space is3-dimensional Euclidean spacewith its usual notion of distance. Other well-known examples are asphereequipped with theangular distanceand thehyperbolic plane. A metric may correspond to ametaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with theHamming distance, which measures the number of characters that need to be changed to get from one string to another. Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, includingRiemannian manifolds,normed vector spaces, andgraphs. Inabstract algebra, thep-adic numbersarise as elements of thecompletionof a metric structure on therational numbers. Metric spaces are also studied in their own right inmetric geometry[2]andanalysis on metric spaces.[3] Many of the basic notions ofmathematical analysis, includingballs,completeness, as well asuniform,Lipschitz, andHölder continuity, can be defined in the setting of metric spaces. Other notions, such ascontinuity,compactness, andopenandclosed sets, can be defined for metric spaces, but also in the even more general setting oftopological spaces. To see the utility of different notions of distance, consider thesurface of the Earthas a set of points. We can measure the distance between two such points by the length of theshortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. 
We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points.

The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts.

Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves).

Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d : M × M → ℝ satisfying the following axioms for all points x, y, z ∈ M:[4][5]

1. The distance from a point to itself is zero: d(x, x) = 0.
2. (Positivity) The distance between two distinct points is always positive: if x ≠ y, then d(x, y) > 0.
3. (Symmetry) d(x, y) = d(y, x).
4. (Triangle inequality) d(x, z) ≤ d(x, y) + d(y, z).

If the metric d is unambiguous, one often refers by abuse of notation to "the metric space M".

By taking all axioms except the second, one can show that distance is always non-negative: 0 = d(x, x) ≤ d(x, y) + d(y, x) = 2d(x, y). Therefore the second axiom can be weakened to "if x ≠ y, then d(x, y) ≠ 0" and combined with the first to make d(x, y) = 0 ⟺ x = y.[6]

The real numbers with the distance function d(x, y) = |y − x| given by the absolute difference form a metric space.
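The metric axioms (zero self-distance, positivity, symmetry, and the triangle inequality) can be spot-checked mechanically on a finite sample of points. As a rough sketch (my own illustration; the helper name `check_metric_axioms` is hypothetical), applied to the absolute-difference metric on the reals:

```python
import itertools

def check_metric_axioms(d, points, tol=1e-12):
    """Spot-check the metric axioms on a finite sample of points."""
    for x, y, z in itertools.product(points, repeat=3):
        assert abs(d(x, x)) <= tol                  # d(x, x) = 0
        if x != y:
            assert d(x, y) > 0                      # positivity
        assert abs(d(x, y) - d(y, x)) <= tol        # symmetry
        assert d(x, z) <= d(x, y) + d(y, z) + tol   # triangle inequality
    return True

# The absolute-difference metric on the reals passes on a sample:
print(check_metric_axioms(lambda x, y: abs(x - y), [-2.0, 0.0, 1.5, 3.0]))
```

Of course a finite check proves nothing about the whole space; it only illustrates what the axioms require.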
Many properties of metric spaces and functions between them are generalizations of concepts inreal analysisand coincide with those concepts when applied to the real line. The Euclidean planeR2{\displaystyle \mathbb {R} ^{2}}can be equipped with many different metrics. TheEuclidean distancefamiliar from school mathematics can be defined byd2((x1,y1),(x2,y2))=(x2−x1)2+(y2−y1)2.{\displaystyle d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.} ThetaxicaborManhattandistanceis defined byd1((x1,y1),(x2,y2))=|x2−x1|+|y2−y1|{\displaystyle d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|}and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article. Themaximum,L∞{\displaystyle L^{\infty }}, orChebyshev distanceis defined byd∞((x1,y1),(x2,y2))=max{|x2−x1|,|y2−y1|}.{\displaystyle d_{\infty }((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.}This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves akingwould have to make on achessboardto travel from one point to another on the given space. In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formulad∞(p,q)≤d2(p,q)≤d1(p,q)≤2d∞(p,q),{\displaystyle d_{\infty }(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty }(p,q),}which holds for every pair of pointsp,q∈R2{\displaystyle p,q\in \mathbb {R} ^{2}}. 
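The three plane metrics above, and the chain of inequalities relating them, can be sketched directly (my own illustration, not from the text):

```python
import math

def d1(p, q):    # taxicab / Manhattan distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # maximum / Chebyshev distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (1.0, 2.0), (4.0, -2.0)
# d_inf <= d_2 <= d_1 <= 2 * d_inf holds for every pair of points
assert dinf(p, q) <= d2(p, q) <= d1(p, q) <= 2 * dinf(p, q)
print(d1(p, q), d2(p, q), dinf(p, q))  # 7.0 5.0 4.0 for this pair
```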
A radically different distance can be defined by settingd(p,q)={0,ifp=q,1,otherwise.{\displaystyle d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}}UsingIverson brackets,d(p,q)=[p≠q]{\displaystyle d(p,q)=[p\neq q]}In thisdiscrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points. All of these metrics make sense onRn{\displaystyle \mathbb {R} ^{n}}as well asR2{\displaystyle \mathbb {R} ^{2}}. Given a metric space(M,d)and asubsetA⊆M{\displaystyle A\subseteq M}, we can considerAto be a metric space by measuring distances the same way we would inM. Formally, theinduced metriconAis a functiondA:A×A→R{\displaystyle d_{A}:A\times A\to \mathbb {R} }defined bydA(x,y)=d(x,y).{\displaystyle d_{A}(x,y)=d(x,y).}For example, if we take the two-dimensional sphereS2as a subset ofR3{\displaystyle \mathbb {R} ^{3}}, the Euclidean metric onR3{\displaystyle \mathbb {R} ^{3}}induces the straight-line metric onS2described above. Two more useful examples are the open interval(0, 1)and the closed interval[0, 1]thought of as subspaces of the real line. Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. Hisdistancewas given by logarithm of across ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models forelliptic geometryandhyperbolic geometry, andFelix Klein, in several publications, established the field ofnon-euclidean geometrythrough the use of theCayley-Klein metric. 
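The discrete metric via the Iverson bracket, and the metric induced on a subset, can both be sketched in a few lines (my own illustration; the helper name `induce` is hypothetical):

```python
def d_discrete(p, q):
    # Iverson bracket [p != q]: 0 if the points coincide, 1 otherwise
    return int(p != q)

def induce(d, A):
    """The metric induced on a subset A is just d itself, restricted to A."""
    def dA(x, y):
        assert x in A and y in A
        return d(x, y)
    return dA

assert d_discrete((0, 0), (0, 0)) == 0
assert d_discrete((0, 0), (100, 100)) == 1   # all distinct points are 1 apart

A = {0.0, 0.5, 1.0}
dA = induce(lambda x, y: abs(x - y), A)
assert dA(0.0, 1.0) == 1.0
```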
The idea of an abstract space with metric properties was addressed in 1906 byRené Maurice Fréchet[7]and the termmetric spacewas coined byFelix Hausdorffin 1914.[8][9][10] Fréchet's work laid the foundation for understandingconvergence,continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff andStefan Banachfurther refined and expanded the framework of metric spaces. Hausdorff introducedtopological spacesas a generalization of metric spaces. Banach's work infunctional analysisheavily relied on the metric structure. Over time, metric spaces became a central part ofmodern mathematics. They have influenced various fields includingtopology,geometry, andapplied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts. A distance function is enough to define notions of closeness and convergence that were first developed inreal analysis. Properties that depend on the structure of a metric space are referred to asmetric properties. Every metric space is also atopological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are reallytopological properties. For any pointxin a metric spaceMand any real numberr> 0, theopen ballof radiusraroundxis defined to be the set of points that are strictly less than distancerfromx:Br(x)={y∈M:d(x,y)<r}.{\displaystyle B_{r}(x)=\{y\in M:d(x,y)<r\}.}This is a natural way to define a set of points that are relatively close tox. Therefore, a setN⊆M{\displaystyle N\subseteq M}is aneighborhoodofx(informally, it contains all points "close enough" tox) if it contains an open ball of radiusraroundxfor somer> 0. Anopen setis a set which is a neighborhood of all its points. It follows that the open balls form abasefor a topology onM. 
In other words, the open sets ofMare exactly the unions of open balls. As in any topology,closed setsare the complements of open sets. Sets may be both open and closed as well as neither open nor closed. This topology does not carry all the information about the metric space. For example, the distancesd1,d2, andd∞defined above all induce the same topology onR2{\displaystyle \mathbb {R} ^{2}}, although they behave differently in many respects. Similarly,R{\displaystyle \mathbb {R} }with the Euclidean metric and its subspace the interval(0, 1)with the induced metric arehomeomorphicbut have very different metric properties. Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are calledmetrizableand are particularly well-behaved in many ways: in particular, they areparacompact[11]Hausdorff spaces(hencenormal) andfirst-countable.[a]TheNagata–Smirnov metrization theoremgives a characterization of metrizability in terms of other topological properties, without reference to metrics. Convergence of sequencesin Euclidean space is defined as follows: Convergence of sequences in a topological space is defined as follows: In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern fortopological propertiesof metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis. Informally, a metric space iscompleteif it has no "missing points": every sequence that looks like it should converge to something actually converges. To make this precise: a sequence(xn)in a metric spaceMisCauchyif for everyε > 0there is an integerNsuch that for allm,n>N,d(xm,xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: ifxmandxnare both less thanεaway from the limit, then they are less than2εaway from each other. 
If the converse is true—every Cauchy sequence inMconverges—thenMis complete. Euclidean spaces are complete, as isR2{\displaystyle \mathbb {R} ^{2}}with the other metrics described above. Two examples of spaces which are not complete are(0, 1)and the rationals, each with the metric induced fromR{\displaystyle \mathbb {R} }. One can think of(0, 1)as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it inR{\displaystyle \mathbb {R} }(for example, its successive decimal approximations). These examples show that completeness isnota topological property, sinceR{\displaystyle \mathbb {R} }is complete but the homeomorphic space(0, 1)is not. This notion of "missing points" can be made precise. In fact, every metric space has a uniquecompletion, which is a complete space that contains the given space as adensesubset. For example,[0, 1]is the completion of(0, 1), and the real numbers are the completion of the rationals. Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, thep-adic numbersare defined as the completion of the rationals under a different metric. Completion is particularly common as a tool infunctional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example,weak solutionstodifferential equationstypically live in a completion (aSobolev space) rather than the original space of nice functions for which the differential equation actually makes sense. A metric spaceMisboundedif there is anrsuch that no pair of points inMis more than distancerapart.[b]The least suchris called thediameterofM. 
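The incompleteness of the rationals can be made concrete: the successive decimal truncations of √2 form a Cauchy sequence of rationals with no rational limit. A sketch (my own illustration) using exact rational arithmetic:

```python
from fractions import Fraction

# Successive decimal truncations of sqrt(2): a sequence of rationals that is
# Cauchy in Q (with d(x, y) = |x - y|) but has no limit in Q.
approx = [Fraction(int(2 ** 0.5 * 10 ** k), 10 ** k) for k in range(1, 12)]

# Cauchy: consecutive terms get arbitrarily close together.
for n in range(len(approx) - 1):
    assert abs(approx[n + 1] - approx[n]) <= Fraction(1, 10 ** n)

# The would-be limit squares to 2, but no rational does, so Q is incomplete.
print(float(approx[-1]) ** 2)
```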
The spaceMis calledprecompactortotally boundedif for everyr> 0there is a finitecoverofMby open balls of radiusr. Every totally bounded space is bounded. To see this, start with a finite cover byr-balls for some arbitraryr. Since the subset ofMconsisting of the centers of these balls is finite, it has finite diameter, sayD. By the triangle inequality, the diameter of the whole space is at mostD+ 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded isR2{\displaystyle \mathbb {R} ^{2}}(or any other infinite set) with the discrete metric. Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces: One example of a compact space is the closed interval[0, 1]. Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool isLebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover. Unlike in the case of topological spaces or algebraic structures such asgroupsorrings, there is no single "right" type ofstructure-preserving functionbetween metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}are two metric spaces. The words "function" and "map" are used interchangeably. One interpretation of a "structure-preserving" map is one that fully preserves the distance function: It follows from the metric space axioms that a distance-preserving function is injective. 
A bijective distance-preserving function is called anisometry.[13]One perhaps non-obvious example of an isometry between spaces described in this article is the mapf:(R2,d1)→(R2,d∞){\displaystyle f:(\mathbb {R} ^{2},d_{1})\to (\mathbb {R} ^{2},d_{\infty })}defined byf(x,y)=(x+y,x−y).{\displaystyle f(x,y)=(x+y,x-y).} If there is an isometry between the spacesM1andM2, they are said to beisometric. Metric spaces that are isometric areessentially identical. On the other end of the spectrum, one can forget entirely about the metric structure and studycontinuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are: Ahomeomorphismis a continuous bijection whose inverse is also continuous; if there is a homeomorphism betweenM1andM2, they are said to behomeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example,R{\displaystyle \mathbb {R} }is unbounded and complete, while(0, 1)is bounded but not complete. A functionf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isuniformly continuousif for every real numberε > 0there existsδ > 0such that for all pointsxandyinM1such thatd(x,y)<δ{\displaystyle d(x,y)<\delta }, we haved2(f(x),f(y))<ε.{\displaystyle d_{2}(f(x),f(y))<\varepsilon .} The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the pointx. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences inM1to Cauchy sequences inM2. In other words, uniform continuity preserves some metric properties which are not purely topological. On the other hand, theHeine–Cantor theoremstates that ifM1is compact, then every continuous map is uniformly continuous. 
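The map f(x, y) = (x + y, x − y) mentioned above can be checked numerically to be distance-preserving from (ℝ², d₁) to (ℝ², d∞); this rests on the identity max(|a + b|, |a − b|) = |a| + |b|. A sketch (my own illustration):

```python
import random

def d1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dinf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def f(p):
    # Candidate isometry (R^2, d1) -> (R^2, d_inf) from the text
    x, y = p
    return (x + y, x - y)

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-10, 10), random.uniform(-10, 10))
    q = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert abs(dinf(f(p), f(q)) - d1(p, q)) < 1e-9
print("distance preserved on all sampled pairs")
```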
In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces. ALipschitz mapis one that stretches distances by at most a bounded factor. Formally, given a real numberK> 0, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isK-Lipschitzifd2(f(x),f(y))≤Kd1(x,y)for allx,y∈M1.{\displaystyle d_{2}(f(x),f(y))\leq Kd_{1}(x,y)\quad {\text{for all}}\quad x,y\in M_{1}.}Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric.[14]For example, a curve in a metric space isrectifiable(has finite length) if and only if it has a Lipschitz reparametrization. A 1-Lipschitz map is sometimes called anonexpandingormetric map. Metric maps are commonly taken to be the morphisms of thecategory of metric spaces. AK-Lipschitz map forK< 1is called acontraction. TheBanach fixed-point theoremstates that ifMis a complete metric space, then every contractionf:M→M{\displaystyle f:M\to M}admits a uniquefixed point. If the metric spaceMis compact, the result holds for a slightly weaker condition onf: a mapf:M→M{\displaystyle f:M\to M}admits a unique fixed point ifd(f(x),f(y))<d(x,y)for allx≠y∈M1.{\displaystyle d(f(x),f(y))<d(x,y)\quad {\mbox{for all}}\quad x\neq y\in M_{1}.} Aquasi-isometryis a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example,R2{\displaystyle \mathbb {R} ^{2}}and its subspaceZ2{\displaystyle \mathbb {Z} ^{2}}are quasi-isometric, even though one is connected and the other is discrete. 
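The Banach fixed-point theorem above suggests an algorithm: iterate the contraction until the iterates stop moving. A sketch (my own illustration; the helper name `banach_iterate` is hypothetical), using cos on [0, 1], where |cos′(x)| = |sin(x)| ≤ sin(1) < 1, so cos is a contraction there:

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration; converges when f is a contraction on a complete space."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# The unique fixed point of cos (the Dottie number, about 0.739085):
fp = banach_iterate(math.cos, 0.5)
print(fp)
```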
The equivalence relation of quasi-isometry is important in geometric group theory: the Švarc–Milnor lemma states that all spaces on which a group acts geometrically are quasi-isometric.[15]

Formally, the map f : M₁ → M₂ is a quasi-isometric embedding if there exist constants A ≥ 1 and B ≥ 0 such that (1/A) d₂(f(x), f(y)) − B ≤ d₁(x, y) ≤ A d₂(f(x), f(y)) + B for all x, y ∈ M₁. It is a quasi-isometry if in addition it is quasi-surjective, i.e. there is a constant C ≥ 0 such that every point in M₂ is at distance at most C from some point in the image f(M₁).

Given two metric spaces (M₁, d₁) and (M₂, d₂):

A normed vector space is a vector space equipped with a norm, which is a function that measures the length of vectors. The norm of a vector v is typically denoted by ‖v‖. Any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by d(x, y) := ‖x − y‖. The metric d is said to be induced by the norm ‖·‖. Conversely,[16] if a metric d on a vector space X is translation invariant (d(x + a, y + a) = d(x, y)) and absolutely homogeneous (d(λx, λy) = |λ| d(x, y)), then it is the metric induced by the norm ‖x‖ := d(x, 0). A similar relationship holds between seminorms and pseudometrics.

Among examples of metrics induced by a norm are the metrics d₁, d₂, and d∞ on ℝ², which are induced by the Manhattan norm, the Euclidean norm, and the maximum norm, respectively. More generally, the Kuratowski embedding allows one to see any metric space as a subspace of a normed vector space.

Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied in functional analysis.
Completeness is particularly important in this context: a complete normed vector space is known as aBanach space. An unusual property of normed vector spaces is thatlinear transformationsbetween them are continuous if and only if they are Lipschitz. Such transformations are known asbounded operators. Acurvein a metric space(M,d)is a continuous functionγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}. Thelengthofγis measured byL(γ)=sup0=x0<x1<⋯<xn=T{∑k=1nd(γ(xk−1),γ(xk))}.{\displaystyle L(\gamma )=\sup _{0=x_{0}<x_{1}<\cdots <x_{n}=T}\left\{\sum _{k=1}^{n}d(\gamma (x_{k-1}),\gamma (x_{k}))\right\}.}In general, this supremum may be infinite; a curve of finite length is calledrectifiable.[17]Suppose that the length of the curveγis equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length,γbecomes ageodesic: a curve which is a distance-preserving function.[15]A geodesic is a shortest possible path between any two of its points.[c] Ageodesic metric spaceis a metric space which admits a geodesic between any two of its points. The spaces(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}and(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}are both geodesic metric spaces. In(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}, geodesics are unique, but in(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article. The spaceMis alength space(or the metricdisintrinsic) if the distance between any two pointsxandyis the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points(1, 0)and(-1, 0)can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. 
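The length of a curve, defined above as a supremum of polygonal sums over partitions, can be approximated numerically. A sketch (my own illustration) for the unit semicircle, whose length is π:

```python
import math

def polygonal_length(gamma, T, n):
    """Sum of d(gamma(t_{k-1}), gamma(t_k)) over a uniform partition of [0, T]."""
    pts = [gamma(T * k / n) for k in range(n + 1)]
    return sum(math.dist(pts[k - 1], pts[k]) for k in range(1, n + 1))

# Unit semicircle traced by gamma(t) = (cos t, sin t) on [0, pi]; length pi.
gamma = lambda t: (math.cos(t), math.sin(t))
for n in (4, 16, 256):
    print(n, polygonal_length(gamma, math.pi, n))
```

The polygonal sums increase toward the supremum π as the partition is refined, as the definition predicts.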
An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface. Given any metric space(M,d), one can define a new, intrinsic distance functiondintrinsiconMby setting the distance between pointsxandyto be the infimum of thed-lengths of paths between them. For instance, ifdis the straight-line distance on the sphere, thendintrinsicis the great-circle distance. However, in some casesdintrinsicmay have infinite values. For example, ifMis theKoch snowflakewith the subspace metricdinduced fromR2{\displaystyle \mathbb {R} ^{2}}, then the resulting intrinsic distance is infinite for any pair of distinct points. ARiemannian manifoldis a space equipped with a Riemannianmetric tensor, which determines lengths oftangent vectorsat every point. This can be thought of defining a notion of distance infinitesimally. In particular, a differentiable pathγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}in a Riemannian manifoldMhas length defined as the integral of the length of the tangent vector to the path:L(γ)=∫0T|γ˙(t)|dt.{\displaystyle L(\gamma )=\int _{0}^{T}|{\dot {\gamma }}(t)|dt.}On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such assub-RiemannianandFinsler metrics. The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. 
For example, a Riemannian manifold is aCAT(k)space(a synthetic condition which depends purely on the metric) if and only if itssectional curvatureis bounded above byk.[20]ThusCAT(k)spaces generalize upper curvature bounds to general metric spaces. Real analysis makes use of both the metric onRn{\displaystyle \mathbb {R} ^{n}}and theLebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside inmetric measure spaces: spaces that have both ameasureand a metric which are compatible with each other. Formally, ametric measure spaceis a metric space equipped with aBorel regular measuresuch that every ball has positive measure.[21]For example Euclidean spaces of dimensionn, and more generallyn-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with theLebesgue measure. Certainfractalmetric spaces such as theSierpiński gasketcan be equipped with the α-dimensionalHausdorff measurewhere α is theHausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure. One application of metric measure spaces is generalizing the notion ofRicci curvaturebeyond Riemannian manifolds. Just asCAT(k)andAlexandrov spacesgeneralize sectional curvature bounds,RCD spacesare a class of metric measure spaces which generalize lower bounds on Ricci curvature.[22] Ametric space isdiscreteif its induced topology is thediscrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular,finite metric spaces(those having afinitenumber of points) are studied incombinatoricsandtheoretical computer science.[23]Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can beisometrically embeddedin a Euclidean space or inHilbert space. 
On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.[24][25]

For any undirected connected graph G, the set V of vertices of G can be turned into a metric space by defining the distance between vertices x and y to be the length of the shortest edge path connecting them. This is also called shortest-path distance or geodesic distance. In geometric group theory this construction is applied to the Cayley graph of a (typically infinite) finitely-generated group, yielding the word metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.[15]

An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics. A significant result in this area is that any finite metric space can be probabilistically embedded into a tree metric with an expected distortion of O(log n), where n is the number of points in the metric space.[26] This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound of Ω(log n).

The tree metrics produced in this embedding dominate the original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure. The result has significant implications for various computational problems:

The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm.
The O(log n) distortion bound has led to improved approximation ratios in several algorithmic problems, demonstrating the practical significance of this theoretical result.

In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:

The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves. Hausdorff and Gromov–Hausdorff distance define metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively.

Suppose (M, d) is a metric space, and let S be a subset of M. The distance from S to a point x of M is, informally, the distance from x to the closest point of S. However, since there may not be a single closest point, it is defined via an infimum: d(x, S) = inf{d(x, s) : s ∈ S}. In particular, d(x, S) = 0 if and only if x belongs to the closure of S. Furthermore, distances between points and sets satisfy a version of the triangle inequality: d(x, S) ≤ d(x, y) + d(y, S), and therefore the map d_S : M → ℝ defined by d_S(x) = d(x, S) is continuous. Incidentally, this shows that metric spaces are completely regular.

Given two subsets S and T of M, their Hausdorff distance is d_H(S, T) = max{sup{d(s, T) : s ∈ S}, sup{d(t, S) : t ∈ T}}. Informally, two sets S and T are close to each other in the Hausdorff distance if no element of S is too far from T and vice versa. For example, if S is an open set in Euclidean space and T is an ε-net inside S, then d_H(S, T) < ε. In general, the Hausdorff distance d_H(S, T) can be infinite or zero.
However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets ofM. The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. TheGromov–Hausdorff distancebetween compact spacesXandYis the infimum of the Hausdorff distance over all metric spacesZthat containXandYas subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications. If(M1,d1),…,(Mn,dn){\displaystyle (M_{1},d_{1}),\ldots ,(M_{n},d_{n})}are metric spaces, andNis theEuclidean normonRn{\displaystyle \mathbb {R} ^{n}}, then(M1×⋯×Mn,d×){\displaystyle {\bigl (}M_{1}\times \cdots \times M_{n},d_{\times }{\bigr )}}is a metric space, where theproduct metricis defined byd×((x1,…,xn),(y1,…,yn))=N(d1(x1,y1),…,dn(xn,yn)),{\displaystyle d_{\times }{\bigl (}(x_{1},\ldots ,x_{n}),(y_{1},\ldots ,y_{n}){\bigr )}=N{\bigl (}d_{1}(x_{1},y_{1}),\ldots ,d_{n}(x_{n},y_{n}){\bigr )},}and the induced topology agrees with theproduct topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained ifNis thetaxicab norm, ap-norm, themaximum norm, or any other norm which is non-decreasing as the coordinates of a positiven-tuple increase (yielding the triangle inequality). Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metricd(x,y)=∑i=1∞12idi(xi,yi)1+di(xi,yi).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.} The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies ofR{\displaystyle \mathbb {R} }is notfirst-countableand thus is not metrizable. 
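For finite point sets, the Hausdorff distance defined earlier can be computed directly by brute force over all pairs. A sketch (my own example), using the taxicab metric on the plane:

```python
def hausdorff(S, T, d):
    """Hausdorff distance between two finite point sets under metric d."""
    def sup_inf(A, B):
        # sup over a in A of the distance from a to the set B
        return max(min(d(a, b) for b in B) for a in A)
    return max(sup_inf(S, T), sup_inf(T, S))

d = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])  # taxicab metric
S = [(0, 0), (1, 0)]
T = [(0, 1), (1, 1), (5, 0)]
# Every point of S is within 1 of T, but (5, 0) in T is 4 away from S:
print(hausdorff(S, T, d))  # 4
```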
IfMis a metric space with metricd, and∼{\displaystyle \sim }is anequivalence relationonM, then we can endow the quotient setM/∼{\displaystyle M/\!\sim }with a pseudometric. The distance between two equivalence classes[x]{\displaystyle [x]}and[y]{\displaystyle [y]}is defined asd′([x],[y])=inf{d(p1,q1)+d(p2,q2)+⋯+d(pn,qn)},{\displaystyle d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb +d(p_{n},q_{n})\},}where theinfimumis taken over all finite sequences(p1,p2,…,pn){\displaystyle (p_{1},p_{2},\dots ,p_{n})}and(q1,q2,…,qn){\displaystyle (q_{1},q_{2},\dots ,q_{n})}withp1∼x{\displaystyle p_{1}\sim x},qn∼y{\displaystyle q_{n}\sim y},qi∼pi+1,i=1,2,…,n−1{\displaystyle q_{i}\sim p_{i+1},i=1,2,\dots ,n-1}.[30]In general this will only define apseudometric, i.e.d′([x],[y])=0{\displaystyle d'([x],[y])=0}does not necessarily imply that[x]=[y]{\displaystyle [x]=[y]}. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces),d′{\displaystyle d'}is a metric. The quotient metricd′{\displaystyle d'}is characterized by the followinguniversal property. Iff:(M,d)→(X,δ){\displaystyle f\,\colon (M,d)\to (X,\delta )}is a metric (i.e. 1-Lipschitz) map between metric spaces satisfyingf(x) =f(y)wheneverx∼y{\displaystyle x\sim y}, then the induced functionf¯:M/∼→X{\displaystyle {\overline {f}}\,\colon {M/\sim }\to X}, given byf¯([x])=f(x){\displaystyle {\overline {f}}([x])=f(x)}, is a metric mapf¯:(M/∼,d′)→(X,δ).{\displaystyle {\overline {f}}\,\colon (M/\sim ,d')\to (X,\delta ).} The quotient metric does not always induce thequotient topology. For example, the topological quotient of the metric spaceN×[0,1]{\displaystyle \mathbb {N} \times [0,1]}identifying all points of the form(n,0){\displaystyle (n,0)}is not metrizable since it is notfirst-countable, but the quotient metric is a well-defined metric on the same set which induces acoarser topology. 
Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.[31] A topological space issequentialif and only if it is a (topological) quotient of a metric space.[32] There are several notions of spaces which have less structure than a metric space, but more than a topological space. There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, infunctional analysispseudometrics often come fromseminormson vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term intopology. Some authors define metrics so as to allow the distance functiondto attain the value ∞, i.e. distances are non-negative numbers on theextended real number line.[4]Such a function is also called anextended metricor "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using asubadditivemonotonically increasing bounded function which is zero at zero, e.g.d′(x,y)=d(x,y)/(1+d(x,y)){\displaystyle d'(x,y)=d(x,y)/(1+d(x,y))}ord″(x,y)=min(1,d(x,y)){\displaystyle d''(x,y)=\min(1,d(x,y))}. The requirement that the metric take values in[0,∞){\displaystyle [0,\infty )}can be relaxed to consider metrics with values in other structures, including: These generalizations still induce auniform structureon the space. 
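The remetrization trick above can be illustrated directly; the sample points and tolerances below are arbitrary choices, with a very large value standing in for an "extended" distance:

```python
def bounded1(d):
    # d / (1 + d): built from a subadditive, increasing, bounded map vanishing at 0
    return lambda x, y: d(x, y) / (1 + d(x, y))

def bounded2(d):
    # min(1, d): the other standard choice from the text
    return lambda x, y: min(1.0, d(x, y))

dist = lambda x, y: abs(x - y)       # the usual metric on R
b1, b2 = bounded1(dist), bounded2(dist)

pts = [-3.0, 0.0, 0.7, 5.0, 1e9]     # 1e9 stands in for an enormous distance
for x in pts:
    for y in pts:
        for z in pts:
            for dd in (b1, b2):
                assert dd(x, z) <= dd(x, y) + dd(y, z) + 1e-12  # still a metric
                assert dd(x, y) <= 1.0                          # now bounded by 1
```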
ApseudometriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) onlyd(x,x)=0{\displaystyle d(x,x)=0}for allx{\displaystyle x}is required.[34]In other words, the axioms for a pseudometric are: In some contexts, pseudometrics are referred to assemimetrics[35]because of their relation toseminorms. Occasionally, aquasimetricis defined as a function that satisfies all axioms for a metric with the possible exception of symmetry.[36]The name of this generalisation is not entirely standardized.[37] Quasimetrics are common in real life. For example, given a setXof mountain villages, the typical walking times between elements ofXform a quasimetric because travel uphill takes longer than travel downhill. Another example is thelength of car ridesin a city with one-way streets: here, a shortest path from pointAto pointBgoes along a different set of streets than a shortest path fromBtoAand may have a different length. A quasimetric on the reals can be defined by settingd(x,y)={x−yifx≥y,1otherwise.{\displaystyle d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}}The 1 may be replaced, for example, by infinity or by1+y−x{\displaystyle 1+{\sqrt {y-x}}}or any othersubadditivefunction ofy-x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size byfiling it down, but it is difficult or impossible to grow it. Given a quasimetric onX, one can define anR-ball aroundxto be the set{y∈X|d(x,y)≤R}{\displaystyle \{y\in X|d(x,y)\leq R\}}. As in the case of a metric, such balls form a basis for a topology onX, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed)Sorgenfrey line. 
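A short sketch of the quasimetric on the reals described above (the sample points are arbitrary); it shows the asymmetry and spot-checks that the triangle inequality nevertheless survives:

```python
# Quasimetric on R from the text: moving down costs the distance travelled,
# moving up costs a flat 1 (the "metal stick" example).
def d(x, y):
    return x - y if x >= y else 1.0

assert d(5, 2) == 3        # shrinking the stick by 3 costs 3
assert d(2, 5) == 1        # growing it at all costs 1
assert d(3, 3) == 0
assert d(5, 2) != d(2, 5)  # symmetry fails, so this is not a metric

# The triangle inequality d(x, z) <= d(x, y) + d(y, z) still holds:
pts = [-2.0, 0.0, 1.5, 3.0, 10.0]
for x in pts:
    for y in pts:
        for z in pts:
            assert d(x, z) <= d(x, y) + d(y, z) + 1e-12
```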
In ametametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are: Metametrics appear in the study ofGromov hyperbolic metric spacesand their boundaries. Thevisual metametricon such a space satisfiesd(x,x)=0{\displaystyle d(x,x)=0}for pointsx{\displaystyle x}on the boundary, but otherwised(x,x){\displaystyle d(x,x)}is approximately the distance fromx{\displaystyle x}to the boundary. Metametrics were first defined by Jussi Väisälä.[38]In other work, a function satisfying these axioms is called apartial metric[39][40]or adislocated metric.[34] AsemimetriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }that satisfies the first three axioms, but not necessarily the triangle inequality: Some authors work with a weaker form of the triangle inequality, such as: The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to asquasimetrics,[41]nearmetrics[42]orinframetrics.[43] The ρ-inframetric inequalities were introduced to modelround-trip delay timesin theinternet.[43]The triangle inequality implies the 2-inframetric inequality, and theultrametric inequalityis exactly the 1-inframetric inequality. Relaxing the last three axioms leads to the notion of apremetric, i.e. a function satisfying the following conditions: This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics[44]or pseudometrics;[45]in translations of Russian books it sometimes appears as "prametric".[46]A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.[47] Any premetric gives rise to a topology as follows. 
For a positive realr{\displaystyle r}, ther{\displaystyle r}-ballcentered at a pointp{\displaystyle p}is defined as A set is calledopenif for any pointp{\displaystyle p}in the set there is anr{\displaystyle r}-ballcentered atp{\displaystyle p}which is contained in the set. Every premetric space is a topological space, and in fact asequential space. In general, ther{\displaystyle r}-ballsthemselves need not be open sets with respect to this topology. As for metrics, the distance between two setsA{\displaystyle A}andB{\displaystyle B}, is defined as This defines a premetric on thepower setof a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric. Any premetric gives rise to apreclosure operatorcl{\displaystyle cl}as follows: The prefixespseudo-,quasi-andsemi-can also be combined, e.g., apseudoquasimetric(sometimes calledhemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the openr{\displaystyle r}-ballsform a basis of open sets. A very basic example of a pseudoquasimetric space is the set{0,1}{\displaystyle \{0,1\}}with the premetric given byd(0,1)=1{\displaystyle d(0,1)=1}andd(1,0)=0.{\displaystyle d(1,0)=0.}The associated topological space is theSierpiński space. Sets equipped with an extended pseudoquasimetric were studied byWilliam Lawvereas "generalized metric spaces".[48]From acategoricalpoint of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of themetric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients. Lawvere also gave an alternate definition of such spaces asenriched categories. 
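The two-point Sierpiński example above can be verified mechanically. The helper names `ball` and `is_open` below are illustrative; testing the radii 0.5 and 1.5 suffices because this pseudoquasimetric only takes the values 0 and 1:

```python
# The two-point pseudoquasimetric from the text: d(0,1) = 1 but d(1,0) = 0.
d = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 0}

def ball(p, r):
    """Open r-ball {y : d(p, y) < r} around p."""
    return {y for y in (0, 1) if d[(p, y)] < r}

def is_open(S):
    """S is open iff every point of S has some r-ball inside S; since the
    only distances are 0 and 1, the radii 0.5 and 1.5 cover all cases."""
    return all(any(ball(p, r) <= S for r in (0.5, 1.5)) for p in S)

subsets = [set(), {0}, {1}, {0, 1}]
opens = [S for S in subsets if is_open(S)]
assert set() in opens and {0} in opens and {0, 1} in opens
assert {1} not in opens    # exactly the open sets of the Sierpinski space
```

Any ball around the point 1 contains 0 (since d(1, 0) = 0), which is why {1} fails to be open.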
The ordered set(R,≥){\displaystyle (\mathbb {R} ,\geq )}can be seen as acategorywith onemorphisma→b{\displaystyle a\to b}ifa≥b{\displaystyle a\geq b}and none otherwise. Using+as thetensor productand 0 as theidentitymakes this category into amonoidal categoryR∗{\displaystyle R^{*}}. Every (extended pseudoquasi-)metric space(M,d){\displaystyle (M,d)}can now be viewed as a categoryM∗{\displaystyle M^{*}}enriched overR∗{\displaystyle R^{*}}: The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. Amultisetis a generalization of the notion of asetin which an element can occur more than once. Define the multiset unionU=XY{\displaystyle U=XY}as follows: if an elementxoccursmtimes inXandntimes inYthen it occursm+ntimes inU. A functiondon the set of nonempty finite multisets of elements of a setMis a metric[49]if By considering the cases of axioms 1 and 2 in which the multisetXhas two elements and the case of axiom 3 in which the multisetsX,Y, andZhave one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements. A simple example is the set of all nonempty finite multisetsX{\displaystyle X}of integers withd(X)=max(X)−min(X){\displaystyle d(X)=\max(X)-\min(X)}. More complex examples areinformation distancein multisets;[49]andnormalized compression distance(NCD) in multisets.[50]
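The example d(X) = max(X) − min(X) can be sketched as follows. The random search over a triangle-style axiom d(XY) ≤ d(XZ) + d(ZY), with multiset union realized as list concatenation, is an illustrative spot check rather than a proof:

```python
import random

def d(ms):
    """The multiset example from the text: d(X) = max(X) - min(X)."""
    return max(ms) - min(ms)

# Restricted to two-element multisets this is the usual metric on Z:
assert d([3, 7]) == abs(3 - 7)
# ... and d vanishes exactly on multisets whose elements are all equal:
assert d([5, 5, 5]) == 0

# Spot check of a triangle-style axiom under multiset union (concatenation):
# d(XY) <= d(XZ) + d(ZY).
random.seed(0)
for _ in range(1000):
    X = [random.randint(-50, 50) for _ in range(random.randint(1, 4))]
    Y = [random.randint(-50, 50) for _ in range(random.randint(1, 4))]
    Z = [random.randint(-50, 50) for _ in range(random.randint(1, 4))]
    assert d(X + Y) <= d(X + Z) + d(Z + Y)
```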
https://en.wikipedia.org/wiki/Relation_of_norms_and_metrics
Inmathematics, particularly infunctional analysis, aseminormis like anormbut need not bepositive definite. Seminorms are intimately connected withconvex sets: every seminorm is theMinkowski functionalof someabsorbingdiskand, conversely, the Minkowski functional of any such set is a seminorm. Atopological vector spaceis locally convex if and only if its topology is induced by a family of seminorms. LetX{\displaystyle X}be a vector space over either thereal numbersR{\displaystyle \mathbb {R} }or thecomplexnumbersC.{\displaystyle \mathbb {C} .}Areal-valued functionp:X→R{\displaystyle p:X\to \mathbb {R} }is called aseminormif it satisfies the following two conditions: These two conditions imply thatp(0)=0{\displaystyle p(0)=0}[proof 1]and that every seminormp{\displaystyle p}also has the following property:[proof 2] Some authors include non-negativity as part of the definition of "seminorm" (and also sometimes of "norm"), although this is not necessary since it follows from the other two properties. By definition, anormonX{\displaystyle X}is a seminorm that also separates points, meaning that it has the following additional property: Aseminormed spaceis a pair(X,p){\displaystyle (X,p)}consisting of a vector spaceX{\displaystyle X}and a seminormp{\displaystyle p}onX.{\displaystyle X.}If the seminormp{\displaystyle p}is also a norm then the seminormed space(X,p){\displaystyle (X,p)}is called anormed space. Since absolute homogeneity implies positive homogeneity, every seminorm is a type of function called asublinear function. A mapp:X→R{\displaystyle p:X\to \mathbb {R} }is called asublinear functionif it is subadditive andpositive homogeneous. Unlike a seminorm, a sublinear function isnotnecessarily nonnegative. Sublinear functions are often encountered in the context of theHahn–Banach theorem. A real-valued functionp:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm if and only if it is asublinearandbalanced function. 
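As an illustrative sketch, the map p(x1, x2) = |x1| on R² satisfies both defining conditions yet vanishes on a nonzero vector, so it is a seminorm that is not a norm; the random spot checks below are not a proof:

```python
import random

def p(v):
    """p(x1, x2) = |x1| on R^2: a seminorm that is not a norm, since it
    vanishes on the whole x2-axis."""
    return abs(v[0])

random.seed(1)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    s = random.uniform(-3, 3)
    # subadditivity: p(x + y) <= p(x) + p(y)
    assert p((x[0] + y[0], x[1] + y[1])) <= p(x) + p(y) + 1e-12
    # absolute homogeneity: p(s x) = |s| p(x)
    assert abs(p((s * x[0], s * x[1])) - abs(s) * p(x)) < 1e-9

assert p((0.0, 42.0)) == 0.0   # a nonzero vector with p = 0, so not a norm
```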
Seminorms on a vector spaceX{\displaystyle X}are intimately tied, via Minkowski functionals, to subsets ofX{\displaystyle X}that areconvex,balanced, andabsorbing. Given such a subsetD{\displaystyle D}ofX,{\displaystyle X,}the Minkowski functional ofD{\displaystyle D}is a seminorm. Conversely, given a seminormp{\displaystyle p}onX,{\displaystyle X,}the sets{x∈X:p(x)<1}{\displaystyle \{x\in X:p(x)<1\}}and{x∈X:p(x)≤1}{\displaystyle \{x\in X:p(x)\leq 1\}}are convex, balanced, and absorbing and furthermore, the Minkowski functional of these two sets (as well as of any set lying "in between them") isp.{\displaystyle p.}[5] Every seminorm is asublinear function, and thus satisfies allproperties of a sublinear function, includingconvexity,p(0)=0,{\displaystyle p(0)=0,}and for all vectorsx,y∈X{\displaystyle x,y\in X}: thereverse triangle inequality:[2][6]|p(x)−p(y)|≤p(x−y){\displaystyle |p(x)-p(y)|\leq p(x-y)}and also0≤max{p(x),p(−x)}{\textstyle 0\leq \max\{p(x),p(-x)\}}andp(x)−p(y)≤p(x−y).{\displaystyle p(x)-p(y)\leq p(x-y).}[2][6] For any vectorx∈X{\displaystyle x\in X}and positive realr>0:{\displaystyle r>0:}[7]x+{y∈X:p(y)<r}={y∈X:p(x−y)<r}{\displaystyle x+\{y\in X:p(y)<r\}=\{y\in X:p(x-y)<r\}}and furthermore,{x∈X:p(x)<r}{\displaystyle \{x\in X:p(x)<r\}}is anabsorbingdiskinX.{\displaystyle X.}[3] Ifp{\displaystyle p}is a sublinear function on a real vector spaceX{\displaystyle X}then there exists a linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf≤p{\displaystyle f\leq p}[6]and furthermore, for any linear functionalg{\displaystyle g}onX,{\displaystyle X,}g≤p{\displaystyle g\leq p}onX{\displaystyle X}if and only ifg−1(1)∩{x∈X:p(x)<1}=∅.{\displaystyle g^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .}[6] Other properties of seminorms Every seminorm is abalanced function. A seminormp{\displaystyle p}is a norm onX{\displaystyle X}if and only if{x∈X:p(x)<1}{\displaystyle \{x\in X:p(x)<1\}}does not contain a non-trivial vector subspace. 
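The correspondence between a seminorm and the gauge of its open unit set can be checked numerically. Below, D = {p < 1} for the seminorm p(x1, x2) = |x1| is a convex, balanced, absorbing strip in R², and `minkowski` computes inf{t > 0 : x/t ∈ D} by bisection; this is an illustrative sketch, not a general-purpose implementation:

```python
def p(v):
    """Seminorm p(x1, x2) = |x1| on R^2 (an illustrative choice)."""
    return abs(v[0])

def in_D(v):
    """D = {x : p(x) < 1}: a convex, balanced, absorbing strip in R^2."""
    return p(v) < 1

def minkowski(v, lo=1e-9, hi=1e9, iters=200):
    """Gauge p_D(v) = inf{t > 0 : v/t in D}, located by bisection."""
    if in_D(tuple(c / lo for c in v)):
        return 0.0            # v/t stays in D for arbitrarily small t
    for _ in range(iters):
        mid = (lo + hi) / 2
        if in_D(tuple(c / mid for c in v)):
            hi = mid
        else:
            lo = mid
    return hi

for v in [(1.0, 0.0), (3.0, -4.0), (0.5, 7.0), (0.0, 9.0)]:
    assert abs(minkowski(v) - p(v)) < 1e-6   # the gauge of {p < 1} recovers p
```

Note that the recovery also works on the kernel of p (the x2-axis), where the gauge is 0.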
Ifp:X→[0,∞){\displaystyle p:X\to [0,\infty )}is a seminorm onX{\displaystyle X}thenker⁡p:=p−1(0){\displaystyle \ker p:=p^{-1}(0)}is a vector subspace ofX{\displaystyle X}and for everyx∈X,{\displaystyle x\in X,}p{\displaystyle p}is constant on the setx+ker⁡p={x+k:p(k)=0}{\displaystyle x+\ker p=\{x+k:p(k)=0\}}and equal top(x).{\displaystyle p(x).}[proof 3] Furthermore, for any realr>0,{\displaystyle r>0,}[3]r{x∈X:p(x)<1}={x∈X:p(x)<r}={x∈X:1rp(x)<1}.{\displaystyle r\{x\in X:p(x)<1\}=\{x\in X:p(x)<r\}=\left\{x\in X:{\tfrac {1}{r}}p(x)<1\right\}.} IfD{\displaystyle D}is a set satisfying{x∈X:p(x)<1}⊆D⊆{x∈X:p(x)≤1}{\displaystyle \{x\in X:p(x)<1\}\subseteq D\subseteq \{x\in X:p(x)\leq 1\}}thenD{\displaystyle D}isabsorbinginX{\displaystyle X}andp=pD{\displaystyle p=p_{D}}wherepD{\displaystyle p_{D}}denotes theMinkowski functionalassociated withD{\displaystyle D}(that is, the gauge ofD{\displaystyle D}).[5]In particular, ifD{\displaystyle D}is as above andq{\displaystyle q}is any seminorm onX,{\displaystyle X,}thenq=p{\displaystyle q=p}if and only if{x∈X:q(x)<1}⊆D⊆{x∈X:q(x)≤1}.{\displaystyle \{x\in X:q(x)<1\}\subseteq D\subseteq \{x\in X:q(x)\leq 1\}.}[5] If(X,‖⋅‖){\displaystyle (X,\|\,\cdot \,\|)}is a normed space andx,y∈X{\displaystyle x,y\in X}then‖x−y‖=‖x−z‖+‖z−y‖{\displaystyle \|x-y\|=\|x-z\|+\|z-y\|}for allz{\displaystyle z}in the interval[x,y].{\displaystyle [x,y].}[8] Every norm is aconvex functionand consequently, finding a global maximum of a norm-basedobjective functionis sometimes tractable. Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a non-negative function.
The following are equivalent: If any of the above conditions hold, then the following are equivalent: Ifp{\displaystyle p}is a sublinear function on a real vector spaceX{\displaystyle X}then the following are equivalent:[6] Ifp,q:X→[0,∞){\displaystyle p,q:X\to [0,\infty )}are seminorms onX{\displaystyle X}then: Ifp{\displaystyle p}is a seminorm onX{\displaystyle X}andf{\displaystyle f}is a linear functional onX{\displaystyle X}then: Seminorms offer a particularly clean formulation of theHahn–Banach theorem: A similar extension property also holds for seminorms: Theorem[16][12](Extending seminorms)—IfM{\displaystyle M}is a vector subspace ofX,{\displaystyle X,}p{\displaystyle p}is a seminorm onM,{\displaystyle M,}andq{\displaystyle q}is a seminorm onX{\displaystyle X}such thatp≤q|M,{\displaystyle p\leq q{\big \vert }_{M},}then there exists a seminormP{\displaystyle P}onX{\displaystyle X}such thatP|M=p{\displaystyle P{\big \vert }_{M}=p}andP≤q.{\displaystyle P\leq q.} A seminormp{\displaystyle p}onX{\displaystyle X}induces a topology, called theseminorm-induced topology, via the canonicaltranslation-invariantpseudometricdp:X×X→R{\displaystyle d_{p}:X\times X\to \mathbb {R} };dp(x,y):=p(x−y)=p(y−x).{\displaystyle d_{p}(x,y):=p(x-y)=p(y-x).}This topology isHausdorffif and only ifdp{\displaystyle d_{p}}is a metric, which occurs if and only ifp{\displaystyle p}is anorm.[4]This topology makesX{\displaystyle X}into alocally convexpseudometrizabletopological vector spacethat has aboundedneighborhood of the origin and aneighborhood basisat the origin consisting of the following open balls (or the closed balls) centered at the origin:{x∈X:p(x)<r}or{x∈X:p(x)≤r}{\displaystyle \{x\in X:p(x)<r\}\quad {\text{ or }}\quad \{x\in X:p(x)\leq r\}}asr>0{\displaystyle r>0}ranges over the positive reals. Every seminormed space(X,p){\displaystyle (X,p)}should be assumed to be endowed with this topology unless indicated otherwise. 
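A sketch of the induced pseudometric for the seminorm p(x1, x2) = |x1| on R² (an illustrative choice), including a pair of distinct points at distance zero, which is exactly why the topology fails to be Hausdorff when p is not a norm:

```python
def p(v):
    """Seminorm p(x1, x2) = |x1| on R^2 (an illustrative choice)."""
    return abs(v[0])

def d_p(x, y):
    """Canonical translation-invariant pseudometric d_p(x, y) = p(x - y)."""
    return p((x[0] - y[0], x[1] - y[1]))

x, y, z, t = (1.0, 2.0), (4.0, -1.0), (0.5, 0.5), (10.0, 20.0)
assert d_p(x, y) == d_p(y, x)                              # symmetry
assert d_p(x, z) <= d_p(x, y) + d_p(y, z)                  # triangle inequality
assert d_p((x[0] + t[0], x[1] + t[1]),
           (y[0] + t[0], y[1] + t[1])) == d_p(x, y)        # translation invariance

# Distinct points at distance zero: d_p is only a pseudometric, the induced
# topology is not Hausdorff, and correspondingly p is not a norm.
assert d_p((0.0, 0.0), (0.0, 5.0)) == 0.0
```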
A topological vector space whose topology is induced by some seminorm is calledseminormable. Equivalently, every vector spaceX{\displaystyle X}with seminormp{\displaystyle p}induces avector space quotientX/W,{\displaystyle X/W,}whereW{\displaystyle W}is the subspace ofX{\displaystyle X}consisting of all vectorsx∈X{\displaystyle x\in X}withp(x)=0.{\displaystyle p(x)=0.}ThenX/W{\displaystyle X/W}carries a norm defined byp(x+W)=p(x).{\displaystyle p(x+W)=p(x).}The resulting topology,pulled backtoX,{\displaystyle X,}is precisely the topology induced byp.{\displaystyle p.} Any seminorm-induced topology makesX{\displaystyle X}locally convex, as follows. Ifp{\displaystyle p}is a seminorm onX{\displaystyle X}andr∈R,{\displaystyle r\in \mathbb {R} ,}call the set{x∈X:p(x)<r}{\displaystyle \{x\in X:p(x)<r\}}theopen ball of radiusr{\displaystyle r}about the origin; likewise the closed ball of radiusr{\displaystyle r}is{x∈X:p(x)≤r}.{\displaystyle \{x\in X:p(x)\leq r\}.}The set of all open (resp. closed)p{\displaystyle p}-balls at the origin forms a neighborhood basis ofconvexbalancedsets that are open (resp. closed) in thep{\displaystyle p}-topology onX.{\displaystyle X.} The notions of stronger and weaker seminorms are akin to the notions of stronger and weakernorms. Ifp{\displaystyle p}andq{\displaystyle q}are seminorms onX,{\displaystyle X,}then we say thatq{\displaystyle q}isstrongerthanp{\displaystyle p}and thatp{\displaystyle p}isweakerthanq{\displaystyle q}if any of the following equivalent conditions holds: The seminormsp{\displaystyle p}andq{\displaystyle q}are calledequivalentif they are both weaker (or both stronger) than each other. This happens if they satisfy any of the following conditions: A topological vector space (TVS) is said to be aseminormable space(respectively, anormable space) if its topology is induced by a single seminorm (resp. a single norm). 
A TVS is normable if and only if it is seminormable and Hausdorff or equivalently, if and only if it is seminormable andT1(because a TVS is Hausdorff if and only if it is aT1space). Alocally bounded topological vector spaceis a topological vector space that possesses a bounded neighborhood of the origin. Normability oftopological vector spacesis characterized byKolmogorov's normability criterion. A TVS is seminormable if and only if it has a convex bounded neighborhood of the origin.[17]Thus alocally convexTVS is seminormable if and only if it has a non-empty bounded open set.[18]A TVS is normable if and only if it is aT1spaceand admits a bounded convex neighborhood of the origin. IfX{\displaystyle X}is a Hausdorfflocally convexTVS then the following are equivalent: Furthermore,X{\displaystyle X}is finite dimensional if and only ifXσ′{\displaystyle X_{\sigma }^{\prime }}is normable (hereXσ′{\displaystyle X_{\sigma }^{\prime }}denotesX′{\displaystyle X^{\prime }}endowed with theweak-* topology). 
The product of infinitely many seminormable spaces is again seminormable if and only if all but finitely many of these spaces are trivial (that is, 0-dimensional).[18] Ifp{\displaystyle p}is a seminorm on a topological vector spaceX,{\displaystyle X,}then the following are equivalent:[5] In particular, if(X,p){\displaystyle (X,p)}is a seminormed space then a seminormq{\displaystyle q}onX{\displaystyle X}is continuous if and only ifq{\displaystyle q}is dominated by a positive scalar multiple ofp.{\displaystyle p.}[3] IfX{\displaystyle X}is a real TVS,f{\displaystyle f}is a linear functional onX,{\displaystyle X,}andp{\displaystyle p}is a continuous seminorm (or more generally, a sublinear function) onX,{\displaystyle X,}thenf≤p{\displaystyle f\leq p}onX{\displaystyle X}implies thatf{\displaystyle f}is continuous.[6] IfF:(X,p)→(Y,q){\displaystyle F:(X,p)\to (Y,q)}is a map between seminormed spaces then let[15]‖F‖p,q:=sup{q(F(x)):p(x)≤1,x∈X}.{\displaystyle \|F\|_{p,q}:=\sup\{q(F(x)):p(x)\leq 1,x\in X\}.} IfF:(X,p)→(Y,q){\displaystyle F:(X,p)\to (Y,q)}is a linear map between seminormed spaces then the following are equivalent: IfF{\displaystyle F}is continuous thenq(F(x))≤‖F‖p,qp(x){\displaystyle q(F(x))\leq \|F\|_{p,q}p(x)}for allx∈X.{\displaystyle x\in X.}[15] The space of all continuous linear mapsF:(X,p)→(Y,q){\displaystyle F:(X,p)\to (Y,q)}between seminormed spaces is itself a seminormed space under the seminorm‖F‖p,q.{\displaystyle \|F\|_{p,q}.}This seminorm is a norm ifq{\displaystyle q}is a norm.[15] The concept ofnormincomposition algebrasdoesnotshare the usual properties of a norm. A composition algebra(A,∗,N){\displaystyle (A,*,N)}consists of analgebra over a fieldA,{\displaystyle A,}aninvolution∗,{\displaystyle \,*,}and aquadratic formN,{\displaystyle N,}which is called the "norm".
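The operator seminorm ‖F‖_{p,q} can be computed exactly in a small illustrative case: with the maximum norm on the domain, q∘F is convex on the square unit ball, so the supremum is attained at a vertex (the matrix and the two norms below are arbitrary choices):

```python
def p(v):          # domain norm: maximum norm on R^2
    return max(abs(v[0]), abs(v[1]))

def q(v):          # codomain norm: taxicab norm on R^2
    return abs(v[0]) + abs(v[1])

A = ((1.0, 2.0), (0.0, -1.0))      # a linear map F(x) = A x

def F(v):
    return (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])

# ||F||_{p,q} = sup{ q(F(x)) : p(x) <= 1 }.  Since q(F(.)) is convex and the
# p-unit ball is the square with vertices (+-1, +-1), the sup sits at a vertex.
vertices = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
opnorm = max(q(F(v)) for v in vertices)
assert opnorm == 4.0               # attained at (1, 1): F(1, 1) = (3, -1)

# the continuity bound q(F(x)) <= ||F|| p(x) on sample points:
for x in [(0.3, -2.0), (5.0, 5.0), (-1.0, 0.25)]:
    assert q(F(x)) <= opnorm * p(x) + 1e-12
```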
In several casesN{\displaystyle N}is anisotropic quadratic formso thatA{\displaystyle A}has at least onenull vector, contrary to the separation of points required for the usual norm discussed in this article. Anultraseminormor anon-Archimedean seminormis a seminormp:X→R{\displaystyle p:X\to \mathbb {R} }that also satisfiesp(x+y)≤max{p(x),p(y)}for allx,y∈X.{\displaystyle p(x+y)\leq \max\{p(x),p(y)\}{\text{ for all }}x,y\in X.} Weakening subadditivity: Quasi-seminorms A mapp:X→R{\displaystyle p:X\to \mathbb {R} }is called aquasi-seminormif it is (absolutely) homogeneous and there exists someb≥1{\displaystyle b\geq 1}such thatp(x+y)≤b(p(x)+p(y))for allx,y∈X.{\displaystyle p(x+y)\leq b(p(x)+p(y)){\text{ for all }}x,y\in X.}The smallest value ofb{\displaystyle b}for which this holds is called themultiplier ofp.{\displaystyle p.} A quasi-seminorm that separates points is called aquasi-normonX.{\displaystyle X.} Weakening homogeneity:k{\displaystyle k}-seminorms A mapp:X→R{\displaystyle p:X\to \mathbb {R} }is called ak{\displaystyle k}-seminormif it is subadditive and there exists ak{\displaystyle k}such that0<k≤1{\displaystyle 0<k\leq 1}and for allx∈X{\displaystyle x\in X}and scalarss,{\displaystyle s,}p(sx)=|s|kp(x){\displaystyle p(sx)=|s|^{k}p(x)}Ak{\displaystyle k}-seminorm that separates points is called ak{\displaystyle k}-normonX.{\displaystyle X.} We have the following relationship between quasi-seminorms andk{\displaystyle k}-seminorms:
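The ultrametric inequality can be illustrated with the 2-adic absolute value on the integers; this is a sketch on the additive group Z rather than a vector space over a field, so only the strong triangle inequality p(x + y) ≤ max{p(x), p(y)} is being checked:

```python
def v2(n):
    """2-adic valuation of a nonzero integer."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def p(n):
    """2-adic absolute value on Z: satisfies the ultrametric inequality
    p(x + y) <= max(p(x), p(y))."""
    return 0.0 if n == 0 else 2.0 ** (-v2(n))

for x in range(-20, 21):
    for y in range(-20, 21):
        assert p(x + y) <= max(p(x), p(y)) + 1e-12   # ultrametric inequality
        assert p(x + y) <= p(x) + p(y) + 1e-12       # hence ordinary subadditivity

assert p(8) == 0.125 and p(12) == 0.25 and p(7) == 1.0
```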
https://en.wikipedia.org/wiki/Seminorm
Inlinear algebra, asublinearfunction (orfunctionalas is more often used infunctional analysis), also called aquasi-seminormor aBanach functional, on avector spaceX{\displaystyle X}is areal-valuedfunctionwith only some of the properties of aseminorm. Unlike seminorms, a sublinear function does not have to benonnegative-valued and also does not have to beabsolutely homogeneous. Seminorms are themselves abstractions of the more well known notion ofnorms, where a seminorm has all the defining properties of a normexceptthat it is not required to map non-zero vectors to non-zero values. Infunctional analysisthe nameBanach functionalis sometimes used, reflecting that they are most commonly used when applying a general formulation of theHahn–Banach theorem. The notion of a sublinear function was introduced byStefan Banachwhen he proved his version of theHahn-Banach theorem.[1] There is also a different notion incomputer science, described below, that also goes by the name "sublinear function." LetX{\displaystyle X}be avector spaceover a fieldK,{\displaystyle \mathbb {K} ,}whereK{\displaystyle \mathbb {K} }is either thereal numbersR{\displaystyle \mathbb {R} }orcomplex numbersC.{\displaystyle \mathbb {C} .}A real-valued functionp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}is called asublinear function(or asublinearfunctionalifK=R{\displaystyle \mathbb {K} =\mathbb {R} }), and also sometimes called aquasi-seminormor aBanach functional, if it has these two properties:[1] A functionp:X→R{\displaystyle p:X\to \mathbb {R} }is calledpositive[3]ornonnegativeifp(x)≥0{\displaystyle p(x)\geq 0}for allx∈X,{\displaystyle x\in X,}although some authors[4]definepositiveto instead mean thatp(x)≠0{\displaystyle p(x)\neq 0}wheneverx≠0;{\displaystyle x\neq 0;}these definitions are not equivalent. 
It is asymmetric functionifp(−x)=p(x){\displaystyle p(-x)=p(x)}for allx∈X.{\displaystyle x\in X.}Every subadditive symmetric function is necessarily nonnegative.[proof 1]A sublinear function on a real vector space issymmetricif and only if it is aseminorm. A sublinear function on a real or complex vector space is a seminorm if and only if it is abalanced functionor equivalently, if and only ifp(ux)≤p(x){\displaystyle p(ux)\leq p(x)}for everyunit lengthscalaru{\displaystyle u}(satisfying|u|=1{\displaystyle |u|=1}) and everyx∈X.{\displaystyle x\in X.} The set of all sublinear functions onX,{\displaystyle X,}denoted byX#,{\displaystyle X^{\#},}can bepartially orderedby declaringp≤q{\displaystyle p\leq q}if and only ifp(x)≤q(x){\displaystyle p(x)\leq q(x)}for allx∈X.{\displaystyle x\in X.}A sublinear function is calledminimalif it is aminimal elementofX#{\displaystyle X^{\#}}under this order. A sublinear function is minimal if and only if it is a reallinear functional.[1] Everynorm,seminorm, and real linear functional is a sublinear function. 
Theidentity functionR→R{\displaystyle \mathbb {R} \to \mathbb {R} }onX:=R{\displaystyle X:=\mathbb {R} }is an example of a sublinear function (in fact, it is even a linear functional) that is neither positive nor a seminorm; the same is true of this map's negationx↦−x.{\displaystyle x\mapsto -x.}[5]More generally, for any reala≤b,{\displaystyle a\leq b,}the mapSa,b:R→Rx↦{axifx≤0bxifx≥0{\displaystyle {\begin{alignedat}{4}S_{a,b}:\;&&\mathbb {R} &&\;\to \;&\mathbb {R} \\[0.3ex]&&x&&\;\mapsto \;&{\begin{cases}ax&{\text{ if }}x\leq 0\\bx&{\text{ if }}x\geq 0\\\end{cases}}\\\end{alignedat}}}is a sublinear function onX:=R{\displaystyle X:=\mathbb {R} }and moreover, every sublinear functionp:R→R{\displaystyle p:\mathbb {R} \to \mathbb {R} }is of this form; specifically, ifa:=−p(−1){\displaystyle a:=-p(-1)}andb:=p(1){\displaystyle b:=p(1)}thena≤b{\displaystyle a\leq b}andp=Sa,b.{\displaystyle p=S_{a,b}.} Ifp{\displaystyle p}andq{\displaystyle q}are sublinear functions on a real vector spaceX{\displaystyle X}then so is the mapx↦max{p(x),q(x)}.{\displaystyle x\mapsto \max\{p(x),q(x)\}.}More generally, ifP{\displaystyle {\mathcal {P}}}is any non-empty collection of sublinear functionals on a real vector spaceX{\displaystyle X}and if for allx∈X,{\displaystyle x\in X,}q(x):=sup{p(x):p∈P},{\displaystyle q(x):=\sup\{p(x):p\in {\mathcal {P}}\},}thenq{\displaystyle q}is a sublinear functional onX.{\displaystyle X.}[5] A functionp:X→R{\displaystyle p:X\to \mathbb {R} }which issubadditive,convex, and satisfiesp(0)≤0{\displaystyle p(0)\leq 0}is also positively homogeneous (the latter conditionp(0)≤0{\displaystyle p(0)\leq 0}is necessary as the example ofp(x):=x2+1{\displaystyle p(x):={\sqrt {x^{2}+1}}}onX:=R{\displaystyle X:=\mathbb {R} }shows). Ifp{\displaystyle p}is positively homogeneous, it is convex if and only if it is subadditive. Therefore, assumingp(0)≤0{\displaystyle p(0)\leq 0}, any two properties among subadditivity, convexity, and positive homogeneity imply the third.
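The classification of sublinear functions on R can be spot-checked numerically; the parameters a = −1, b = 3 and the random samples below are illustrative:

```python
import random

def S(a, b):
    """The general sublinear function on R: a*x for x <= 0 and b*x for x >= 0,
    assuming a <= b."""
    return lambda x: a * x if x <= 0 else b * x

p = S(-1.0, 3.0)
random.seed(2)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.uniform(0, 5)
    assert p(x + y) <= p(x) + p(y) + 1e-9     # subadditivity
    assert abs(p(t * x) - t * p(x)) < 1e-9    # positive homogeneity

# Recovering the parameters as in the text: a = -p(-1), b = p(1).
assert -p(-1) == -1.0 and p(1) == 3.0

# A pointwise maximum of sublinear functions is again sublinear:
q = lambda x: max(S(-1.0, 3.0)(x), S(-2.0, 1.0)(x))
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert q(x + y) <= q(x) + q(y) + 1e-9
```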
Every sublinear function is aconvex function: For0≤t≤1,{\displaystyle 0\leq t\leq 1,}p(tx+(1−t)y)≤p(tx)+p((1−t)y)subadditivity=tp(x)+(1−t)p(y)nonnegative homogeneity{\displaystyle {\begin{alignedat}{3}p(tx+(1-t)y)&\leq p(tx)+p((1-t)y)&&\quad {\text{ subadditivity}}\\&=tp(x)+(1-t)p(y)&&\quad {\text{ nonnegative homogeneity}}\\\end{alignedat}}} Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a sublinear function on a vector spaceX{\displaystyle X}then[proof 2][3]p(0)=0≤p(x)+p(−x),{\displaystyle p(0)~=~0~\leq ~p(x)+p(-x),}for everyx∈X,{\displaystyle x\in X,}which implies that at least one ofp(x){\displaystyle p(x)}andp(−x){\displaystyle p(-x)}must be nonnegative; that is, for everyx∈X,{\displaystyle x\in X,}[3]0≤max{p(x),p(−x)}.{\displaystyle 0~\leq ~\max\{p(x),p(-x)\}.}Moreover, whenp:X→R{\displaystyle p:X\to \mathbb {R} }is a sublinear function on a real vector space then the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x)=defmax{p(x),p(−x)}{\displaystyle q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}}is a seminorm.[3] Subadditivity ofp:X→R{\displaystyle p:X\to \mathbb {R} }guarantees that for all vectorsx,y∈X,{\displaystyle x,y\in X,}[1][proof 3]p(x)−p(y)≤p(x−y),{\displaystyle p(x)-p(y)~\leq ~p(x-y),}−p(x)≤p(−x),{\displaystyle -p(x)~\leq ~p(-x),}so ifp{\displaystyle p}is alsosymmetricthen thereverse triangle inequalitywill hold for all vectorsx,y∈X,{\displaystyle x,y\in X,}|p(x)−p(y)|≤p(x−y).{\displaystyle |p(x)-p(y)|~\leq ~p(x-y).} Definingker⁡p=defp−1(0),{\displaystyle \ker p~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~p^{-1}(0),}then subadditivity also guarantees that for allx∈X,{\displaystyle x\in X,}the value ofp{\displaystyle p}on the setx+(ker⁡p∩−ker⁡p)={x+k:p(k)=0=p(−k)}{\displaystyle x+(\ker p\cap -\ker p)=\{x+k:p(k)=0=p(-k)\}}is constant and equal top(x).{\displaystyle p(x).}[proof 4]In particular, ifker⁡p=p−1(0){\displaystyle \ker p=p^{-1}(0)}is a vector subspace ofX{\displaystyle X}then−ker⁡p=ker⁡p{\displaystyle 
-\ker p=\ker p}and the assignmentx+ker⁡p↦p(x),{\displaystyle x+\ker p\mapsto p(x),}which will be denoted byp^,{\displaystyle {\hat {p}},}is a well-defined real-valued sublinear function on thequotient spaceX/ker⁡p{\displaystyle X\,/\,\ker p}that satisfiesp^−1(0)=ker⁡p.{\displaystyle {\hat {p}}^{-1}(0)=\ker p.}Ifp{\displaystyle p}is a seminorm thenp^{\displaystyle {\hat {p}}}is just the usual canonical norm on the quotient spaceX/ker⁡p.{\displaystyle X\,/\,\ker p.} Pryce's sublinearity lemma[2]—Supposep:X→R{\displaystyle p:X\to \mathbb {R} }is a sublinear functional on a vector spaceX{\displaystyle X}and thatK⊆X{\displaystyle K\subseteq X}is a non-empty convex subset. Ifx∈X{\displaystyle x\in X}is a vector anda,c>0{\displaystyle a,c>0}are positive real numbers such thatp(x)+ac<infk∈Kp(x+ak){\displaystyle p(x)+ac~<~\inf _{k\in K}p(x+ak)}then for every positive realb>0{\displaystyle b>0}there exists somez∈K{\displaystyle \mathbf {z} \in K}such thatp(x+az)+bc<infk∈Kp(x+az+bk).{\displaystyle p(x+a\mathbf {z} )+bc~<~\inf _{k\in K}p(x+a\mathbf {z} +bk).} Addingbc{\displaystyle bc}to both sides of the hypothesisp(x)+ac<infp(x+aK){\textstyle p(x)+ac\,<\,\inf _{}p(x+aK)}(wherep(x+aK)=def{p(x+ak):k∈K}{\displaystyle p(x+aK)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{p(x+ak):k\in K\}}) and combining that with the conclusion givesp(x)+ac+bc<infp(x+aK)+bc≤p(x+az)+bc<infp(x+az+bK){\displaystyle p(x)+ac+bc~<~\inf _{}p(x+aK)+bc~\leq ~p(x+a\mathbf {z} )+bc~<~\inf _{}p(x+a\mathbf {z} +bK)}which yields many more inequalities, including, for instance,p(x)+ac+bc<p(x+az)+bc<p(x+az+bz){\displaystyle p(x)+ac+bc~<~p(x+a\mathbf {z} )+bc~<~p(x+a\mathbf {z} +b\mathbf {z} )}in which an expression on one side of a strict inequality<{\displaystyle \,<\,}can be obtained from the other by replacing the symbolc{\displaystyle c}withz{\displaystyle \mathbf {z} }(or vice versa) and moving the closing parenthesis to the right (or left) of an adjacent summand (all other symbols remain fixed and 
unchanged). Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a real-valued sublinear function on a real vector spaceX{\displaystyle X}(or ifX{\displaystyle X}is complex, then when it is considered as a real vector space) then the mapq(x)=defmax{p(x),p(−x)}{\displaystyle q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}}defines aseminormon the real vector spaceX{\displaystyle X}called theseminorm associated withp.{\displaystyle p.}[3]A sublinear functionp{\displaystyle p}on a real or complex vector space is asymmetric functionif and only ifp=q{\displaystyle p=q}whereq(x)=defmax{p(x),p(−x)}{\displaystyle q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}}as before. More generally, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a real-valued sublinear function on a (real or complex) vector spaceX{\displaystyle X}thenq(x)=defsup|u|=1p(ux)=sup{p(ux):uis a unit scalar}{\displaystyle q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\sup _{|u|=1}p(ux)~=~\sup\{p(ux):u{\text{ is a unit scalar }}\}}will define aseminormonX{\displaystyle X}if this supremum is always a real number (that is, never equal to∞{\displaystyle \infty }). 
Ifp{\displaystyle p}is a sublinear function on a real vector spaceX{\displaystyle X}then the following are equivalent:[1] Ifp{\displaystyle p}is a sublinear function on a real vector spaceX{\displaystyle X}then there exists a linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf≤p.{\displaystyle f\leq p.}[1] IfX{\displaystyle X}is a real vector space,f{\displaystyle f}is a linear functional onX,{\displaystyle X,}andp{\displaystyle p}is a positive sublinear function onX,{\displaystyle X,}thenf≤p{\displaystyle f\leq p}onX{\displaystyle X}if and only iff−1(1)∩{x∈X:p(x)<1}=∅.{\displaystyle f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .}[1] A real-valued functionf{\displaystyle f}defined on a subset of a real or complex vector spaceX{\displaystyle X}is said to bedominated bya sublinear functionp{\displaystyle p}iff(x)≤p(x){\displaystyle f(x)\leq p(x)}for everyx{\displaystyle x}that belongs to the domain off.{\displaystyle f.}Iff:X→R{\displaystyle f:X\to \mathbb {R} }is a reallinear functionalonX{\displaystyle X}then[6][1]f{\displaystyle f}is dominated byp{\displaystyle p}(that is,f≤p{\displaystyle f\leq p}) if and only if−p(−x)≤f(x)≤p(x)for everyx∈X.{\displaystyle -p(-x)\leq f(x)\leq p(x)\quad {\text{ for every }}x\in X.}Moreover, ifp{\displaystyle p}is a seminorm or some othersymmetric map(which by definition means thatp(−x)=p(x){\displaystyle p(-x)=p(x)}holds for allx{\displaystyle x}) thenf≤p{\displaystyle f\leq p}if and only if|f|≤p.{\displaystyle |f|\leq p.} Theorem[1]—Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a sublinear function on a real vector spaceX{\displaystyle X}and ifz∈X{\displaystyle z\in X}then there exists a linear functionalf{\displaystyle f}onX{\displaystyle X}that is dominated byp{\displaystyle p}(that is,f≤p{\displaystyle f\leq p}) and satisfiesf(z)=p(z).{\displaystyle f(z)=p(z).}Moreover, ifX{\displaystyle X}is atopological vector spaceandp{\displaystyle p}is continuous at the origin thenf{\displaystyle f}is continuous.
Theorem[7]—Supposef:X→R{\displaystyle f:X\to \mathbb {R} }is a subadditive function (that is,f(x+y)≤f(x)+f(y){\displaystyle f(x+y)\leq f(x)+f(y)}for allx,y∈X{\displaystyle x,y\in X}). Thenf{\displaystyle f}is continuous at the origin if and only iff{\displaystyle f}is uniformly continuous onX.{\displaystyle X.}Iff{\displaystyle f}satisfiesf(0)=0{\displaystyle f(0)=0}thenf{\displaystyle f}is continuous if and only if its absolute value|f|:X→[0,∞){\displaystyle |f|:X\to [0,\infty )}is continuous. Iff{\displaystyle f}is non-negative thenf{\displaystyle f}is continuous if and only if{x∈X:f(x)<1}{\displaystyle \{x\in X:f(x)<1\}}is open inX.{\displaystyle X.} SupposeX{\displaystyle X}is atopological vector space(TVS) over the real or complex numbers andp{\displaystyle p}is a sublinear function onX.{\displaystyle X.}Then the following are equivalent:[7] and ifp{\displaystyle p}is positive then this list may be extended to include: IfX{\displaystyle X}is a real TVS,f{\displaystyle f}is a linear functional onX,{\displaystyle X,}andp{\displaystyle p}is a continuous sublinear function onX,{\displaystyle X,}thenf≤p{\displaystyle f\leq p}onX{\displaystyle X}implies thatf{\displaystyle f}is continuous.[7] Theorem[7]—IfU{\displaystyle U}is a convex open neighborhood of the origin in atopological vector spaceX{\displaystyle X}then theMinkowski functionalofU,{\displaystyle U,}pU:X→[0,∞),{\displaystyle p_{U}:X\to [0,\infty ),}is a continuous non-negative sublinear function onX{\displaystyle X}such thatU={x∈X:pU(x)<1};{\displaystyle U=\left\{x\in X:p_{U}(x)<1\right\};}if in additionU{\displaystyle U}is abalanced setthenpU{\displaystyle p_{U}}is aseminormonX.{\displaystyle X.} Theorem[7]—Suppose thatX{\displaystyle X}is atopological vector space(not necessarilylocally convexorHausdorff) over the real or complex numbers. 
Then the open convex subsets ofX{\displaystyle X}are exactly those that are of the formz+{x∈X:p(x)<1}={x∈X:p(x−z)<1}{\displaystyle z+\{x\in X:p(x)<1\}=\{x\in X:p(x-z)<1\}}for somez∈X{\displaystyle z\in X}and some positive continuous sublinear functionp{\displaystyle p}onX.{\displaystyle X.} LetV{\displaystyle V}be an open convex subset ofX.{\displaystyle X.}If0∈V{\displaystyle 0\in V}then letz:=0{\displaystyle z:=0}and otherwise letz∈V{\displaystyle z\in V}be arbitrary. Letp:X→[0,∞){\displaystyle p:X\to [0,\infty )}be theMinkowski functionalofV−z,{\displaystyle V-z,}which is a continuous sublinear function onX{\displaystyle X}sinceV−z{\displaystyle V-z}is convex,absorbing, and open (p{\displaystyle p}however is not necessarily a seminorm sinceV{\displaystyle V}was not assumed to bebalanced). FromX=X−z,{\displaystyle X=X-z,}it follows thatz+{x∈X:p(x)<1}={x∈X:p(x−z)<1}.{\displaystyle z+\{x\in X:p(x)<1\}=\{x\in X:p(x-z)<1\}.}It will be shown thatV=z+{x∈X:p(x)<1},{\displaystyle V=z+\{x\in X:p(x)<1\},}which will complete the proof. One of the knownproperties of Minkowski functionalsguarantees{x∈X:p(x)<1}=(0,1)(V−z),{\textstyle \{x\in X:p(x)<1\}=(0,1)(V-z),}where(0,1)(V−z)=def{tx:0<t<1,x∈V−z}=V−z{\displaystyle (0,1)(V-z)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{tx:0<t<1,x\in V-z\}=V-z}sinceV−z{\displaystyle V-z}is convex and contains the origin. ThusV−z={x∈X:p(x)<1},{\displaystyle V-z=\{x\in X:p(x)<1\},}as desired.◼{\displaystyle \blacksquare } The concept can be extended to operators that are homogeneous and subadditive. This requires only that thecodomainbe, say, anordered vector spaceto make sense of the conditions. Incomputer science, a functionf:Z+→R{\displaystyle f:\mathbb {Z} ^{+}\to \mathbb {R} }is calledsublineariflimn→∞f(n)n=0,{\displaystyle \lim _{n\to \infty }{\frac {f(n)}{n}}=0,}orf(n)∈o(n){\displaystyle f(n)\in o(n)}inasymptotic notation(notice the smallo{\displaystyle o}). 
Formally,f(n)∈o(n){\displaystyle f(n)\in o(n)}if and only if, for any givenc>0,{\displaystyle c>0,}there exists anN{\displaystyle N}such thatf(n)<cn{\displaystyle f(n)<cn}forn≥N.{\displaystyle n\geq N.}[8]That is,f{\displaystyle f}grows slower than any linear function. The two meanings should not be confused: while a Banach functional isconvex, almost the opposite is true for functions of sublinear growth: every functionf(n)∈o(n){\displaystyle f(n)\in o(n)}can be upper-bounded by aconcave functionof sublinear growth.[9]
https://en.wikipedia.org/wiki/Sublinear_function
Āryabhata's sine tableis a set of twenty-four numbers given in the astronomical treatiseĀryabhatiyacomposed by the fifth centuryIndian mathematicianand astronomerĀryabhata(476–550 CE), for the computation of thehalf-chordsof a certain set of arcs of a circle. The set of numbers appears in verse 12 in Chapter 1Dasagitikaof Aryabhatiya and is the first table of sines.[1][2]It is not a table in the modern sense of a mathematical table; that is, it is not a set of numbers arranged into rows and columns.[3][4][5]Āryabhaṭa's table is also not a set of values of the trigonometric sine function in a conventional sense; it is a table of thefirst differencesof the values oftrigonometric sinesexpressed inarcminutes, and because of this the table is also referred to asĀryabhaṭa's table of sine-differences.[6][7] Āryabhaṭa's table was the first sine table ever constructed in thehistory of mathematics.[8]The now lost tables ofHipparchus(c. 190 BC – c. 120 BC) andMenelaus(c. 70–140 CE) andthose ofPtolemy(c. AD 90 – c. 168) were all tables ofchordsand not of half-chords.[8]Āryabhaṭa's table remained as the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table. These endeavors culminated in the eventual discovery of thepower series expansionsof the sine and cosine functions byMadhava of Sangamagrama(c. 1350 – c. 1425), the founder of theKerala school of astronomy and mathematics, and the tabulation of asine table by Madhavawith values accurate to seven or eight decimal places. Some historians of mathematics have argued that the sine table given in Āryabhaṭiya was an adaptation of earlier such tables constructed by mathematicians and astronomers of ancient Greece.[9]David Pingree, one of America's foremost historians of the exact sciences in antiquity, was an exponent of such a view. Assuming this hypothesis,G. J. 
Toomer[10][11][12]writes, "Hardly any documentation exists for the earliest arrival of Greek astronomical models in India, or for that matter what those models would have looked like. So it is very difficult to ascertain the extent to which what has come down to us represents transmitted knowledge, and what is original with Indian scientists. ... The truth is probably a tangled mixture of both."[13] The values encoded in Āryabhaṭa's Sanskrit verse can be decoded using thenumerical schemeexplained inĀryabhaṭīya, and the decoded numbers are listed in the table below. In the table, the angle measures relevant to Āryabhaṭa's sine table are listed in the second column. The third column lists the numbers contained in the Sanskrit verse given above inDevanagariscript. For the convenience of users unable to read Devanagari, these word-numerals are reproduced in the fourth column inISO 15919transliteration. The next column contains these numbers in theHindu-Arabic numerals. Āryabhaṭa's numbers are the first differences in the values of sines. The corresponding value of sine (or more precisely, ofjya) can be obtained by summing up the differences up to and including that entry. Thus the value ofjyacorresponding to 18° 45′ is the sum 225 + 224 + 222 + 219 + 215 = 1105. For assessing the accuracy of Āryabhaṭa's computations, the modern values ofjyas are given in the last column of the table. In the Indian mathematical tradition, the sine (orjya) of an angle is not a ratio of numbers. It is the length of a certain line segment, a certain half-chord. The radius of the base circle is the basic parameter for the construction of such tables. Historically, several tables have been constructed using different values for this parameter. Āryabhaṭa chose the number 3438 as the value of the radius of the base circle for the computation of his sine table. The rationale for the choice of this parameter is the idea of measuring the circumference of a circle in angle measures.
In astronomical computations distances are measured indegrees,minutes,seconds, etc. In this measure, the circumference of a circle is 360° = (60 × 360) minutes = 21600 minutes. The radius of the circle, the measure of whose circumference is 21600 minutes, is 21600 / 2π minutes. Computing this using the valueπ= 3.1416 known toAryabhata, one gets the radius of the circle as approximately 3438 minutes. Āryabhaṭa's sine table is based on this value for the radius of the base circle. It has not yet been established who was the first to use this value for the base radius. ButAryabhatiyais the earliest surviving text containing a reference to this basic constant.[14] The second section of Āryabhaṭiya, titledGanitapāda, contains a stanza indicating a method for the computation of the sine table. There are several ambiguities in correctly interpreting the meaning of this verse. For example, the following is a translation of the verse given by Katz wherein the words in square brackets are insertions of the translator and not translations of texts in the verse.[14] This may be referring to the fact that the second derivative of the sine function is equal to the negative of the sine function.
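The arithmetic above, and the first few entries of the table, can be checked directly; the following is a sketch (the rounding convention is an assumption):

```python
# 21600 arcminutes of circumference with pi = 3.1416 gives a radius of
# about 3438 arcminutes, and rounding R*sin(x) with R = 3438 at steps of
# 225' (3 deg 45') reproduces the first sine-differences 225, 224, 222, ...
import math

assert round(21600 / (2 * 3.1416)) == 3438

R = 3438
step_deg = 3.75   # 225 arcminutes = 3 degrees 45 arcminutes
jya = [round(R * math.sin(math.radians(k * step_deg))) for k in range(1, 25)]
diffs = [jya[0]] + [b - a for a, b in zip(jya, jya[1:])]
print(diffs[:5])                          # [225, 224, 222, 219, 215]
assert jya[4] == sum(diffs[:5]) == 1105   # jya(18 deg 45') as in the text
```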
https://en.wikipedia.org/wiki/%C4%80ryabha%E1%B9%ADa%27s_sine_table
Inmathematics,Bhāskara I's sine approximation formulais arational expressionin onevariablefor thecomputationof theapproximate valuesof thetrigonometric sinesdiscovered byBhāskara I(c. 600 – c. 680), a seventh-century Indianmathematician.[1]Thisformulais given in his treatise titledMahabhaskariya. It is not known how Bhāskara I arrived at his approximation formula. However, severalhistoriansofmathematicshave put forward different hypotheses as to the method Bhāskara might have used to arrive at his formula. The formula is elegant and simple, and it enables the computation of reasonably accurate values of trigonometric sines without the use of geometry.[2] The formula is given in verses 17–19, chapter VII, Mahabhaskariya of Bhāskara I. A translation of the verses is given below:[3] (Now) I briefly state the rule (for finding thebhujaphalaand thekotiphala, etc.) without making use of the Rsine-differences 225, etc. Subtract the degrees of abhuja(orkoti) from the degrees of a half circle (that is, 180 degrees). Then multiply the remainder by the degrees of thebhujaorkotiand put down the result at two places. At one place subtract the result from 40500. By one-fourth of the remainder (thus obtained), divide the result at the other place as multiplied by theanthyaphala(that is, the epicyclic radius). Thus is obtained the entirebahuphala(or,kotiphala) for the sun, moon or the star-planets. So also are obtained the direct and inverse Rsines. (The reference "Rsine-differences 225" is an allusion toAryabhata's sine table.) In modern mathematical notations, for an anglexin degrees, this formula gives[3] Bhāskara I's sine approximation formula can be expressed using theradianmeasure ofanglesas follows:[1] For a positive integernthis takes the following form:[4] The formula acquires an even simpler form when expressed in terms of the cosine rather than the sine. 
Using radian measure for angles from−π2{\displaystyle -{\frac {\pi }{2}}}toπ2{\displaystyle {\frac {\pi }{2}}}and puttingx=12π+y{\displaystyle x={\tfrac {1}{2}}\pi +y}, one gets To express the previous formula with the constantτ=2π,{\displaystyle \tau =2\pi ,}one can use Equivalent forms of Bhāskara I's formula have been given by almost all subsequent astronomers and mathematicians of India. For example,Brahmagupta's (598–668CE)Brhma-Sphuta-Siddhanta(verses 23–24, chapter XIV)[3]gives the formula in the following form: Also,Bhāskara II(1114–1185CE) has given this formula in hisLilavati(Kshetra-vyavahara, Śloka No. 48) in the following form: The approximation can also be used to derive formulas for inverse cosine and inverse sine: Alternatively, using absolute values and theSign function, each pair of functions can be rewritten as such: The formula is applicable for values ofx° in the range from 0° to 180°. The formula is remarkably accurate in this range. The graphs of sinxand the approximation formula are visually indistinguishable. One of the accompanying figures gives the graph of the error function, namely, the difference between sinxand the value given by the formula. It shows that the maximum absolute error in using the formula is around 0.0016. From a plot of the percentage value of the absolute error, it is clear that the maximum relative error is less than 1.8%. The approximation formula thus gives sufficiently accurate values of sines for most practical purposes. However, it was not sufficient for the more accurate computational requirements of astronomy. The search for more accurate formulas by Indian astronomers eventually led to the discovery of thepower seriesexpansions of sinxand cosxbyMadhava of Sangamagrama(c. 1350 – c. 1425), the founder of theKerala school of astronomy and mathematics. Bhāskara did not indicate any method by which he arrived at his formula. Historians have speculated on various possibilities.
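The accuracy claims are easy to check numerically; the following is a sketch (the sampling grid is an arbitrary choice):

```python
# Bhaskara I's formula sin x ~= 4x(180 - x) / (40500 - x(180 - x)),
# with x in degrees, checked against math.sin on a fine grid.
import math

def bhaskara_sin(x_deg):
    t = x_deg * (180.0 - x_deg)
    return 4.0 * t / (40500.0 - t)

xs = [k * 0.01 for k in range(1, 18000)]      # 0 < x < 180
errs = [abs(bhaskara_sin(x) - math.sin(math.radians(x))) for x in xs]
max_abs_err = max(errs)
print(round(max_abs_err, 4))    # about 0.0016, as stated in the text

# the formula is exact at 0, 30, 90, 150 and 180 degrees
for x, v in [(0, 0.0), (30, 0.5), (90, 1.0), (150, 0.5), (180, 0.0)]:
    assert abs(bhaskara_sin(x) - v) < 1e-12
assert max_abs_err < 0.0017
```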
No definitive answers have as yet been obtained. Beyond its historical importance of being a prime example of the mathematical achievements of ancient Indian astronomers, the formula is also of significance from a modern perspective. Mathematicians have attempted to derive the rule using modern concepts and tools. Around half a dozen methods have been suggested, each based on a separate set of premises.[2][3]Most of these derivations use only elementary concepts. Let thecircumferenceof acirclebe measured indegreesand let theradiusRof thecirclebe also measured indegrees. Choosing a fixed diameterABand an arbitrary pointPon the circle and dropping the perpendicularPMtoAB, we can compute the area of the triangleAPBin two ways. Equating the two expressions for the area one gets(1/2)AB×PM= (1/2)AP×BP. This givesPM=AP×BP/AB, so that sinx° =PM/R=AP×BP/(AB×R). Lettingxbe the length of the arcAP, the length of the arcBPis180 −x. These arcs are much bigger than the respective chords. Hence, replacing the chords by the arcs, one gets the overestimate sinx° <x(180 −x)/(AB×R). One now seeks two constants α and β such that sinx° =x(180 −x)/(α + βx(180 −x)). It is indeed not possible to obtain such constants. However, one may choose values for α and β so that the above expression is valid for two chosen values of the arc lengthx. Choosing 30° and 90° as these values and solving the resulting equations, one immediately gets Bhāskara I's sine approximation formula.[2][3] Assuming thatxis in radians, one may seek an approximation to sinxin the following form:sin⁡x≈a+bx+cx2p+qx+rx2.{\displaystyle \sin x\approx {\frac {a+bx+cx^{2}}{p+qx+rx^{2}}}.}The constantsa,b,c,p,qandr(only five of them are independent) can be determined by assuming that the formula must be exactly valid whenx= 0, π/6, π/2, π, and further assuming that it has to satisfy the property that sin(x) = sin(π −x).[2][3]This procedure produces the formula expressed usingradianmeasure of angles. The part of the graph of sinxin the range from 0° to 180° "looks like" part of a parabola through the points (0, 0) and (180, 0).
The general form of such a parabola isy=kx(180 −x). The parabola that also passes through (90, 1) (which is the point corresponding to the value sin(90°) = 1) isy=x(180 −x)/(90 × 90). The parabola which also passes through (30, 1/2) (which is the point corresponding to the value sin(30°) = 1/2) isy=x(180 −x)/(2 × 30 × 150). These expressions suggest a varying denominator which takes the value 90 × 90 whenx= 90 and the value2 × 30 × 150whenx= 30. That this expression should also be symmetrical about the linex= 90 rules out the possibility of choosing a linear expression inx. Computations involvingx(180 −x) might immediately suggest that the expression could be of the formy=x(180 −x)/(8100a+bx(180 −x)). A little experimentation (or by setting up and solving two linear equations inaandb) will yield the valuesa= 5/4,b= −1/4. These give Bhāskara I's sine approximation formula.[4] Karel Stroethoff (2014) offers a similar, but simpler argument for Bhāskara I's choice. He also provides an analogous approximation for the cosine and extends the technique to second and third-order polynomials.[5]
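The two linear equations mentioned above can be set up and solved exactly; the following sketch assumes the denominator is parametrized as D(x) = 8100a + b·x(180 − x), so that sin x ≈ x(180 − x)/D(x):

```python
# Conditions on the denominator D(x) = 8100*a + b*x*(180 - x):
#   at x = 90:  8100*a + 8100*b = 8100   (sin 90 deg = 1)
#   at x = 30:  8100*a + 4500*b = 9000   (sin 30 deg = 1/2)
from fractions import Fraction

# subtracting the equations eliminates a:  3600*b = -900
b = Fraction(-900, 3600)
a = (Fraction(8100) - 8100 * b) / 8100
assert (a, b) == (Fraction(5, 4), Fraction(-1, 4))

# substituting back gives Bhaskara's formula 4x(180-x)/(40500 - x(180-x))
x = Fraction(30)
t = x * (180 - x)
assert t / (8100 * a + b * t) == Fraction(4) * t / (40500 - t) == Fraction(1, 2)
print(a, b)   # 5/4 -1/4
```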
https://en.wikipedia.org/wiki/Bhaskara_I%27s_sine_approximation_formula
Inmathematics, thediscrete sine transform (DST)is aFourier-related transformsimilar to thediscrete Fourier transform(DFT), but using a purelyrealmatrix. It is equivalent to the imaginary parts of a DFT of roughly twice the length, operating on real data withoddsymmetry(since the Fourier transform of a real and odd function is imaginary and odd), where in some variants the input and/or output data are shifted by half a sample. The DST is related to thediscrete cosine transform(DCT), which is equivalent to a DFT of real andevenfunctions. See the DCT article for a general discussion of how the boundary conditions relate the various DCT and DST types. Generally, the DST is derived from the DCT by replacing theNeumann conditionatx=0 with aDirichlet condition.[1]Both the DCT and the DST were described byNasir Ahmed, T. Natarajan, andK.R. Raoin 1974.[2][3]The type-I DST (DST-I) was later described byAnil K. Jainin 1976, and the type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.[4] DSTs are widely employed in solvingpartial differential equationsbyspectral methods, where the different variants of the DST correspond to slightly different odd/even boundary conditions at the two ends of the array. Like any Fourier-related transform, discrete sine transforms (DSTs) express a function or a signal in terms of a sum ofsinusoidswith differentfrequenciesandamplitudes. Like thediscrete Fourier transform(DFT), a DST operates on a function at a finite number of discrete data points. The obvious distinction between a DST and a DFT is that the former uses onlysine functions, while the latter uses both cosines and sines (in the form ofcomplex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DST implies differentboundary conditionsthan the DFT or other related transforms. 
The Fourier-related transforms that operate on a function over a finitedomain, such as the DFT or DST or aFourier series, can be thought of as implicitly defining anextensionof that function outside the domain. That is, once you write a functionf(x){\displaystyle f(x)}as a sum of sinusoids, you can evaluate that sum at anyx{\displaystyle x}, even forx{\displaystyle x}where the originalf(x){\displaystyle f(x)}was not specified. The DFT, like the Fourier series, implies aperiodicextension of the original function. A DST, like asine transform, implies anoddextension of the original function. However, because DSTs operate onfinite,discretesequences, two issues arise that do not apply to the continuous sine transform. First, one has to specify whether the function is even or odd atboththe left and right boundaries of the domain (i.e. the min-nand max-nboundaries in the definitions below, respectively). Second, one has to specify aroundwhat pointthe function is even or odd. In particular, consider a sequence (a,b,c) of three equally spaced data points, and say that we specify an oddleftboundary. There are two sensible possibilities: either the data is odd about the pointpriortoa, in which case the odd extension is (−c,−b,−a,0,a,b,c), or the data is odd about the pointhalfwaybetweenaand the previous point, in which case the odd extension is (−c,−b,−a,a,b,c). These choices lead to all the standard variations of DSTs and alsodiscrete cosine transforms(DCTs). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of2×2×2×2=16{\displaystyle 2\times 2\times 2\times 2=16}possibilities. Half of these possibilities, those where theleftboundary is odd, correspond to the 8 types of DST; the other half are the 8 types of DCT.
These different boundary conditions strongly affect the applications of the transform, and lead to uniquely useful properties for the various DST types. Most directly, when using Fourier-related transforms to solvepartial differential equationsbyspectral methods, the boundary conditions are directly specified as a part of the problem being solved. Formally, the discrete sine transform is alinear, invertiblefunctionF:RN→RN(whereRdenotes the set ofreal numbers), or equivalently anN×Nsquare matrix. There are several variants of the DST with slightly modified definitions. TheNreal numbersx0,...,xN− 1are transformed into theNreal numbersX0,...,XN− 1according to one of the formulas: Xk=∑n=0N−1xnsin⁡[πN+1(n+1)(k+1)]k=0,…,N−1{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\sin \left[{\frac {\pi }{N+1}}(n+1)(k+1)\right]\quad \quad k=0,\dots ,N-1} The DST-I matrix isorthogonal(up to a scale factor). A DST-I is exactly equivalent to a DFT of a real sequence that is odd around the zeroth and middle points, scaled by 1/2. For example, a DST-I ofN=3 real numbers (a,b,c) is exactly equivalent to a DFT of eight real numbers (0,a,b,c,0,−c,−b,−a) (odd symmetry), scaled by 1/2. (In contrast, DST types II–IV involve a half-sample shift in the equivalent DFT.) This is the reason for theN+ 1 in the denominator of the sine function: the equivalent DFT has 2(N+1) points and has 2π/2(N+1) in its sinusoid frequency, so the DST-I has π/(N+1) in its frequency. Thus, the DST-I corresponds to the boundary conditions:xnis odd aroundn= −1 and odd aroundn=N; similarly forXk. Xk=∑n=0N−1xnsin⁡[πN(n+12)(k+1)]k=0,…,N−1{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\sin \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)(k+1)\right]\quad \quad k=0,\dots ,N-1} Some authors further multiply theXN− 1term by 1/√2(see below for the corresponding change in DST-III). This makes the DST-II matrixorthogonal(up to a scale factor), but breaks the direct correspondence with a real-odd DFT of half-shifted input. The DST-II implies the boundary conditions:xnis odd aroundn= −1/2 and odd aroundn=N− 1/2;Xkis odd aroundk= −1 and even aroundk=N− 1.
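The DST-I/DFT equivalence described above can be verified numerically; the following sketch assumes the DFT convention Y_k = Σ_n y_n·exp(−2πi·nk/M), under which the DFT of the odd extension is purely imaginary and DST-I(x)_k = −Im(Y_{k+1})/2:

```python
# DST-I of (a, b, c) versus the DFT of the odd extension
# (0, a, b, c, 0, -c, -b, -a), as described in the text.
import cmath, math

def dst1(x):
    N = len(x)
    return [sum(x[n] * math.sin(math.pi * (n + 1) * (k + 1) / (N + 1))
                for n in range(N)) for k in range(N)]

def dft(y):
    M = len(y)
    return [sum(y[n] * cmath.exp(-2j * cmath.pi * n * k / M)
                for n in range(M)) for k in range(M)]

x = [1.0, 2.0, 3.0]
a, b, c = x
Y = dft([0.0, a, b, c, 0.0, -c, -b, -a])      # odd extension
for k, Xk in enumerate(dst1(x)):
    assert abs(Y[k + 1].real) < 1e-12          # purely imaginary
    assert abs(Xk - (-Y[k + 1].imag / 2.0)) < 1e-12
print([round(v, 6) for v in dst1(x)])
```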
Xk=(−1)k2xN−1+∑n=0N−2xnsin⁡[πN(n+1)(k+12)]k=0,…,N−1{\displaystyle X_{k}={\frac {(-1)^{k}}{2}}x_{N-1}+\sum _{n=0}^{N-2}x_{n}\sin \left[{\frac {\pi }{N}}(n+1)\left(k+{\frac {1}{2}}\right)\right]\quad \quad k=0,\dots ,N-1} Some authors further multiply thexN− 1term by√2(see above for the corresponding change in DST-II). This makes the DST-III matrixorthogonal(up to a scale factor), but breaks the direct correspondence with a real-odd DFT of half-shifted output. The DST-III implies the boundary conditions:xnis odd aroundn= −1 and even aroundn=N− 1;Xkis odd aroundk= −1/2 and odd aroundk=N− 1/2. Xk=∑n=0N−1xnsin⁡[πN(n+12)(k+12)]k=0,…,N−1{\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\sin \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)\left(k+{\frac {1}{2}}\right)\right]\quad \quad k=0,\dots ,N-1} The DST-IV matrix isorthogonal(up to a scale factor). The DST-IV implies the boundary conditions:xnis odd aroundn= −1/2 and even aroundn=N− 1/2; similarly forXk. DST types I–IV are equivalent to real-odd DFTs of even order. In principle, there are actually four additional types of discrete sine transform (Martucci, 1994), corresponding to real-odd DFTs of logically odd order, which have factors ofN+1/2 in the denominators of the sine arguments. However, these variants seem to be rarely used in practice. The inverse of DST-I is DST-I multiplied by 2/(N+ 1). The inverse of DST-IV is DST-IV multiplied by 2/N. The inverse of DST-II is DST-III multiplied by 2/N(and vice versa). As for theDFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by2/N{\textstyle {\sqrt {2/N}}}so that the inverse does not require any additional multiplicative factor. 
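The inversion rules quoted above can be checked numerically; the following sketch uses the unnormalized DST-I, DST-II and DST-III definitions given in this section:

```python
# Verify: DST-I is its own inverse up to 2/(N+1), and DST-III inverts
# DST-II up to 2/N, using the unnormalized definitions from the text.
import math

def dst1(x):
    N = len(x)
    return [sum(x[n] * math.sin(math.pi * (n + 1) * (k + 1) / (N + 1))
                for n in range(N)) for k in range(N)]

def dst2(x):
    N = len(x)
    return [sum(x[n] * math.sin(math.pi / N * (n + 0.5) * (k + 1))
                for n in range(N)) for k in range(N)]

def dst3(x):
    N = len(x)
    return [(-1) ** k / 2.0 * x[N - 1]
            + sum(x[n] * math.sin(math.pi / N * (n + 1) * (k + 0.5))
                  for n in range(N - 1)) for k in range(N)]

x = [0.5, -1.0, 2.0, 4.0, -3.0]
N = len(x)

rec1 = [2.0 / (N + 1) * v for v in dst1(dst1(x))]
assert all(abs(r - v) < 1e-9 for r, v in zip(rec1, x))

rec2 = [2.0 / N * v for v in dst3(dst2(x))]
assert all(abs(r - v) < 1e-9 for r, v in zip(rec2, x))
```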
Although the direct application of these formulas would require O(N2) operations, it is possible to compute the same thing with only O(NlogN) complexity by factorizing the computation similar to thefast Fourier transform(FFT). (One can also compute DSTs via FFTs combined with O(N) pre- and post-processing steps.) A DST-III or DST-IV can be computed from a DCT-III or DCT-IV (seediscrete cosine transform), respectively, by reversing the order of the inputs and flipping the sign of every other output, and vice versa for DST-II from DCT-II. In this way it follows that types II–IV of the DST require exactly the same number of arithmetic operations (additions and multiplications) as the corresponding DCT types. A family of transforms composed of sine andsine hyperbolicfunctions exists; these transforms are made based on thenaturalvibrationof thin square plates with differentboundary conditions.[5]
https://en.wikipedia.org/wiki/Discrete_sine_transform
In mathematics, theDixon elliptic functionssm and cm are twoelliptic functions(doubly periodicmeromorphic functionson thecomplex plane) that map from eachregular hexagonin ahexagonal tilingto the whole complex plane. Because these functions satisfy the identitycm3⁡z+sm3⁡z=1{\displaystyle \operatorname {cm} ^{3}z+\operatorname {sm} ^{3}z=1}, asreal functionsthey parametrize the cubicFermat curvex3+y3=1{\displaystyle x^{3}+y^{3}=1}, just as thetrigonometric functionssine and cosine parametrize theunit circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}. They were named sm and cm byAlfred Dixonin 1890, by analogy to the trigonometric functions sine and cosine and theJacobi elliptic functionssn and cn;Göran Dillnerdescribed them earlier in 1873.[1] The functions sm and cm can be defined as the solutions to theinitial value problem:[2] Or as the inverse of theSchwarz–Christoffel mappingfrom the complex unit disk to an equilateral triangle, theAbelian integral:[3] which can also be expressed using thehypergeometric function:[4] Both sm and cm have a period along the real axis ofπ3=B(13,13)=32πΓ3(13)≈5.29991625{\displaystyle \pi _{3}=\mathrm {B} {\bigl (}{\tfrac {1}{3}},{\tfrac {1}{3}}{\bigr )}={\tfrac {\sqrt {3}}{2\pi }}\Gamma ^{3}{\bigl (}{\tfrac {1}{3}}{\bigr )}\approx 5.29991625}withB{\displaystyle \mathrm {B} }thebeta functionandΓ{\displaystyle \Gamma }thegamma function:[5] They satisfy the identitycm3⁡z+sm3⁡z=1{\displaystyle \operatorname {cm} ^{3}z+\operatorname {sm} ^{3}z=1}. 
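The defining initial value problem and the cubic identity can be explored numerically; the following sketch integrates sm′ = cm², cm′ = −sm² with sm(0) = 0, cm(0) = 1 by classical Runge–Kutta (the step size and integration range are arbitrary choices, stopping well before the first real pole), and also checks the stated value of the period π₃:

```python
# Integrate the Dixon ODE system and check cm^3 + sm^3 = 1 along the way;
# also check pi_3 = sqrt(3)/(2*pi) * Gamma(1/3)^3 ~ 5.29991625.
import math

pi3 = math.sqrt(3) / (2 * math.pi) * math.gamma(1 / 3) ** 3
assert abs(pi3 - 5.29991625) < 1e-7

def rk4_step(s, c, h):
    def f(s, c):
        return c * c, -s * s          # (sm', cm')
    k1 = f(s, c)
    k2 = f(s + h / 2 * k1[0], c + h / 2 * k1[1])
    k3 = f(s + h / 2 * k2[0], c + h / 2 * k2[1])
    k4 = f(s + h * k3[0], c + h * k3[1])
    s += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    c += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return s, c

s, c = 0.0, 1.0                       # sm(0) = 0, cm(0) = 1
h = 1e-3
for _ in range(2000):                 # integrate to t = 2, before the pole
    s, c = rk4_step(s, c, h)
    assert abs(c ** 3 + s ** 3 - 1.0) < 1e-9
```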
The parametric functiont↦(cm⁡t,sm⁡t),{\displaystyle t\mapsto (\operatorname {cm} t,\,\operatorname {sm} t),}t∈[−13π3,23π3]{\displaystyle t\in {\bigl [}{-{\tfrac {1}{3}}}\pi _{3},{\tfrac {2}{3}}\pi _{3}{\bigr ]}}parametrizes the cubicFermat curvex3+y3=1,{\displaystyle x^{3}+y^{3}=1,}with12t{\displaystyle {\tfrac {1}{2}}t}representing the signed area lying between the segment from the origin to(1,0){\displaystyle (1,\,0)}, the segment from the origin to(cm⁡t,sm⁡t){\displaystyle (\operatorname {cm} t,\,\operatorname {sm} t)}, and the Fermat curve, analogous to the relationship between the argument of the trigonometric functions and the area of a sector of the unit circle.[6]To see why, applyGreen's theorem: Notice that the area between thex+y=0{\displaystyle x+y=0}andx3+y3=1{\displaystyle x^{3}+y^{3}=1}can be broken into three pieces, each of area16π3{\displaystyle {\tfrac {1}{6}}\pi _{3}}: The functionsm⁡z{\displaystyle \operatorname {sm} z}haszerosat the complex-valued pointsz=13π3i(a+bω){\displaystyle z={\tfrac {1}{\sqrt {3}}}\pi _{3}i(a+b\omega )}for any integersa{\displaystyle a}andb{\displaystyle b}, whereω{\displaystyle \omega }is a cuberoot of unity,ω=exp⁡23iπ=−12+32i{\displaystyle \omega =\exp {\tfrac {2}{3}}i\pi =-{\tfrac {1}{2}}+{\tfrac {\sqrt {3}}{2}}i}(that is,a+bω{\displaystyle a+b\omega }is anEisenstein integer). The functioncm⁡z{\displaystyle \operatorname {cm} z}has zeros at the complex-valued pointsz=13π3+13π3i(a+bω){\displaystyle z={\tfrac {1}{3}}\pi _{3}+{\tfrac {1}{\sqrt {3}}}\pi _{3}i(a+b\omega )}. Both functions havepolesat the complex-valued pointsz=−13π3+13π3i(a+bω){\displaystyle z=-{\tfrac {1}{3}}\pi _{3}+{\tfrac {1}{\sqrt {3}}}\pi _{3}i(a+b\omega )}. On the real line,sm⁡x=0↔x∈π3Z{\displaystyle \operatorname {sm} x=0\leftrightarrow x\in \pi _{3}\mathbb {Z} }, which is analogous tosin⁡x=0↔x∈πZ{\displaystyle \sin x=0\leftrightarrow x\in \pi \mathbb {Z} }. 
Bothcmandsmcommute with complex conjugation, Analogous to the parity of trigonometric functions (cosine aneven functionand sine anodd function), the Dixon functioncmis invariant under13{\textstyle {\tfrac {1}{3}}}turn rotations of the complex plane, and13{\textstyle {\tfrac {1}{3}}}turn rotations of the domain ofsmcause13{\displaystyle {\tfrac {1}{3}}}turn rotations of the codomain: Each Dixon elliptic function is invariant under translations by the Eisenstein integersa+bω{\displaystyle a+b\omega }scaled byπ3,{\displaystyle \pi _{3},} Negation of each ofcmandsmis equivalent to a13π3{\displaystyle {\tfrac {1}{3}}\pi _{3}}translation of the other, Forn∈{0,1,2},{\displaystyle n\in \mathbb {\{} 0,1,2\},}translations by13π3ω{\displaystyle {\tfrac {1}{3}}\pi _{3}\omega }give The Dixon elliptic functions satisfy the argument sum and difference identities:[8] These formulas can be used to compute the complex-valued functions in real components:[citation needed] Argument duplication and triplication identities can be derived from the sum identity:[9] Thecm{\displaystyle \operatorname {cm} }function satisfies the identitiescm⁡29π3=−cm⁡19π3cm⁡49π3,cm⁡14π3=cl⁡13ϖ,{\displaystyle {\begin{aligned}\operatorname {cm} {\tfrac {2}{9}}\pi _{3}&=-\operatorname {cm} {\tfrac {1}{9}}\pi _{3}\,\operatorname {cm} {\tfrac {4}{9}}\pi _{3},\\[5mu]\operatorname {cm} {\tfrac {1}{4}}\pi _{3}&=\operatorname {cl} {\tfrac {1}{3}}\varpi ,\end{aligned}}} wherecl{\displaystyle \operatorname {cl} }islemniscate cosineandϖ{\displaystyle \varpi }isLemniscate constant.[citation needed] Thecmandsmfunctions can be approximated for|z|<13π3{\displaystyle |z|<{\tfrac {1}{3}}\pi _{3}}by theTaylor series whose coefficients satisfy the recurrencec0=s0=1,{\displaystyle c_{0}=s_{0}=1,}[10] These recurrences result in:[11] TheequianharmonicWeierstrass elliptic function℘(z)=℘(z;0,127),{\displaystyle \wp (z)=\wp {\bigl (}z;0,{\tfrac {1}{27}}{\bigr )},}withlatticeΛ=π3Z⊕π3ωZ{\displaystyle \Lambda =\pi _{3}\mathbb {Z} 
\oplus \pi _{3}\omega \mathbb {Z} }a scaling of the Eisenstein integers, can be defined as:[12] The function℘(z){\displaystyle \wp (z)}solves the differential equation: We can also write it as the inverse of the integral: In terms of℘(z){\displaystyle \wp (z)}, the Dixon elliptic functions can be written:[13] Likewise, the Weierstrass elliptic function℘(z)=℘(z;0,127){\displaystyle \wp (z)=\wp {\bigl (}z;0,{\tfrac {1}{27}}{\bigr )}}can be written in terms of Dixon elliptic functions: The Dixon elliptic functions can also be expressed usingJacobi elliptic functions, which was first observed byCayley.[14]Letk=e5iπ/6{\displaystyle k=e^{5i\pi /6}},θ=314e5iπ/12{\displaystyle \theta =3^{\frac {1}{4}}e^{5i\pi /12}},s=sn⁡(u,k){\displaystyle s=\operatorname {sn} (u,k)},c=cn⁡(u,k){\displaystyle c=\operatorname {cn} (u,k)}, andd=dn⁡(u,k){\displaystyle d=\operatorname {dn} (u,k)}. Then, let Finally, the Dixon elliptic functions are as so: Several definitions of generalized trigonometric functions include the usual trigonometric sine and cosine as ann=2{\displaystyle n=2}case, and the functions sm and cm as ann=3{\displaystyle n=3}case.[15] For example, definingπn=B(1n,1n){\displaystyle \pi _{n}=\mathrm {B} {\bigl (}{\tfrac {1}{n}},{\tfrac {1}{n}}{\bigr )}}andsinn⁡z,cosn⁡z{\displaystyle \sin _{n}z,\,\cos _{n}z}the inverses of an integral: The area in the positive quadrant under the curvexn+yn=1{\displaystyle x^{n}+y^{n}=1}is The quarticn=4{\displaystyle n=4}case results in a square lattice in the complex plane, related to thelemniscate elliptic functions. The Dixon elliptic functions areconformal mapsfrom an equilateral triangle to a disk, and are therefore helpful for constructing polyhedralconformal map projectionsinvolving equilateral triangles, for example projecting the sphere onto a triangle, hexagon,tetrahedron, octahedron, or icosahedron.[16]
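The Taylor expansions mentioned above were not reproduced here, but their leading terms can be sanity-checked against the curve identity cm³z + sm³z = 1 using exact rational arithmetic. The coefficients used below (sm z = z − z⁴/6 + 2z⁷/63 − ⋯ and cm z = 1 − z³/3 + z⁶/18 − ⋯) are quoted from standard references, not from the text above:

```python
from fractions import Fraction as F

N = 9  # work modulo z^9

def mul(p, q):
    """Multiply coefficient lists, truncating at degree N - 1."""
    r = [F(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def cube(p):
    return mul(mul(p, p), p)

# sm z = z - z^4/6 + 2 z^7/63 - ...,  cm z = 1 - z^3/3 + z^6/18 - ...
sm = [F(0)] * N
sm[1], sm[4], sm[7] = F(1), F(-1, 6), F(2, 63)
cm = [F(0)] * N
cm[0], cm[3], cm[6] = F(1), F(-1, 3), F(1, 18)

# cm^3 + sm^3 should be 1 + O(z^9):
total = [a + b for a, b in zip(cube(sm), cube(cm))]
assert total[0] == 1 and all(t == 0 for t in total[1:])
```

Through degree 8 every coefficient cancels except the constant term, consistent with the points (cm t, sm t) lying on x³ + y³ = 1.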
https://en.wikipedia.org/wiki/Dixon_elliptic_functions
Euler's formula, named afterLeonhard Euler, is amathematical formulaincomplex analysisthat establishes the fundamental relationship between thetrigonometric functionsand the complexexponential function. Euler's formula states that, for anyreal numberx, one haseix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,}whereeis thebase of the natural logarithm,iis theimaginary unit, andcosandsinare thetrigonometric functionscosineandsinerespectively. This complex exponential function is sometimes denotedcisx("cosine plusisine"). The formula is still valid ifxis acomplex number, and is also calledEuler's formulain this more general case.[1] Euler's formula is ubiquitous in mathematics, physics, chemistry, and engineering. The physicistRichard Feynmancalled the equation "our jewel" and "the most remarkable formula in mathematics".[2] Whenx=π, Euler's formula may be rewritten aseiπ+ 1 = 0oreiπ= −1, which is known asEuler's identity. In 1714, the English mathematicianRoger Cotespresented a geometrical argument that can be interpreted (after correcting a misplaced factor of−1{\displaystyle {\sqrt {-1}}}) as:[3][4][5]ix=ln⁡(cos⁡x+isin⁡x).{\displaystyle ix=\ln(\cos x+i\sin x).}Exponentiating this equation yields Euler's formula. Note that the logarithmic statement is not universally correct for complex numbers, since a complex logarithm can have infinitely many values, differing by multiples of2πi. 
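A direct numerical check of the formula, and of Euler's identity as its special case, using only the standard library (sample points ours):

```python
import cmath, math

# e^{ix} = cos x + i sin x at a few sample points:
for x in (0.0, 1.0, math.pi / 3, -2.5, math.pi):
    assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12

# Euler's identity is the special case x = pi:
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```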
Around 1740Leonhard Eulerturned his attention to the exponential function and derived the equation named after him by comparing the series expansions of the exponential and trigonometric expressions.[6][4]The formula was first published in 1748 in his foundational workIntroductio in analysin infinitorum.[7] Johann Bernoullihad found that[8]11+x2=12(11−ix+11+ix).{\displaystyle {\frac {1}{1+x^{2}}}={\frac {1}{2}}\left({\frac {1}{1-ix}}+{\frac {1}{1+ix}}\right).} And since∫dx1+ax=1aln⁡(1+ax)+C,{\displaystyle \int {\frac {dx}{1+ax}}={\frac {1}{a}}\ln(1+ax)+C,}the above equation tells us something aboutcomplex logarithmsby relating natural logarithms to imaginary (complex) numbers. Bernoulli, however, did not evaluate the integral. Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understandcomplex logarithms. Euler also suggested that complex logarithms can have infinitely many values. The view of complex numbers as points in thecomplex planewas described about 50 years later byCaspar Wessel. The exponential functionexfor real values ofxmay be defined in a few different equivalent ways (seeCharacterizations of the exponential function). Several of these methods may be directly extended to give definitions ofezfor complex values ofzsimply by substitutingzin place ofxand using the complex algebraic operations. In particular, we may use any of the three following definitions, which are equivalent. From a more advanced perspective, each of these definitions may be interpreted as giving theuniqueanalytic continuationofexto the complex plane. 
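Bernoulli's partial-fraction identity quoted above is easy to verify numerically; for real x the two complex terms are conjugates, so their sum is real:

```python
# 1/(1 + x^2) = (1/2) * (1/(1 - ix) + 1/(1 + ix)), checked at sample points:
for x in (0.0, 0.5, 1.0, 3.25, -2.0):
    lhs = 1 / (1 + x * x)
    rhs = 0.5 * (1 / (1 - 1j * x) + 1 / (1 + 1j * x))
    assert abs(lhs - rhs) < 1e-12
```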
The exponential functionf(z)=ez{\displaystyle f(z)=e^{z}}is the uniquedifferentiable functionof acomplex variablefor which the derivative equals the functiondfdz=f{\displaystyle {\frac {df}{dz}}=f}andf(0)=1.{\displaystyle f(0)=1.} For complexzez=1+z1!+z22!+z33!+⋯=∑n=0∞znn!.{\displaystyle e^{z}=1+{\frac {z}{1!}}+{\frac {z^{2}}{2!}}+{\frac {z^{3}}{3!}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} Using theratio test, it is possible to show that thispower serieshas an infiniteradius of convergenceand so definesezfor all complexz. For complexzez=limn→∞(1+zn)n.{\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}.} Here,nis restricted topositive integers, so there is no question about what the power with exponentnmeans. Various proofs of the formula are possible. This proof shows that the quotient of the trigonometric and exponential expressions is the constant function one, so they must be equal (the exponential function is never zero,[9]so this is permitted).[10] Consider the functionf(θ)f(θ)=cos⁡θ+isin⁡θeiθ=e−iθ(cos⁡θ+isin⁡θ){\displaystyle f(\theta )={\frac {\cos \theta +i\sin \theta }{e^{i\theta }}}=e^{-i\theta }\left(\cos \theta +i\sin \theta \right)}for realθ. Differentiating gives by theproduct rulef′(θ)=e−iθ(icos⁡θ−sin⁡θ)−ie−iθ(cos⁡θ+isin⁡θ)=0{\displaystyle f'(\theta )=e^{-i\theta }\left(i\cos \theta -\sin \theta \right)-ie^{-i\theta }\left(\cos \theta +i\sin \theta \right)=0}Thus,f(θ)is a constant. 
Sincef(0) = 1, thenf(θ) = 1for all realθ, and thuseiθ=cos⁡θ+isin⁡θ.{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta .} Here is a proof of Euler's formula usingpower-series expansions, as well as basic facts about the powers ofi:[11]i0=1,i1=i,i2=−1,i3=−i,i4=1,i5=i,i6=−1,i7=−i⋮⋮⋮⋮{\displaystyle {\begin{aligned}i^{0}&=1,&i^{1}&=i,&i^{2}&=-1,&i^{3}&=-i,\\i^{4}&=1,&i^{5}&=i,&i^{6}&=-1,&i^{7}&=-i\\&\vdots &&\vdots &&\vdots &&\vdots \end{aligned}}} Using now the power-series definition from above, we see that for real values ofxeix=1+ix+(ix)22!+(ix)33!+(ix)44!+(ix)55!+(ix)66!+(ix)77!+(ix)88!+⋯=1+ix−x22!−ix33!+x44!+ix55!−x66!−ix77!+x88!+⋯=(1−x22!+x44!−x66!+x88!−⋯)+i(x−x33!+x55!−x77!+⋯)=cos⁡x+isin⁡x,{\displaystyle {\begin{aligned}e^{ix}&=1+ix+{\frac {(ix)^{2}}{2!}}+{\frac {(ix)^{3}}{3!}}+{\frac {(ix)^{4}}{4!}}+{\frac {(ix)^{5}}{5!}}+{\frac {(ix)^{6}}{6!}}+{\frac {(ix)^{7}}{7!}}+{\frac {(ix)^{8}}{8!}}+\cdots \\[8pt]&=1+ix-{\frac {x^{2}}{2!}}-{\frac {ix^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\frac {ix^{5}}{5!}}-{\frac {x^{6}}{6!}}-{\frac {ix^{7}}{7!}}+{\frac {x^{8}}{8!}}+\cdots \\[8pt]&=\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+{\frac {x^{8}}{8!}}-\cdots \right)+i\left(x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \right)\\[8pt]&=\cos x+i\sin x,\end{aligned}}}where in the last step we recognize the two terms are theMaclaurin seriesforcosxandsinx.The rearrangement of terms is justifiedbecause each series isabsolutely convergent. Another proof[12]is based on the fact that all complex numbers can be expressed inpolar coordinates. Therefore,for somerandθdepending onx,eix=r(cos⁡θ+isin⁡θ).{\displaystyle e^{ix}=r\left(\cos \theta +i\sin \theta \right).}No assumptions are being made aboutrandθ; they will be determined in the course of the proof. From any of the definitions of the exponential function it can be shown that the derivative ofeixisieix. 
Therefore, differentiating both sides givesieix=(cos⁡θ+isin⁡θ)drdx+r(−sin⁡θ+icos⁡θ)dθdx.{\displaystyle ie^{ix}=\left(\cos \theta +i\sin \theta \right){\frac {dr}{dx}}+r\left(-\sin \theta +i\cos \theta \right){\frac {d\theta }{dx}}.}Substitutingr(cosθ+isinθ)foreixand equating real and imaginary parts in this formula gives⁠dr/dx⁠= 0and⁠dθ/dx⁠= 1. Thus,ris a constant, andθisx+Cfor some constantC. The initial valuesr(0) = 1andθ(0) = 0come frome0i= 1, givingr= 1andθ=x. This proves the formulaeiθ=1(cos⁡θ+isin⁡θ)=cos⁡θ+isin⁡θ.{\displaystyle e^{i\theta }=1(\cos \theta +i\sin \theta )=\cos \theta +i\sin \theta .} This formula can be interpreted as saying that the functioneiφis aunit complex number, i.e., it traces out theunit circlein thecomplex planeasφranges through the real numbers. Hereφis theanglethat a line connecting the origin with a point on the unit circle makes with thepositive real axis, measured counterclockwise and inradians. The original proof is based on theTaylor seriesexpansions of theexponential functionez(wherezis a complex number) and ofsinxandcosxfor real numbersx(see above). In fact, the same proof shows that Euler's formula is even valid for allcomplexnumbersx. A point in thecomplex planecan be represented by a complex number written incartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates andpolar coordinates. The polar form simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex numberz=x+iy, and its complex conjugate,z=x−iy, can be written asz=x+iy=|z|(cos⁡φ+isin⁡φ)=reiφ,z¯=x−iy=|z|(cos⁡φ−isin⁡φ)=re−iφ,{\displaystyle {\begin{aligned}z&=x+iy=|z|(\cos \varphi +i\sin \varphi )=re^{i\varphi },\\{\bar {z}}&=x-iy=|z|(\cos \varphi -i\sin \varphi )=re^{-i\varphi },\end{aligned}}}where φis theargumentofz, i.e., the angle between thexaxis and the vectorzmeasured counterclockwise inradians, which is definedup toaddition of2π. 
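The cartesian-to-polar conversion described above corresponds directly to `cmath.polar` in Python's standard library; a small check (sample value ours):

```python
import cmath, math

z = 3 - 4j
r, phi = abs(z), math.atan2(z.imag, z.real)   # r = |z|, phi = arg z

# z = r e^{i phi}:
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12

# cmath.polar performs the same conversion:
pr, pphi = cmath.polar(z)
assert abs(pr - r) < 1e-12 and abs(pphi - phi) < 1e-12
```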
Many texts writeφ= tan−1⁠y/x⁠instead ofφ= atan2(y,x), but the first equation needs adjustment whenx≤ 0. This is because for any realxandy, not both zero, the angles of the vectors(x,y)and(−x, −y)differ byπradians, but have the identical value oftanφ=⁠y/x⁠. Now, taking this derived formula, we can use Euler's formula to define thelogarithmof a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation):a=eln⁡a,{\displaystyle a=e^{\ln a},}and thateaeb=ea+b,{\displaystyle e^{a}e^{b}=e^{a+b},}both valid for any complex numbersaandb. Therefore, one can write:z=|z|eiφ=eln⁡|z|eiφ=eln⁡|z|+iφ{\displaystyle z=\left|z\right|e^{i\varphi }=e^{\ln \left|z\right|}e^{i\varphi }=e^{\ln \left|z\right|+i\varphi }}for anyz≠ 0. Taking the logarithm of both sides shows thatln⁡z=ln⁡|z|+iφ,{\displaystyle \ln z=\ln \left|z\right|+i\varphi ,}and in fact, this can be used as the definition for thecomplex logarithm. The logarithm of a complex number is thus amulti-valued function, becauseφis multi-valued. Finally, the other exponential law(ea)k=eak,{\displaystyle \left(e^{a}\right)^{k}=e^{ak},}which can be seen to hold for all integersk, together with Euler's formula, implies severaltrigonometric identities, as well asde Moivre's formula. Euler's formula, the definitions of the trigonometric functions and the standard identities for exponentials are sufficient to easily derive most trigonometric identities. 
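Both the atan2 point and the multi-valuedness of the complex logarithm can be seen in a few lines (sample value ours):

```python
import cmath, math

z = -1 + 1j   # Re z < 0, so arctan(y/x) alone would give the wrong angle
principal = math.log(abs(z)) + 1j * math.atan2(z.imag, z.real)
assert abs(cmath.log(z) - principal) < 1e-12

# Every branch ln z + 2*pi*i*k exponentiates back to z:
for k in (-2, -1, 0, 1, 2):
    assert abs(cmath.exp(principal + 2j * math.pi * k) - z) < 1e-9
```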
It provides a powerful connection betweenanalysisandtrigonometry, and provides an interpretation of the sine and cosine functions asweighted sumsof the exponential function:cos⁡x=Re⁡(eix)=eix+e−ix2,sin⁡x=Im⁡(eix)=eix−e−ix2i.{\displaystyle {\begin{aligned}\cos x&=\operatorname {Re} \left(e^{ix}\right)={\frac {e^{ix}+e^{-ix}}{2}},\\\sin x&=\operatorname {Im} \left(e^{ix}\right)={\frac {e^{ix}-e^{-ix}}{2i}}.\end{aligned}}} The two equations above can be derived by adding or subtracting Euler's formulas:eix=cos⁡x+isin⁡x,e−ix=cos⁡(−x)+isin⁡(−x)=cos⁡x−isin⁡x{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x,\\e^{-ix}&=\cos(-x)+i\sin(-x)=\cos x-i\sin x\end{aligned}}}and solving for either cosine or sine. These formulas can even serve as the definition of the trigonometric functions for complex argumentsx. For example, lettingx=iy, we have:cos⁡iy=e−y+ey2=cosh⁡y,sin⁡iy=e−y−ey2i=ey−e−y2i=isinh⁡y.{\displaystyle {\begin{aligned}\cos iy&={\frac {e^{-y}+e^{y}}{2}}=\cosh y,\\\sin iy&={\frac {e^{-y}-e^{y}}{2i}}={\frac {e^{y}-e^{-y}}{2}}i=i\sinh y.\end{aligned}}} In additioncosh⁡ix=eix+e−ix2=cos⁡x,sinh⁡ix=eix−e−ix2=isin⁡x.{\displaystyle {\begin{aligned}\cosh ix&={\frac {e^{ix}+e^{-ix}}{2}}=\cos x,\\\sinh ix&={\frac {e^{ix}-e^{-ix}}{2}}=i\sin x.\end{aligned}}} Complex exponentials can simplify trigonometry, because they are mathematically easier to manipulate than their sine and cosine components. One technique is simply to convert sines and cosines into equivalent expressions in terms of exponentials sometimes calledcomplex sinusoids.[13]After the manipulations, the simplified result is still real-valued. 
For example: cos⁡xcos⁡y=eix+e−ix2⋅eiy+e−iy2=12⋅ei(x+y)+ei(x−y)+ei(−x+y)+ei(−x−y)2=12(ei(x+y)+e−i(x+y)2+ei(x−y)+e−i(x−y)2)=12(cos⁡(x+y)+cos⁡(x−y)).{\displaystyle {\begin{aligned}\cos x\cos y&={\frac {e^{ix}+e^{-ix}}{2}}\cdot {\frac {e^{iy}+e^{-iy}}{2}}\\&={\frac {1}{2}}\cdot {\frac {e^{i(x+y)}+e^{i(x-y)}+e^{i(-x+y)}+e^{i(-x-y)}}{2}}\\&={\frac {1}{2}}{\bigg (}{\frac {e^{i(x+y)}+e^{-i(x+y)}}{2}}+{\frac {e^{i(x-y)}+e^{-i(x-y)}}{2}}{\bigg )}\\&={\frac {1}{2}}\left(\cos(x+y)+\cos(x-y)\right).\end{aligned}}} Another technique is to represent sines and cosines in terms of thereal partof a complex expression and perform the manipulations on the complex expression. For example:cos⁡nx=Re⁡(einx)=Re⁡(ei(n−1)x⋅eix)=Re⁡(ei(n−1)x⋅(eix+e−ix⏟2cos⁡x−e−ix))=Re⁡(ei(n−1)x⋅2cos⁡x−ei(n−2)x)=cos⁡[(n−1)x]⋅[2cos⁡x]−cos⁡[(n−2)x].{\displaystyle {\begin{aligned}\cos nx&=\operatorname {Re} \left(e^{inx}\right)\\&=\operatorname {Re} \left(e^{i(n-1)x}\cdot e^{ix}\right)\\&=\operatorname {Re} {\Big (}e^{i(n-1)x}\cdot {\big (}\underbrace {e^{ix}+e^{-ix}} _{2\cos x}-e^{-ix}{\big )}{\Big )}\\&=\operatorname {Re} \left(e^{i(n-1)x}\cdot 2\cos x-e^{i(n-2)x}\right)\\&=\cos[(n-1)x]\cdot [2\cos x]-\cos[(n-2)x].\end{aligned}}} This formula is used for recursive generation ofcosnxfor integer values ofnand arbitraryx(in radians). Consideringcosxa parameter in equation above yields recursive formula forChebyshev polynomialsof the first kind. In the language oftopology, Euler's formula states that the imaginary exponential functiont↦eit{\displaystyle t\mapsto e^{it}}is a (surjective)morphismoftopological groupsfrom the real lineR{\displaystyle \mathbb {R} }to the unit circleS1{\displaystyle \mathbb {S} ^{1}}. In fact, this exhibitsR{\displaystyle \mathbb {R} }as acovering spaceofS1{\displaystyle \mathbb {S} ^{1}}. Similarly,Euler's identitysays that thekernelof this map isτZ{\displaystyle \tau \mathbb {Z} }, whereτ=2π{\displaystyle \tau =2\pi }. 
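Both techniques above translate directly into code: the first asserts verify the exponential forms of cos and sin and the product-to-sum identity, and the helper `cos_n` (our name) implements the recursion for cos nx:

```python
import cmath, math

x, y = 0.85, 0.4
# cos and sin as weighted sums of complex exponentials:
assert abs((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2 - math.cos(x)) < 1e-12
assert abs((cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j - math.sin(x)) < 1e-12
# ... and the product-to-sum identity derived above:
assert abs(math.cos(x) * math.cos(y)
           - 0.5 * (math.cos(x + y) + math.cos(x - y))) < 1e-12

def cos_n(n, x):
    """cos(n x) via the recursion cos nx = 2 cos x * cos((n-1)x) - cos((n-2)x)."""
    prev, cur = 1.0, math.cos(x)   # cos(0*x), cos(1*x)
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, 2 * math.cos(x) * cur - prev
    return cur

for n in range(8):
    assert abs(cos_n(n, 0.37) - math.cos(n * 0.37)) < 1e-12
```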
These observations may be combined and summarized in thecommutative diagrambelow: Indifferential equations, the functioneixis often used to simplify solutions, even if the final answer is a real function involving sine and cosine. The reason for this is that the exponential function is theeigenfunctionof the operation ofdifferentiation. Inelectrical engineering,signal processing, and similar fields, signals that vary periodically over time are often described as a combination of sinusoidal functions (seeFourier analysis), and these are more conveniently expressed as the sum of exponential functions withimaginaryexponents, using Euler's formula. Also,phasor analysisof circuits can include Euler's formula to represent the impedance of a capacitor or an inductor. In thefour-dimensional spaceofquaternions, there is asphereofimaginary units. For any pointron this sphere, andxa real number, Euler's formula applies:exp⁡xr=cos⁡x+rsin⁡x,{\displaystyle \exp xr=\cos x+r\sin x,}and the element is called aversorin quaternions. The set of all versors forms a3-spherein the 4-space. Thespecial casesthat evaluate to units illustrate rotation around the complex unit circle: The special case atx=τ(whereτ= 2π, oneturn) yieldseiτ= 1 + 0. This is also argued to link five fundamental constants with three basic arithmetic operations, but, unlike Euler's identity, without rearranging theaddendsfrom the general case:eiτ=cos⁡τ+isin⁡τ=1+0{\displaystyle {\begin{aligned}e^{i\tau }&=\cos \tau +i\sin \tau \\&=1+0\end{aligned}}}An interpretation of the simplified formeiτ= 1is that rotating by a full turn is anidentity function.[14]
https://en.wikipedia.org/wiki/Euler%27s_formula
Ordinarytrigonometrystudiestrianglesin theEuclidean plane⁠R2{\displaystyle \mathbb {R} ^{2}}⁠. There are a number of ways of defining the ordinaryEuclidean geometrictrigonometric functionsonreal numbers, for exampleright-angled triangle definitions,unit circle definitions,series definitions,definitions via differential equations, anddefinitions using functional equations.Generalizations of trigonometric functionsare often developed by starting with one of the above methods and adapting it to a situation other than the real numbers of Euclidean geometry. Generally, trigonometry can be the study of triples of points in any kind ofgeometryorspace. A triangle is thepolygonwith the smallest number of vertices, so one direction to generalize is to study higher-dimensional analogs ofanglesand polygons:solid anglesandpolytopessuch astetrahedronsandn-simplices.
https://en.wikipedia.org/wiki/Generalized_trigonometry
Inmathematics,hyperbolic functionsare analogues of the ordinarytrigonometric functions, but defined using thehyperbolarather than thecircle. Just as the points(cost, sint)form acircle with a unit radius, the points(cosht, sinht)form the right half of theunit hyperbola. Also, similarly to how the derivatives ofsin(t)andcos(t)arecos(t)and–sin(t)respectively, the derivatives ofsinh(t)andcosh(t)arecosh(t)andsinh(t)respectively. Hyperbolic functions are used to express theangle of parallelisminhyperbolic geometry. They are used to expressLorentz boostsashyperbolic rotationsinspecial relativity. They also occur in the solutions of many lineardifferential equations(such as the equation defining acatenary),cubic equations, andLaplace's equationinCartesian coordinates.Laplace's equationsare important in many areas ofphysics, includingelectromagnetic theory,heat transfer, andfluid dynamics. The basic hyperbolic functions are:[1] from which are derived:[4] corresponding to the derived trigonometric functions. Theinverse hyperbolic functionsare: The hyperbolic functions take arealargumentcalled ahyperbolic angle. The magnitude of a hyperbolic angle is theareaof itshyperbolic sectortoxy= 1. The hyperbolic functions may be defined in terms of thelegs of a right trianglecovering this sector. Incomplex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine areentire functions. As a result, the other hyperbolic functions aremeromorphicin the whole complex plane. ByLindemann–Weierstrass theorem, the hyperbolic functions have atranscendental valuefor every non-zeroalgebraic valueof the argument.[12] The first known calculation of a hyperbolic trigonometry problem is attributed toGerardus Mercatorwhen issuing theMercator map projectioncirca 1566. 
It requires tabulating solutions to a transcendental equation involving hyperbolic functions.[13] The first to suggest a similarity between the sector of the circle and that of the hyperbola wasIsaac Newtonin his 1687Principia Mathematica.[14] Roger Cotessuggested modifying the trigonometric functions using theimaginary uniti=−1{\displaystyle i={\sqrt {-1}}}to obtain an oblatespheroidfrom a prolate one.[14] Hyperbolic functions were formally introduced in 1757 byVincenzo Riccati.[14][13][15]Riccati usedSc.andCc.(sinus/cosinus circulare) to refer to circular functions andSh.andCh.(sinus/cosinus hyperbolico) to refer to hyperbolic functions.[14]As early as 1759,Daviet de Foncenexshowed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extendedde Moivre's formulato hyperbolic functions.[15][14] During the 1760s,Johann Heinrich Lambertsystematized the use of these functions and provided exponential expressions in various publications.[14][15]Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today.[15][16] There are various equivalent ways to define the hyperbolic functions. In terms of theexponential function:[1][4] The hyperbolic functions may be defined as solutions ofdifferential equations: The hyperbolic sine and cosine are the solution(s,c)of the systemc′(x)=s(x),s′(x)=c(x),{\displaystyle {\begin{aligned}c'(x)&=s(x),\\s'(x)&=c(x),\\\end{aligned}}}with the initial conditionss(0)=0,c(0)=1.{\displaystyle s(0)=0,c(0)=1.}The initial conditions make the solution unique; without them any pair of functions(aex+be−x,aex−be−x){\displaystyle (ae^{x}+be^{-x},ae^{x}-be^{-x})}would be a solution. sinh(x)andcosh(x)are also the unique solution of the equationf″(x) =f(x), such thatf(0) = 1,f′(0) = 0for the hyperbolic cosine, andf(0) = 0,f′(0) = 1for the hyperbolic sine.
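The exponential definitions sinh x = (eˣ − e⁻ˣ)/2, cosh x = (eˣ + e⁻ˣ)/2, tanh x = sinh x / cosh x referred to above agree with the standard-library implementations (sample points ours):

```python
import math

for x in (-2.0, -0.3, 0.0, 0.7, 3.1):
    ex, emx = math.exp(x), math.exp(-x)
    assert abs((ex - emx) / 2 - math.sinh(x)) < 1e-12
    assert abs((ex + emx) / 2 - math.cosh(x)) < 1e-12
    assert abs((ex - emx) / (ex + emx) - math.tanh(x)) < 1e-12
```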
Hyperbolic functions may also be deduced fromtrigonometric functionswithcomplexarguments: whereiis theimaginary unitwithi2= −1. The above definitions are related to the exponential definitions viaEuler's formula(See§ Hyperbolic functions for complex numbersbelow). It can be shown that thearea under the curveof the hyperbolic cosine (over a finite interval) is always equal to thearc lengthcorresponding to that interval:[17]area=∫abcosh⁡xdx=∫ab1+(ddxcosh⁡x)2dx=arc length.{\displaystyle {\text{area}}=\int _{a}^{b}\cosh x\,dx=\int _{a}^{b}{\sqrt {1+\left({\frac {d}{dx}}\cosh x\right)^{2}}}\,dx={\text{arc length.}}} The hyperbolic tangent is the (unique) solution to thedifferential equationf′ = 1 −f2, withf(0) = 0.[18][19] The hyperbolic functions satisfy many identities, all of them similar in form to thetrigonometric identities. In fact,Osborn's rule[20]states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) forθ{\displaystyle \theta },2θ{\displaystyle 2\theta },3θ{\displaystyle 3\theta }orθ{\displaystyle \theta }andφ{\displaystyle \varphi }into a hyperbolic identity, by: Odd and even functions:sinh⁡(−x)=−sinh⁡xcosh⁡(−x)=cosh⁡x{\displaystyle {\begin{aligned}\sinh(-x)&=-\sinh x\\\cosh(-x)&=\cosh x\end{aligned}}} Hence:tanh⁡(−x)=−tanh⁡xcoth⁡(−x)=−coth⁡xsech⁡(−x)=sech⁡xcsch⁡(−x)=−csch⁡x{\displaystyle {\begin{aligned}\tanh(-x)&=-\tanh x\\\coth(-x)&=-\coth x\\\operatorname {sech} (-x)&=\operatorname {sech} x\\\operatorname {csch} (-x)&=-\operatorname {csch} x\end{aligned}}} Thus,coshxandsechxareeven functions; the others areodd functions. 
arsech⁡x=arcosh⁡(1x)arcsch⁡x=arsinh⁡(1x)arcoth⁡x=artanh⁡(1x){\displaystyle {\begin{aligned}\operatorname {arsech} x&=\operatorname {arcosh} \left({\frac {1}{x}}\right)\\\operatorname {arcsch} x&=\operatorname {arsinh} \left({\frac {1}{x}}\right)\\\operatorname {arcoth} x&=\operatorname {artanh} \left({\frac {1}{x}}\right)\end{aligned}}} Hyperbolic sine and cosine satisfy:cosh⁡x+sinh⁡x=excosh⁡x−sinh⁡x=e−x{\displaystyle {\begin{aligned}\cosh x+\sinh x&=e^{x}\\\cosh x-\sinh x&=e^{-x}\end{aligned}}} which are analogous toEuler's formula, and cosh2⁡x−sinh2⁡x=1{\displaystyle \cosh ^{2}x-\sinh ^{2}x=1} which is analogous to thePythagorean trigonometric identity. One also hassech2⁡x=1−tanh2⁡xcsch2⁡x=coth2⁡x−1{\displaystyle {\begin{aligned}\operatorname {sech} ^{2}x&=1-\tanh ^{2}x\\\operatorname {csch} ^{2}x&=\coth ^{2}x-1\end{aligned}}} for the other functions. sinh⁡(x+y)=sinh⁡xcosh⁡y+cosh⁡xsinh⁡ycosh⁡(x+y)=cosh⁡xcosh⁡y+sinh⁡xsinh⁡ytanh⁡(x+y)=tanh⁡x+tanh⁡y1+tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x+y)&=\sinh x\cosh y+\cosh x\sinh y\\\cosh(x+y)&=\cosh x\cosh y+\sinh x\sinh y\\\tanh(x+y)&={\frac {\tanh x+\tanh y}{1+\tanh x\tanh y}}\\\end{aligned}}}particularlycosh⁡(2x)=sinh2⁡x+cosh2⁡x=2sinh2⁡x+1=2cosh2⁡x−1sinh⁡(2x)=2sinh⁡xcosh⁡xtanh⁡(2x)=2tanh⁡x1+tanh2⁡x{\displaystyle {\begin{aligned}\cosh(2x)&=\sinh ^{2}{x}+\cosh ^{2}{x}=2\sinh ^{2}x+1=2\cosh ^{2}x-1\\\sinh(2x)&=2\sinh x\cosh x\\\tanh(2x)&={\frac {2\tanh x}{1+\tanh ^{2}x}}\\\end{aligned}}} Also:sinh⁡x+sinh⁡y=2sinh⁡(x+y2)cosh⁡(x−y2)cosh⁡x+cosh⁡y=2cosh⁡(x+y2)cosh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x+\sinh y&=2\sinh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\cosh x+\cosh y&=2\cosh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x−y)=sinh⁡xcosh⁡y−cosh⁡xsinh⁡ycosh⁡(x−y)=cosh⁡xcosh⁡y−sinh⁡xsinh⁡ytanh⁡(x−y)=tanh⁡x−tanh⁡y1−tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x-y)&=\sinh x\cosh y-\cosh x\sinh y\\\cosh(x-y)&=\cosh x\cosh y-\sinh x\sinh 
y\\\tanh(x-y)&={\frac {\tanh x-\tanh y}{1-\tanh x\tanh y}}\\\end{aligned}}} Also:[21]sinh⁡x−sinh⁡y=2cosh⁡(x+y2)sinh⁡(x−y2)cosh⁡x−cosh⁡y=2sinh⁡(x+y2)sinh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x-\sinh y&=2\cosh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\cosh x-\cosh y&=2\sinh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x2)=sinh⁡x2(cosh⁡x+1)=sgn⁡xcosh⁡x−12cosh⁡(x2)=cosh⁡x+12tanh⁡(x2)=sinh⁡xcosh⁡x+1=sgn⁡xcosh⁡x−1cosh⁡x+1=ex−1ex+1{\displaystyle {\begin{aligned}\sinh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\sqrt {2(\cosh x+1)}}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{2}}}\\[6px]\cosh \left({\frac {x}{2}}\right)&={\sqrt {\frac {\cosh x+1}{2}}}\\[6px]\tanh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\cosh x+1}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{\cosh x+1}}}={\frac {e^{x}-1}{e^{x}+1}}\end{aligned}}} wheresgnis thesign function. Ifx≠ 0, then[22] tanh⁡(x2)=cosh⁡x−1sinh⁡x=coth⁡x−csch⁡x{\displaystyle \tanh \left({\frac {x}{2}}\right)={\frac {\cosh x-1}{\sinh x}}=\coth x-\operatorname {csch} x} sinh2⁡x=12(cosh⁡2x−1)cosh2⁡x=12(cosh⁡2x+1){\displaystyle {\begin{aligned}\sinh ^{2}x&={\tfrac {1}{2}}(\cosh 2x-1)\\\cosh ^{2}x&={\tfrac {1}{2}}(\cosh 2x+1)\end{aligned}}} The following inequality is useful in statistics:[23]cosh⁡(t)≤et2/2.{\displaystyle \operatorname {cosh} (t)\leq e^{t^{2}/2}.} It can be proved by comparing the Taylor series of the two functions term by term. 
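A numerical spot-check of the identities above: the addition formulas, the hyperbolic analogue of the Pythagorean identity, and the bound cosh t ≤ e^(t²/2) (sample points ours):

```python
import math

x, y = 0.9, -0.4
# Addition formulas:
assert abs(math.sinh(x + y)
           - (math.sinh(x) * math.cosh(y) + math.cosh(x) * math.sinh(y))) < 1e-12
assert abs(math.cosh(x + y)
           - (math.cosh(x) * math.cosh(y) + math.sinh(x) * math.sinh(y))) < 1e-12

# cosh^2 - sinh^2 = 1, and the inequality used in statistics:
for t in (-3.0, -1.0, 0.0, 0.5, 2.0):
    assert abs(math.cosh(t) ** 2 - math.sinh(t) ** 2 - 1) < 1e-9
    assert math.cosh(t) <= math.exp(t * t / 2) + 1e-12
```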
arsinh⁡(x)=ln⁡(x+x2+1)arcosh⁡(x)=ln⁡(x+x2−1)x≥1artanh⁡(x)=12ln⁡(1+x1−x)|x|<1arcoth⁡(x)=12ln⁡(x+1x−1)|x|>1arsech⁡(x)=ln⁡(1x+1x2−1)=ln⁡(1+1−x2x)0<x≤1arcsch⁡(x)=ln⁡(1x+1x2+1)x≠0{\displaystyle {\begin{aligned}\operatorname {arsinh} (x)&=\ln \left(x+{\sqrt {x^{2}+1}}\right)\\\operatorname {arcosh} (x)&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&&x\geq 1\\\operatorname {artanh} (x)&={\frac {1}{2}}\ln \left({\frac {1+x}{1-x}}\right)&&|x|<1\\\operatorname {arcoth} (x)&={\frac {1}{2}}\ln \left({\frac {x+1}{x-1}}\right)&&|x|>1\\\operatorname {arsech} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)=\ln \left({\frac {1+{\sqrt {1-x^{2}}}}{x}}\right)&&0<x\leq 1\\\operatorname {arcsch} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&&x\neq 0\end{aligned}}} ddxsinh⁡x=cosh⁡xddxcosh⁡x=sinh⁡xddxtanh⁡x=1−tanh2⁡x=sech2⁡x=1cosh2⁡xddxcoth⁡x=1−coth2⁡x=−csch2⁡x=−1sinh2⁡xx≠0ddxsech⁡x=−tanh⁡xsech⁡xddxcsch⁡x=−coth⁡xcsch⁡xx≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\sinh x&=\cosh x\\{\frac {d}{dx}}\cosh x&=\sinh x\\{\frac {d}{dx}}\tanh x&=1-\tanh ^{2}x=\operatorname {sech} ^{2}x={\frac {1}{\cosh ^{2}x}}\\{\frac {d}{dx}}\coth x&=1-\coth ^{2}x=-\operatorname {csch} ^{2}x=-{\frac {1}{\sinh ^{2}x}}&&x\neq 0\\{\frac {d}{dx}}\operatorname {sech} x&=-\tanh x\operatorname {sech} x\\{\frac {d}{dx}}\operatorname {csch} x&=-\coth x\operatorname {csch} x&&x\neq 0\end{aligned}}}ddxarsinh⁡x=1x2+1ddxarcosh⁡x=1x2−11<xddxartanh⁡x=11−x2|x|<1ddxarcoth⁡x=11−x21<|x|ddxarsech⁡x=−1x1−x20<x<1ddxarcsch⁡x=−1|x|1+x2x≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} x&={\frac {1}{\sqrt {x^{2}+1}}}\\{\frac {d}{dx}}\operatorname {arcosh} x&={\frac {1}{\sqrt {x^{2}-1}}}&&1<x\\{\frac {d}{dx}}\operatorname {artanh} x&={\frac {1}{1-x^{2}}}&&|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&={\frac {1}{1-x^{2}}}&&1<|x|\\{\frac {d}{dx}}\operatorname {arsech} x&=-{\frac {1}{x{\sqrt {1-x^{2}}}}}&&0<x<1\\{\frac {d}{dx}}\operatorname {arcsch} x&=-{\frac {1}{|x|{\sqrt 
{1+x^{2}}}}}&&x\neq 0\end{aligned}}} Each of the functionssinhandcoshis equal to itssecond derivative, that is:d2dx2sinh⁡x=sinh⁡x{\displaystyle {\frac {d^{2}}{dx^{2}}}\sinh x=\sinh x}d2dx2cosh⁡x=cosh⁡x.{\displaystyle {\frac {d^{2}}{dx^{2}}}\cosh x=\cosh x\,.} All functions with this property arelinear combinationsofsinhandcosh, in particular theexponential functionsex{\displaystyle e^{x}}ande−x{\displaystyle e^{-x}}.[24] ∫sinh⁡(ax)dx=a−1cosh⁡(ax)+C∫cosh⁡(ax)dx=a−1sinh⁡(ax)+C∫tanh⁡(ax)dx=a−1ln⁡(cosh⁡(ax))+C∫coth⁡(ax)dx=a−1ln⁡|sinh⁡(ax)|+C∫sech⁡(ax)dx=a−1arctan⁡(sinh⁡(ax))+C∫csch⁡(ax)dx=a−1ln⁡|tanh⁡(ax2)|+C=a−1ln⁡|coth⁡(ax)−csch⁡(ax)|+C=−a−1arcoth⁡(cosh⁡(ax))+C{\displaystyle {\begin{aligned}\int \sinh(ax)\,dx&=a^{-1}\cosh(ax)+C\\\int \cosh(ax)\,dx&=a^{-1}\sinh(ax)+C\\\int \tanh(ax)\,dx&=a^{-1}\ln(\cosh(ax))+C\\\int \coth(ax)\,dx&=a^{-1}\ln \left|\sinh(ax)\right|+C\\\int \operatorname {sech} (ax)\,dx&=a^{-1}\arctan(\sinh(ax))+C\\\int \operatorname {csch} (ax)\,dx&=a^{-1}\ln \left|\tanh \left({\frac {ax}{2}}\right)\right|+C=a^{-1}\ln \left|\coth \left(ax\right)-\operatorname {csch} \left(ax\right)\right|+C=-a^{-1}\operatorname {arcoth} \left(\cosh \left(ax\right)\right)+C\end{aligned}}} The following integrals can be proved usinghyperbolic substitution:∫1a2+u2du=arsinh⁡(ua)+C∫1u2−a2du=sgn⁡uarcosh⁡|ua|+C∫1a2−u2du=a−1artanh⁡(ua)+Cu2<a2∫1a2−u2du=a−1arcoth⁡(ua)+Cu2>a2∫1ua2−u2du=−a−1arsech⁡|ua|+C∫1ua2+u2du=−a−1arcsch⁡|ua|+C{\displaystyle {\begin{aligned}\int {{\frac {1}{\sqrt {a^{2}+u^{2}}}}\,du}&=\operatorname {arsinh} \left({\frac {u}{a}}\right)+C\\\int {{\frac {1}{\sqrt {u^{2}-a^{2}}}}\,du}&=\operatorname {sgn} {u}\operatorname {arcosh} \left|{\frac {u}{a}}\right|+C\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {artanh} \left({\frac {u}{a}}\right)+C&&u^{2}<a^{2}\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {arcoth} \left({\frac {u}{a}}\right)+C&&u^{2}>a^{2}\\\int {{\frac {1}{u{\sqrt {a^{2}-u^{2}}}}}\,du}&=-a^{-1}\operatorname {arsech} \left|{\frac 
{u}{a}}\right|+C\\\int {{\frac {1}{u{\sqrt {a^{2}+u^{2}}}}}\,du}&=-a^{-1}\operatorname {arcsch} \left|{\frac {u}{a}}\right|+C\end{aligned}}} whereCis theconstant of integration. It is possible to express explicitly theTaylor seriesat zero (or theLaurent series, if the function is not defined at zero) of the above functions. sinh⁡x=x+x33!+x55!+x77!+⋯=∑n=0∞x2n+1(2n+1)!{\displaystyle \sinh x=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functionsinhxisodd, only odd exponents forxoccur in its Taylor series. cosh⁡x=1+x22!+x44!+x66!+⋯=∑n=0∞x2n(2n)!{\displaystyle \cosh x=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functioncoshxiseven, only even exponents forxoccur in its Taylor series. The sum of the sinh and cosh series is theinfinite seriesexpression of theexponential function. 
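The logarithmic closed forms for the inverse hyperbolic functions given above can be checked against the standard-library inverses over their stated domains:

```python
import math

for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(math.asinh(x) - math.log(x + math.sqrt(x * x + 1))) < 1e-12
for x in (1.0, 1.5, 4.0):              # arcosh needs x >= 1
    assert abs(math.acosh(x) - math.log(x + math.sqrt(x * x - 1))) < 1e-12
for x in (-0.9, 0.0, 0.3, 0.8):        # artanh needs |x| < 1
    assert abs(math.atanh(x) - 0.5 * math.log((1 + x) / (1 - x))) < 1e-12
```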
The following series are followed by a description of a subset of theirdomain of convergence, where the series is convergent and its sum equals the function.tanh⁡x=x−x33+2x515−17x7315+⋯=∑n=1∞22n(22n−1)B2nx2n−1(2n)!,|x|<π2coth⁡x=x−1+x3−x345+2x5945+⋯=∑n=0∞22nB2nx2n−1(2n)!,0<|x|<πsech⁡x=1−x22+5x424−61x6720+⋯=∑n=0∞E2nx2n(2n)!,|x|<π2csch⁡x=x−1−x6+7x3360−31x515120+⋯=∑n=0∞2(1−22n−1)B2nx2n−1(2n)!,0<|x|<π{\displaystyle {\begin{aligned}\tanh x&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots =\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\coth x&=x^{-1}+{\frac {x}{3}}-{\frac {x^{3}}{45}}+{\frac {2x^{5}}{945}}+\cdots =\sum _{n=0}^{\infty }{\frac {2^{2n}B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \\\operatorname {sech} x&=1-{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}-{\frac {61x^{6}}{720}}+\cdots =\sum _{n=0}^{\infty }{\frac {E_{2n}x^{2n}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\operatorname {csch} x&=x^{-1}-{\frac {x}{6}}+{\frac {7x^{3}}{360}}-{\frac {31x^{5}}{15120}}+\cdots =\sum _{n=0}^{\infty }{\frac {2(1-2^{2n-1})B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \end{aligned}}} where: The following expansions are valid in the whole complex plane: The hyperbolic functions represent an expansion oftrigonometrybeyond thecircular functions. Both types depend on anargument, eithercircular angleorhyperbolic angle. Since thearea of a circular sectorwith radiusrand angleu(in radians) isr2u/2, it will be equal touwhenr=√2. In the diagram, such a circle is tangent to the hyperbolaxy= 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict ahyperbolic sectorwith area corresponding to hyperbolic angle magnitude. The legs of the tworight triangleswith hypotenuse on the ray defining the angles are of length√2times the circular and hyperbolic functions. 
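The everywhere-convergent sinh and cosh series above truncate well for moderate arguments; a short check with twelve terms each (helper names ours):

```python
import math

def sinh_series(x, terms=12):
    """Partial sum of sinh x = sum over n of x^(2n+1) / (2n+1)!."""
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x, terms=12):
    """Partial sum of cosh x = sum over n of x^(2n) / (2n)!."""
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

for x in (-2.0, 0.0, 1.0, 3.0):
    assert abs(sinh_series(x) - math.sinh(x)) < 1e-9
    assert abs(cosh_series(x) - math.cosh(x)) < 1e-9
```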
The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation.[25] The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely between two fixed points under uniform gravity. The decomposition of the exponential function into its even and odd parts gives the identities ex=cosh⁡x+sinh⁡x,{\displaystyle e^{x}=\cosh x+\sinh x,} and e−x=cosh⁡x−sinh⁡x.{\displaystyle e^{-x}=\cosh x-\sinh x.} Combined with Euler's formula eix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,} this gives ex+iy=(cosh⁡x+sinh⁡x)(cos⁡y+isin⁡y){\displaystyle e^{x+iy}=(\cosh x+\sinh x)(\cos y+i\sin y)} for the general complex exponential function. Additionally, ex=1+tanh⁡x1−tanh⁡x=1+tanh⁡x21−tanh⁡x2{\displaystyle e^{x}={\sqrt {\frac {1+\tanh x}{1-\tanh x}}}={\frac {1+\tanh {\frac {x}{2}}}{1-\tanh {\frac {x}{2}}}}} Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions sinh z and cosh z are then holomorphic. 
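The even/odd decomposition and the tanh expressions for the exponential can be verified numerically; a quick sketch:

```python
import math

x = 0.8
# even/odd decomposition of the exponential
assert abs(math.cosh(x) + math.sinh(x) - math.exp(x)) < 1e-12
assert abs(math.cosh(x) - math.sinh(x) - math.exp(-x)) < 1e-12
# e^x in terms of tanh: full-argument and half-argument forms
t, h = math.tanh(x), math.tanh(x / 2)
assert abs(math.sqrt((1 + t) / (1 - t)) - math.exp(x)) < 1e-12
assert abs((1 + h) / (1 - h) - math.exp(x)) < 1e-12
```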
Relationships to ordinary trigonometric functions are given byEuler's formulafor complex numbers:eix=cos⁡x+isin⁡xe−ix=cos⁡x−isin⁡x{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\e^{-ix}&=\cos x-i\sin x\end{aligned}}}so:cosh⁡(ix)=12(eix+e−ix)=cos⁡xsinh⁡(ix)=12(eix−e−ix)=isin⁡xcosh⁡(x+iy)=cosh⁡(x)cos⁡(y)+isinh⁡(x)sin⁡(y)sinh⁡(x+iy)=sinh⁡(x)cos⁡(y)+icosh⁡(x)sin⁡(y)tanh⁡(ix)=itan⁡xcosh⁡x=cos⁡(ix)sinh⁡x=−isin⁡(ix)tanh⁡x=−itan⁡(ix){\displaystyle {\begin{aligned}\cosh(ix)&={\frac {1}{2}}\left(e^{ix}+e^{-ix}\right)=\cos x\\\sinh(ix)&={\frac {1}{2}}\left(e^{ix}-e^{-ix}\right)=i\sin x\\\cosh(x+iy)&=\cosh(x)\cos(y)+i\sinh(x)\sin(y)\\\sinh(x+iy)&=\sinh(x)\cos(y)+i\cosh(x)\sin(y)\\\tanh(ix)&=i\tan x\\\cosh x&=\cos(ix)\\\sinh x&=-i\sin(ix)\\\tanh x&=-i\tan(ix)\end{aligned}}} Thus, hyperbolic functions areperiodicwith respect to the imaginary component, with period2πi{\displaystyle 2\pi i}(πi{\displaystyle \pi i}for hyperbolic tangent and cotangent).
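These relations, and the stated imaginary periods, can be spot-checked with complex arithmetic at IEEE double precision; a brief sketch using Python's cmath:

```python
import cmath
import math

x, y = 0.7, 1.2
# cosh(ix) = cos x, sinh(ix) = i sin x, tanh(ix) = i tan x
assert abs(cmath.cosh(1j * x) - math.cos(x)) < 1e-12
assert abs(cmath.sinh(1j * x) - 1j * math.sin(x)) < 1e-12
assert abs(cmath.tanh(1j * x) - 1j * math.tan(x)) < 1e-12
# addition formula for a general complex argument
z = complex(x, y)
assert abs(cmath.cosh(z) - (math.cosh(x) * math.cos(y)
                            + 1j * math.sinh(x) * math.sin(y))) < 1e-12
# periodicity: period 2*pi*i (pi*i for the hyperbolic tangent)
assert abs(cmath.sinh(z + 2j * math.pi) - cmath.sinh(z)) < 1e-12
assert abs(cmath.tanh(z + 1j * math.pi) - cmath.tanh(z)) < 1e-12
```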
https://en.wikipedia.org/wiki/Hyperbolic_function
Inmathematics, thelemniscate elliptic functionsareelliptic functionsrelated to the arc length of thelemniscate of Bernoulli. They were first studied byGiulio Fagnanoin 1718 and later byLeonhard EulerandCarl Friedrich Gauss, among others.[1] Thelemniscate sineandlemniscate cosinefunctions, usually written with the symbolsslandcl(sometimes the symbolssinlemandcoslemorsin lemnandcos lemnare used instead),[2]are analogous to thetrigonometric functionssine and cosine. While the trigonometric sine relates the arc length to the chord length in a unit-diametercirclex2+y2=x,{\displaystyle x^{2}+y^{2}=x,}[3]the lemniscate sine relates the arc length to the chord length of a lemniscate(x2+y2)2=x2−y2.{\displaystyle {\bigl (}x^{2}+y^{2}{\bigr )}{}^{2}=x^{2}-y^{2}.} The lemniscate functions have periods related to a numberϖ={\displaystyle \varpi =}2.622057...called thelemniscate constant, the ratio of a lemniscate's perimeter to its diameter. This number is aquarticanalog of the (quadratic)π={\displaystyle \pi =}3.141592...,ratio of perimeter to diameter of a circle. Ascomplex functions,slandclhave asquareperiod lattice(a multiple of theGaussian integers) withfundamental periods{(1+i)ϖ,(1−i)ϖ},{\displaystyle \{(1+i)\varpi ,(1-i)\varpi \},}[4]and are a special case of twoJacobi elliptic functionson that lattice,sl⁡z=sn⁡(z;i),{\displaystyle \operatorname {sl} z=\operatorname {sn} (z;i),}cl⁡z=cd⁡(z;i){\displaystyle \operatorname {cl} z=\operatorname {cd} (z;i)}. Similarly, thehyperbolic lemniscate sineslhandhyperbolic lemniscate cosineclhhave a square period lattice with fundamental periods{2ϖ,2ϖi}.{\displaystyle {\bigl \{}{\sqrt {2}}\varpi ,{\sqrt {2}}\varpi i{\bigr \}}.} The lemniscate functions and the hyperbolic lemniscate functions arerelatedto theWeierstrass elliptic function℘(z;a,0){\displaystyle \wp (z;a,0)}. 
The lemniscate functionsslandclcan be defined as the solution to theinitial value problem:[5] or equivalently as theinversesof anelliptic integral, theSchwarz–Christoffel mapfrom the complexunit diskto a square with corners{12ϖ,12ϖi,−12ϖ,−12ϖi}:{\displaystyle {\big \{}{\tfrac {1}{2}}\varpi ,{\tfrac {1}{2}}\varpi i,-{\tfrac {1}{2}}\varpi ,-{\tfrac {1}{2}}\varpi i{\big \}}\colon }[6] Beyond that square, the functions can beanalytically continuedto the wholecomplex planeby a series ofreflections. By comparison, the circular sine and cosine can be defined as the solution to the initial value problem: or as inverses of a map from theupper half-planeto a half-infinite strip with real part between−12π,12π{\displaystyle -{\tfrac {1}{2}}\pi ,{\tfrac {1}{2}}\pi }and positive imaginary part: The lemniscate functions have minimal real period2ϖ, minimalimaginaryperiod2ϖiand fundamental complex periods(1+i)ϖ{\displaystyle (1+i)\varpi }and(1−i)ϖ{\displaystyle (1-i)\varpi }for a constantϖcalled thelemniscate constant,[7] The lemniscate functions satisfy the basic relationcl⁡z=sl(12ϖ−z),{\displaystyle \operatorname {cl} z={\operatorname {sl} }{\bigl (}{\tfrac {1}{2}}\varpi -z{\bigr )},}analogous to the relationcos⁡z=sin(12π−z).{\displaystyle \cos z={\sin }{\bigl (}{\tfrac {1}{2}}\pi -z{\bigr )}.} The lemniscate constantϖis a close analog of thecircle constantπ, and many identities involvingπhave analogues involvingϖ, as identities involving thetrigonometric functionshave analogues involving the lemniscate functions. 
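A numerical sketch of the definition above: since the inverse integral has derivative 1/√(1 − x⁴), the lemniscate sine satisfies sl′² = 1 − sl⁴ and hence sl″ = −2 sl³ (the second-derivative relation also stated later in the article), which can be integrated directly on the real line. The step count and the hard-coded value of ϖ are choices of convenience, not part of the definition:

```python
import math

def sl(z, n_steps=4000):
    # Integrate sl'' = -2 sl^3 with sl(0) = 0, sl'(0) = 1
    # using the classical 4th-order Runge-Kutta method.
    h = z / n_steps
    s, v = 0.0, 1.0
    for _ in range(n_steps):
        k1s, k1v = v, -2.0 * s**3
        s2, v2 = s + h / 2 * k1s, v + h / 2 * k1v
        k2s, k2v = v2, -2.0 * s2**3
        s3, v3 = s + h / 2 * k2s, v + h / 2 * k2v
        k3s, k3v = v3, -2.0 * s3**3
        s4, v4 = s + h * k3s, v + h * k3v
        k4s, k4v = v4, -2.0 * s4**3
        s += h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return s

VARPI = 2.62205755429212  # lemniscate constant

assert abs(sl(VARPI / 2) - 1.0) < 1e-8   # sl reaches 1 at the quarter period
assert abs(sl(VARPI)) < 1e-8             # simple zero at varpi
assert abs(sl(2 * VARPI)) < 1e-8         # minimal real period 2*varpi
```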
For example,Viète's formulaforπcan be written: 2π=12⋅12+1212⋅12+1212+1212⋯{\displaystyle {\frac {2}{\pi }}={\sqrt {\frac {1}{2}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {\frac {1}{2}}}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {\frac {1}{2}}}}}}}\cdots } An analogous formula forϖis:[8] 2ϖ=12⋅12+12/12⋅12+12/12+12/12⋯{\displaystyle {\frac {2}{\varpi }}={\sqrt {\frac {1}{2}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\bigg /}\!{\sqrt {\frac {1}{2}}}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\Bigg /}\!{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\bigg /}\!{\sqrt {\frac {1}{2}}}}}}}\cdots } TheMachin formulaforπis14π=4arctan⁡15−arctan⁡1239,{\textstyle {\tfrac {1}{4}}\pi =4\arctan {\tfrac {1}{5}}-\arctan {\tfrac {1}{239}},}and several similar formulas forπcan be developed using trigonometric angle sum identities, e.g. Euler's formula14π=arctan⁡12+arctan⁡13{\textstyle {\tfrac {1}{4}}\pi =\arctan {\tfrac {1}{2}}+\arctan {\tfrac {1}{3}}}. Analogous formulas can be developed forϖ, including the following found by Gauss:12ϖ=2arcsl⁡12+arcsl⁡723.{\displaystyle {\tfrac {1}{2}}\varpi =2\operatorname {arcsl} {\tfrac {1}{2}}+\operatorname {arcsl} {\tfrac {7}{23}}.}[9] The lemniscate and circle constants were found by Gauss to be related to each-other by thearithmetic-geometric meanM:[10] πϖ=M(1,2){\displaystyle {\frac {\pi }{\varpi }}=M{\left(1,{\sqrt {2}}\!~\right)}} The lemniscate functionsclandslareeven and odd functions, respectively, At translations of12ϖ,{\displaystyle {\tfrac {1}{2}}\varpi ,}clandslare exchanged, and at translations of12iϖ{\displaystyle {\tfrac {1}{2}}i\varpi }they are additionally rotated andreciprocated:[12] Doubling these to translations by aunit-Gaussian-integer multiple ofϖ{\displaystyle \varpi }(that is,±ϖ{\displaystyle \pm \varpi }or±iϖ{\displaystyle \pm i\varpi }), negates each function, aninvolution: As a result, both functions are invariant under translation by aneven-Gaussian-integermultiple 
ofϖ{\displaystyle \varpi }.[13]That is, a displacement(a+bi)ϖ,{\displaystyle (a+bi)\varpi ,}witha+b=2k{\displaystyle a+b=2k}for integersa,b, andk. This makes themelliptic functions(doubly periodicmeromorphic functionsin the complex plane) with adiagonal squareperiod latticeof fundamental periods(1+i)ϖ{\displaystyle (1+i)\varpi }and(1−i)ϖ{\displaystyle (1-i)\varpi }.[14]Elliptic functions with a square period lattice are more symmetrical than arbitrary elliptic functions, following the symmetries of the square. Reflections and quarter-turn rotations of lemniscate function arguments have simple expressions: Theslfunction has simplezerosat Gaussian integer multiples ofϖ, complex numbers of the formaϖ+bϖi{\displaystyle a\varpi +b\varpi i}for integersaandb. It has simplepolesat Gaussianhalf-integermultiples ofϖ, complex numbers of the form(a+12)ϖ+(b+12)ϖi{\displaystyle {\bigl (}a+{\tfrac {1}{2}}{\bigr )}\varpi +{\bigl (}b+{\tfrac {1}{2}}{\bigr )}\varpi i}, withresidues(−1)a−b+1i{\displaystyle (-1)^{a-b+1}i}. Theclfunction is reflected and offset from theslfunction,cl⁡z=sl(12ϖ−z){\displaystyle \operatorname {cl} z={\operatorname {sl} }{\bigl (}{\tfrac {1}{2}}\varpi -z{\bigr )}}. It has zeros for arguments(a+12)ϖ+bϖi{\displaystyle {\bigl (}a+{\tfrac {1}{2}}{\bigr )}\varpi +b\varpi i}and poles for argumentsaϖ+(b+12)ϖi,{\displaystyle a\varpi +{\bigl (}b+{\tfrac {1}{2}}{\bigr )}\varpi i,}with residues(−1)a−bi.{\displaystyle (-1)^{a-b}i.} Also for somem,n∈Z{\displaystyle m,n\in \mathbb {Z} }and The last formula is a special case ofcomplex multiplication. 
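Gauss's arithmetic–geometric mean relation π/ϖ = M(1, √2) quoted above gives a very fast way to compute the lemniscate constant, since the AGM iteration converges quadratically. A minimal sketch (the helper name agm is illustrative):

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean M(a, b)
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# pi / varpi = M(1, sqrt(2))  =>  varpi = pi / M(1, sqrt(2))
varpi = math.pi / agm(1.0, math.sqrt(2.0))
assert abs(varpi - 2.6220575542921198) < 1e-12
```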
Analogous formulas can be given forsl⁡((n+mi)z){\displaystyle \operatorname {sl} ((n+mi)z)}wheren+mi{\displaystyle n+mi}is any Gaussian integer – the functionsl{\displaystyle \operatorname {sl} }has complex multiplication byZ[i]{\displaystyle \mathbb {Z} [i]}.[15] There are also infinite series reflecting the distribution of the zeros and poles ofsl:[16][17] The lemniscate functions satisfy aPythagorean-like identity: As a result, the parametric equation(x,y)=(cl⁡t,sl⁡t){\displaystyle (x,y)=(\operatorname {cl} t,\operatorname {sl} t)}parametrizes thequartic curvex2+y2+x2y2=1.{\displaystyle x^{2}+y^{2}+x^{2}y^{2}=1.} This identity can alternately be rewritten:[18] Defining atangent-sumoperator asa⊕b:=tan⁡(arctan⁡a+arctan⁡b)=a+b1−ab,{\displaystyle a\oplus b\mathrel {:=} \tan(\arctan a+\arctan b)={\frac {a+b}{1-ab}},}gives: The functionscl~{\displaystyle {\tilde {\operatorname {cl} }}}andsl~{\displaystyle {\tilde {\operatorname {sl} }}}satisfy another Pythagorean-like identity: The derivatives are as follows: The second derivatives of lemniscate sine and lemniscate cosine are their negative duplicated cubes: The lemniscate functions can be integrated using the inverse tangent function: Like the trigonometric functions, the lemniscate functions satisfy argument sum and difference identities. The original identity used by Fagnano for bisection of the lemniscate was:[19] The derivative and Pythagorean-like identities can be used to rework the identity used by Fagano in terms ofslandcl. 
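The Pythagorean-like identity can be verified numerically by computing sl as the inverse of the arc-length integral and using cl t = sl(ϖ/2 − t); a sketch (step counts, bisection depth, and tolerances are ad hoc, and ϖ is hard-coded):

```python
import math

def arcsl(x, n=2000):
    # arcsl x = integral_0^x dt / sqrt(1 - t^4), composite Simpson's rule
    h = x / n
    total = 0.0
    for i in range(n):
        a = i * h
        m = a + h / 2
        b = a + h
        total += h / 6 * (1 / math.sqrt(1 - a**4)
                          + 4 / math.sqrt(1 - m**4)
                          + 1 / math.sqrt(1 - b**4))
    return total

def sl(u):
    # invert arcsl by bisection; valid for 0 <= u <= varpi/2
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(60):
        mid = (lo + hi) / 2
        if arcsl(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

VARPI = 2.62205755429212   # lemniscate constant
t = 0.4
s = sl(t)                  # sl t
c = sl(VARPI / 2 - t)      # cl t = sl(varpi/2 - t)
# sl^2 + cl^2 + sl^2 cl^2 = 1
assert abs(s * s + c * c + s * s * c * c - 1.0) < 1e-6
```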
Defining atangent-sumoperatora⊕b:=tan⁡(arctan⁡a+arctan⁡b){\displaystyle a\oplus b\mathrel {:=} \tan(\arctan a+\arctan b)}and tangent-difference operatora⊖b:=a⊕(−b),{\displaystyle a\ominus b\mathrel {:=} a\oplus (-b),}the argument sum and difference identities can be expressed as:[20] These resemble theirtrigonometric analogs: In particular, to compute the complex-valued functions in real components, Gauss discovered that whereu,v∈C{\displaystyle u,v\in \mathbb {C} }such that both sides are well-defined. Also whereu,v∈C{\displaystyle u,v\in \mathbb {C} }such that both sides are well-defined; this resembles the trigonometric analog Bisection formulas: Duplication formulas:[21] Triplication formulas:[21] Note the "reverse symmetry" of the coefficients of numerator and denominator ofsl⁡3x{\displaystyle \operatorname {sl} 3x}. This phenomenon can be observed in multiplication formulas forsl⁡βx{\displaystyle \operatorname {sl} \beta x}whereβ=m+ni{\displaystyle \beta =m+ni}wheneverm,n∈Z{\displaystyle m,n\in \mathbb {Z} }andm+n{\displaystyle m+n}is odd.[15] LetL{\displaystyle L}be thelattice Furthermore, letK=Q(i){\displaystyle K=\mathbb {Q} (i)},O=Z[i]{\displaystyle {\mathcal {O}}=\mathbb {Z} [i]},z∈C{\displaystyle z\in \mathbb {C} },β=m+in{\displaystyle \beta =m+in},γ=m′+in′{\displaystyle \gamma =m'+in'}(wherem,n,m′,n′∈Z{\displaystyle m,n,m',n'\in \mathbb {Z} }),m+n{\displaystyle m+n}be odd,m′+n′{\displaystyle m'+n'}be odd,γ≡1mod2(1+i){\displaystyle \gamma \equiv 1\,\operatorname {mod} \,2(1+i)}andsl⁡βz=Mβ(sl⁡z){\displaystyle \operatorname {sl} \beta z=M_{\beta }(\operatorname {sl} z)}. 
Then for some coprime polynomialsPβ(x),Qβ(x)∈O[x]{\displaystyle P_{\beta }(x),Q_{\beta }(x)\in {\mathcal {O}}[x]}and someε∈{0,1,2,3}{\displaystyle \varepsilon \in \{0,1,2,3\}}[22]where and whereδβ{\displaystyle \delta _{\beta }}is anyβ{\displaystyle \beta }-torsiongenerator (i.e.δβ∈(1/β)L{\displaystyle \delta _{\beta }\in (1/\beta )L}and[δβ]∈(1/β)L/L{\displaystyle [\delta _{\beta }]\in (1/\beta )L/L}generates(1/β)L/L{\displaystyle (1/\beta )L/L}as anO{\displaystyle {\mathcal {O}}}-module). Examples ofβ{\displaystyle \beta }-torsion generators include2ϖ/β{\displaystyle 2\varpi /\beta }and(1+i)ϖ/β{\displaystyle (1+i)\varpi /\beta }. The polynomialΛβ(x)∈O[x]{\displaystyle \Lambda _{\beta }(x)\in {\mathcal {O}}[x]}is called theβ{\displaystyle \beta }-thlemnatomic polynomial. It is monic and is irreducible overK{\displaystyle K}. The lemnatomic polynomials are the "lemniscate analogs" of thecyclotomic polynomials,[23] Theβ{\displaystyle \beta }-th lemnatomic polynomialΛβ(x){\displaystyle \Lambda _{\beta }(x)}is theminimal polynomialofsl⁡δβ{\displaystyle \operatorname {sl} \delta _{\beta }}inK[x]{\displaystyle K[x]}. For convenience, letωβ=sl⁡(2ϖ/β){\displaystyle \omega _{\beta }=\operatorname {sl} (2\varpi /\beta )}andω~β=sl⁡((1+i)ϖ/β){\displaystyle {\tilde {\omega }}_{\beta }=\operatorname {sl} ((1+i)\varpi /\beta )}. So for example, the minimal polynomial ofω5{\displaystyle \omega _{5}}(and also ofω~5{\displaystyle {\tilde {\omega }}_{5}}) inK[x]{\displaystyle K[x]}is and[24] (an equivalent expression is given in the table below). 
Another example is[23] which is the minimal polynomial ofω−1+2i{\displaystyle \omega _{-1+2i}}(and also ofω~−1+2i{\displaystyle {\tilde {\omega }}_{-1+2i}}) inK[x].{\displaystyle K[x].} Ifp{\displaystyle p}is prime andβ{\displaystyle \beta }is positive and odd,[26]then[27] which can be compared to the cyclotomic analog Just as for the trigonometric functions, values of the lemniscate functions can be computed for divisions of the lemniscate intonparts of equal length, using only basic arithmetic and square roots, if and only ifnis of the formn=2kp1p2⋯pm{\displaystyle n=2^{k}p_{1}p_{2}\cdots p_{m}}wherekis a non-negativeintegerand eachpi(if any) is a distinctFermat prime.[28] L{\displaystyle {\mathcal {L}}}, thelemniscate of Bernoulliwith unit distance from its center to its furthest point (i.e. with unit "half-width"), is essential in the theory of the lemniscate elliptic functions. It can becharacterizedin at least three ways: Angular characterization:Given two pointsA{\displaystyle A}andB{\displaystyle B}which are unit distance apart, letB′{\displaystyle B'}be thereflectionofB{\displaystyle B}aboutA{\displaystyle A}. ThenL{\displaystyle {\mathcal {L}}}is theclosureof the locus of the pointsP{\displaystyle P}such that|APB−APB′|{\displaystyle |APB-APB'|}is aright angle.[29] Focal characterization:L{\displaystyle {\mathcal {L}}}is the locus of points in the plane such that the product of their distances from the two focal pointsF1=(−12,0){\displaystyle F_{1}={\bigl (}{-{\tfrac {1}{\sqrt {2}}}},0{\bigr )}}andF2=(12,0){\displaystyle F_{2}={\bigl (}{\tfrac {1}{\sqrt {2}}},0{\bigr )}}is the constant12{\displaystyle {\tfrac {1}{2}}}. 
Explicit coordinate characterization:L{\displaystyle {\mathcal {L}}}is aquartic curvesatisfying thepolarequationr2=cos⁡2θ{\displaystyle r^{2}=\cos 2\theta }or theCartesianequation(x2+y2)2=x2−y2.{\displaystyle {\bigl (}x^{2}+y^{2}{\bigr )}{}^{2}=x^{2}-y^{2}.} TheperimeterofL{\displaystyle {\mathcal {L}}}is2ϖ{\displaystyle 2\varpi }.[30] The points onL{\displaystyle {\mathcal {L}}}at distancer{\displaystyle r}from the origin are the intersections of the circlex2+y2=r2{\displaystyle x^{2}+y^{2}=r^{2}}and thehyperbolax2−y2=r4{\displaystyle x^{2}-y^{2}=r^{4}}. The intersection in the positive quadrant has Cartesian coordinates: Using thisparametrizationwithr∈[0,1]{\displaystyle r\in [0,1]}for a quarter ofL{\displaystyle {\mathcal {L}}}, thearc lengthfrom the origin to a point(x(r),y(r)){\displaystyle {\big (}x(r),y(r){\big )}}is:[31] Likewise, the arc length from(1,0){\displaystyle (1,0)}to(x(r),y(r)){\displaystyle {\big (}x(r),y(r){\big )}}is: Or in the inverse direction, the lemniscate sine and cosine functions give the distance from the origin as functions of arc length from the origin and the point(1,0){\displaystyle (1,0)}, respectively. Analogously, the circular sine and cosine functions relate the chord length to the arc length for the unit diameter circle with polar equationr=cos⁡θ{\displaystyle r=\cos \theta }or Cartesian equationx2+y2=x,{\displaystyle x^{2}+y^{2}=x,}using the same argument above but with the parametrization: Alternatively, just as theunit circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}is parametrized in terms of the arc lengths{\displaystyle s}from the point(1,0){\displaystyle (1,0)}by L{\displaystyle {\mathcal {L}}}is parametrized in terms of the arc lengths{\displaystyle s}from the point(1,0){\displaystyle (1,0)}by[32] The notationcl~,sl~{\displaystyle {\tilde {\operatorname {cl} }},\,{\tilde {\operatorname {sl} }}}is used solely for the purposes of this article; in references, notation for general Jacobi elliptic functions is used instead. 
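The circle–hyperbola parametrization above can be checked numerically: the speed of the curve (x(r), y(r)) should equal 1/√(1 − r⁴), which is exactly what makes the arc length from the origin the inverse-lemniscate-sine integral. A sketch using finite differences:

```python
import math

def xy(r):
    # intersection, in the positive quadrant, of the circle x^2 + y^2 = r^2
    # with the hyperbola x^2 - y^2 = r^4
    x = math.sqrt((r**2 + r**4) / 2)
    y = math.sqrt((r**2 - r**4) / 2)
    return x, y

# the speed |(x'(r), y'(r))| equals 1/sqrt(1 - r^4), so the arc length
# from the origin is the integral of dr / sqrt(1 - r^4), i.e. arcsl r
for r in (0.2, 0.5, 0.8):
    h = 1e-6
    x1, y1 = xy(r - h)
    x2, y2 = xy(r + h)
    speed = math.hypot(x2 - x1, y2 - y1) / (2 * h)
    assert abs(speed - 1 / math.sqrt(1 - r**4)) < 1e-5
```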
The lemniscate integral and lemniscate functions satisfy an argument duplication identity discovered by Fagnano in 1718:[33] Later mathematicians generalized this result. Analogously to theconstructible polygonsin the circle, the lemniscate can be divided intonsections of equal arc length using onlystraightedge and compassif and only ifnis of the formn=2kp1p2⋯pm{\displaystyle n=2^{k}p_{1}p_{2}\cdots p_{m}}wherekis a non-negativeintegerand eachpi(if any) is a distinctFermat prime.[34]The "if" part of the theorem was proved byNiels Abelin 1827–1828, and the "only if" part was proved byMichael Rosenin 1981.[35]Equivalently, the lemniscate can be divided intonsections of equal arc length using only straightedge and compass if and only ifφ(n){\displaystyle \varphi (n)}is a power of two (whereφ{\displaystyle \varphi }isEuler's totient function). The lemniscate isnotassumed to be already drawn, as that would go against the rules of straightedge and compass constructions; instead, it is assumed that we are given only two points by which the lemniscate is defined, such as its center and radial point (one of the two points on the lemniscate such that their distance from the center is maximal) or its two foci. Letrj=sl⁡2jϖn{\displaystyle r_{j}=\operatorname {sl} {\dfrac {2j\varpi }{n}}}. Then then-division points forL{\displaystyle {\mathcal {L}}}are the points where⌊⋅⌋{\displaystyle \lfloor \cdot \rfloor }is thefloor function. Seebelowfor some specific values ofsl⁡2ϖn{\displaystyle \operatorname {sl} {\dfrac {2\varpi }{n}}}. The inverse lemniscate sine also describes the arc lengthsrelative to thexcoordinate of the rectangularelastica.[36]This curve hasycoordinate and arc length: The rectangular elastica solves a problem posed byJacob Bernoulli, in 1691, to describe the shape of an idealized flexible rod fixed in a vertical orientation at the bottom end, and pulled down by a weight from the far end until it has been bent horizontal. 
Bernoulli's proposed solution establishedEuler–Bernoulli beam theory, further developed by Euler in the 18th century. LetC{\displaystyle C}be a point on the ellipsex2+2y2=1{\displaystyle x^{2}+2y^{2}=1}in the first quadrant and letD{\displaystyle D}be the projection ofC{\displaystyle C}on the unit circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}. The distancer{\displaystyle r}between the originA{\displaystyle A}and the pointC{\displaystyle C}is a function ofφ{\displaystyle \varphi }(the angleBAC{\displaystyle BAC}whereB=(1,0){\displaystyle B=(1,0)}; equivalently the length of the circular arcBD{\displaystyle BD}). The parameteru{\displaystyle u}is given by IfE{\displaystyle E}is the projection ofD{\displaystyle D}on the x-axis and ifF{\displaystyle F}is the projection ofC{\displaystyle C}on the x-axis, then the lemniscate elliptic functions are given by Thepower seriesexpansion of the lemniscate sine at the origin is[37] where the coefficientsan{\displaystyle a_{n}}are determined as follows: wherei+j+k=n{\displaystyle i+j+k=n}stands for all three-termcompositionsofn{\displaystyle n}. For example, to evaluatea13{\displaystyle a_{13}}, it can be seen that there are only six compositions of13−2=11{\displaystyle 13-2=11}that give a nonzero contribution to the sum:11=9+1+1=1+9+1=1+1+9{\displaystyle 11=9+1+1=1+9+1=1+1+9}and11=5+5+1=5+1+5=1+5+5{\displaystyle 11=5+5+1=5+1+5=1+5+5}, so The expansion can be equivalently written as[38] where The power series expansion ofsl~{\displaystyle {\tilde {\operatorname {sl} }}}at the origin is whereαn=0{\displaystyle \alpha _{n}=0}ifn{\displaystyle n}is even and[39] ifn{\displaystyle n}is odd. The expansion can be equivalently written as[40] where For the lemniscate cosine,[41] where Ramanujan's famous cos/cosh identity states that if then[39] There is a close relation between the lemniscate functions andR(s){\displaystyle R(s)}. 
Indeed,[39][42] and Forz∈C∖{0}{\displaystyle z\in \mathbb {C} \setminus \{0\}}:[43] A fast algorithm, returning approximations tosl⁡x{\displaystyle \operatorname {sl} x}(which get closer tosl⁡x{\displaystyle \operatorname {sl} x}with increasingN{\displaystyle N}), is the following:[44] This is effectively using the arithmetic-geometric mean and is based onLanden's transformations.[45] Several methods of computingsl⁡x{\displaystyle \operatorname {sl} x}involve first making the change of variablesπx=ϖx~{\displaystyle \pi x=\varpi {\tilde {x}}}and then computingsl⁡(ϖx~/π).{\displaystyle \operatorname {sl} (\varpi {\tilde {x}}/\pi ).} Ahyperbolicseries method:[46][47] Fourier seriesmethod:[48] The lemniscate functions can be computed more rapidly by where are theJacobi theta functions.[49] Fourier series for the logarithm of the lemniscate sine: The following series identities were discovered byRamanujan:[50] The functionssl~{\displaystyle {\tilde {\operatorname {sl} }}}andcl~{\displaystyle {\tilde {\operatorname {cl} }}}analogous tosin{\displaystyle \sin }andcos{\displaystyle \cos }on the unit circle have the following Fourier and hyperbolic series expansions:[39][42][51] The following identities come from product representations of the theta functions:[52] A similar formula involving thesn{\displaystyle \operatorname {sn} }function can be given.[53] Since the lemniscate sine is a meromorphic function in the whole complex plane, it can be written as a ratio ofentire functions. Gauss showed thatslhas the following product expansion, reflecting the distribution of its zeros and poles:[54] where Here,α{\displaystyle \alpha }andβ{\displaystyle \beta }denote, respectively, the zeros and poles ofslwhich are in the quadrantRe⁡z>0,Im⁡z≥0{\displaystyle \operatorname {Re} z>0,\operatorname {Im} z\geq 0}. 
A proof can be found in [54][55]. Importantly, the infinite products converge to the same value for all possible orders in which their terms can be multiplied, as a consequence of uniform convergence.[56] Proof by logarithmic differentiation: It can be easily seen (using uniform and absolute convergence arguments to justify interchanging of limiting operations) that (where Hn{\displaystyle \mathrm {H} _{n}} are the Hurwitz numbers defined in Lemniscate elliptic functions § Hurwitz numbers) and Therefore It is known that Then from and we get Hence Therefore for some constant C{\displaystyle C} for |z|<ϖ/√2{\displaystyle \left|z\right|<\varpi /{\sqrt {2}}}, but this result holds for all z∈C{\displaystyle z\in \mathbb {C} } by analytic continuation. Using gives C=1{\displaystyle C=1}, which completes the proof.◼{\displaystyle \blacksquare } Proof by Liouville's theorem: Let with patches at removable singularities. The shifting formulas imply that f{\displaystyle f} is an elliptic function with periods 2ϖ{\displaystyle 2\varpi } and 2ϖi{\displaystyle 2\varpi i}, just as sl{\displaystyle \operatorname {sl} }. It follows that the function g{\displaystyle g} defined by when patched, is an elliptic function without poles. By Liouville's theorem, it is a constant. By using sl⁡z=z+O⁡(z5){\displaystyle \operatorname {sl} z=z+\operatorname {O} (z^{5})}, M(z)=z+O⁡(z5){\displaystyle M(z)=z+\operatorname {O} (z^{5})} and N(z)=1+O⁡(z4){\displaystyle N(z)=1+\operatorname {O} (z^{4})}, this constant is 1{\displaystyle 1}, which proves the theorem.◼{\displaystyle \blacksquare } Gauss conjectured that ln⁡N(ϖ)=π/2{\displaystyle \ln N(\varpi )=\pi /2} (this later turned out to be true) and commented that this “is most remarkable and a proof of this property promises the most serious increase in analysis”.[57] Gauss expanded the products for M{\displaystyle M} and N{\displaystyle N} as infinite series (see below). 
He also discovered several identities involving the functionsM{\displaystyle M}andN{\displaystyle N}, such as and Thanks to a certain theorem[58]on splitting limits, we are allowed to multiply out the infinite products and collect like powers ofz{\displaystyle z}. Doing so gives the following power series expansions that are convergent everywhere in the complex plane:[59][60][61][62][63] This can be contrasted with the power series ofsl{\displaystyle \operatorname {sl} }which has only finite radius of convergence (because it is not entire). We defineS{\displaystyle S}andT{\displaystyle T}by Then the lemniscate cosine can be written as where[64] Furthermore, the identities and the Pythagorean-like identities hold for allz∈C{\displaystyle z\in \mathbb {C} }. The quasi-addition formulas (wherez,w∈C{\displaystyle z,w\in \mathbb {C} }) imply further multiplication formulas forM{\displaystyle M}andN{\displaystyle N}by recursion.[65] Gauss'M{\displaystyle M}andN{\displaystyle N}satisfy the following system of differential equations: wherez∈C{\displaystyle z\in \mathbb {C} }. BothM{\displaystyle M}andN{\displaystyle N}satisfy the differential equation[66] The functions can be also expressed by integrals involving elliptic functions: where the contours do not cross the poles; while the innermost integrals are path-independent, the outermost ones are path-dependent; however, the path dependence cancels out with the non-injectivity of the complex exponential function. An alternative way of expressing the lemniscate functions as a ratio of entire functions involves the theta functions (seeLemniscate elliptic functions § Methods of computation); the relation betweenM,N{\displaystyle M,N}andθ1,θ3{\displaystyle \theta _{1},\theta _{3}}is wherez∈C{\displaystyle z\in \mathbb {C} }. The lemniscate functions are closely related to theWeierstrass elliptic function℘(z;1,0){\displaystyle \wp (z;1,0)}(the "lemniscatic case"), with invariantsg2= 1andg3= 0. 
This lattice has fundamental periodsω1=2ϖ,{\displaystyle \omega _{1}={\sqrt {2}}\varpi ,}andω2=iω1{\displaystyle \omega _{2}=i\omega _{1}}. The associated constants of the Weierstrass function aree1=12,e2=0,e3=−12.{\displaystyle e_{1}={\tfrac {1}{2}},\ e_{2}=0,\ e_{3}=-{\tfrac {1}{2}}.} The related case of a Weierstrass elliptic function withg2=a,g3= 0may be handled by a scaling transformation. However, this may involve complex numbers. If it is desired to remain within real numbers, there are two cases to consider:a> 0anda< 0. The periodparallelogramis either asquareor arhombus. The Weierstrass elliptic function℘(z;−1,0){\displaystyle \wp (z;-1,0)}is called the "pseudolemniscatic case".[67] The square of the lemniscate sine can be represented as where the second and third argument of℘{\displaystyle \wp }denote the lattice invariantsg2andg3. The lemniscate sine is arational functionin the Weierstrass elliptic function and its derivative:[68] The lemniscate functions can also be written in terms ofJacobi elliptic functions. The Jacobi elliptic functionssn{\displaystyle \operatorname {sn} }andcd{\displaystyle \operatorname {cd} }with positive real elliptic modulus have an "upright" rectangular lattice aligned with real and imaginary axes. Alternately, the functionssn{\displaystyle \operatorname {sn} }andcd{\displaystyle \operatorname {cd} }with modulusi(andsd{\displaystyle \operatorname {sd} }andcn{\displaystyle \operatorname {cn} }with modulus1/2{\displaystyle 1/{\sqrt {2}}}) have a square period lattice rotated 1/8 turn.[69][70] where the second arguments denote the elliptic modulusk{\displaystyle k}. 
The functions sl~{\displaystyle {\tilde {\operatorname {sl} }}} and cl~{\displaystyle {\tilde {\operatorname {cl} }}} can also be expressed in terms of Jacobi elliptic functions: The lemniscate sine can be used for the computation of values of the modular lambda function: For example: The inverse function of the lemniscate sine is the lemniscate arcsine, defined as[71] It can also be represented by the hypergeometric function: which can be easily seen by using the binomial series. The inverse function of the lemniscate cosine is the lemniscate arccosine, defined by the following expression: For x in the interval −1≤x≤1{\displaystyle -1\leq x\leq 1}, sl⁡arcsl⁡x=x{\displaystyle \operatorname {sl} \operatorname {arcsl} x=x} and cl⁡arccl⁡x=x{\displaystyle \operatorname {cl} \operatorname {arccl} x=x}. For the halving of the lemniscate arc length these formulas are valid:[citation needed] Furthermore, there are the so-called hyperbolic lemniscate area functions:[citation needed] The lemniscate arcsine and the lemniscate arccosine can also be expressed in Legendre form: These functions can be expressed directly using the incomplete elliptic integral of the first kind:[citation needed] The arc lengths of the lemniscate can also be expressed using only the arc lengths of ellipses (calculated by elliptic integrals of the second kind):[citation needed] The lemniscate arccosine has this expression:[citation needed] The lemniscate arcsine can be used to integrate many functions. Here is a list of important integrals (the constants of integration are omitted): For convenience, let σ=√2ϖ{\displaystyle \sigma ={\sqrt {2}}\varpi }. σ{\displaystyle \sigma } is the "squircular" analog of π{\displaystyle \pi } (see below). 
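The binomial-series derivation of the lemniscate arcsine is easy to make concrete: expanding (1 − t⁴)^(−1/2) termwise and integrating gives arcsl x = Σ C(2k,k) 4^(−k) x^(4k+1)/(4k+1), which is equivalent to the hypergeometric representation. A sketch comparing the series with direct quadrature (function names are illustrative):

```python
import math

def arcsl_series(x, terms=40):
    # arcsl x = sum_{k>=0} C(2k,k)/4^k * x^(4k+1)/(4k+1),
    # from the binomial series of (1 - t^4)^(-1/2), |x| < 1
    return sum(math.comb(2 * k, k) / 4.0**k * x**(4 * k + 1) / (4 * k + 1)
               for k in range(terms))

def arcsl_simpson(x, n=2000):
    # arcsl x = integral_0^x dt / sqrt(1 - t^4), composite Simpson's rule
    h = x / n
    total = 0.0
    for i in range(n):
        a = i * h
        m = a + h / 2
        b = a + h
        total += h / 6 * (1 / math.sqrt(1 - a**4)
                          + 4 / math.sqrt(1 - m**4)
                          + 1 / math.sqrt(1 - b**4))
    return total

assert abs(arcsl_series(0.5) - arcsl_simpson(0.5)) < 1e-10
```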
The decimal expansion ofσ{\displaystyle \sigma }(i.e.3.7081…{\displaystyle 3.7081\ldots }[72]) appears in entry 34e of chapter 11 of Ramanujan's second notebook.[73] The hyperbolic lemniscate sine (slh) and cosine (clh) can be defined as inverses of elliptic integrals as follows: where in(∗){\displaystyle (*)},z{\displaystyle z}is in the square with corners{σ/2,σi/2,−σ/2,−σi/2}{\displaystyle \{\sigma /2,\sigma i/2,-\sigma /2,-\sigma i/2\}}. Beyond that square, the functions can be analytically continued to meromorphic functions in the whole complex plane. The complete integral has the value: Therefore, the two defined functions have following relation to each other: The product of hyperbolic lemniscate sine and hyperbolic lemniscate cosine is equal to one: The functionsslh{\displaystyle \operatorname {slh} }andclh{\displaystyle \operatorname {clh} }have a square period lattice with fundamental periods{σ,σi}{\displaystyle \{\sigma ,\sigma i\}}. The hyperbolic lemniscate functions can be expressed in terms of lemniscate sine and lemniscate cosine: But there is also a relation to theJacobi elliptic functionswith the elliptic modulus one by square root of two: The hyperbolic lemniscate sine has following imaginary relation to the lemniscate sine: This is analogous to the relationship between hyperbolic and trigonometric sine: This image shows the standardized superelliptic Fermat squircle curve of the fourth degree: In a quarticFermat curvex4+y4=1{\displaystyle x^{4}+y^{4}=1}(sometimes called asquircle) the hyperbolic lemniscate sine and cosine are analogous to the tangent and cotangent functions in a unit circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}(the quadratic Fermat curve). 
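On the level of the defining integrals, the product identity slh(u) clh(u) = 1 is equivalent to arslh(x) + arslh(1/x) = σ/2, via the substitution t ↦ 1/t in the integral. A numerical sketch (ϖ is hard-coded; the integrand is smooth, so plain Simpson quadrature suffices):

```python
import math

def arslh(x, n=4000):
    # inverse hyperbolic lemniscate sine: integral_0^x dt / sqrt(1 + t^4)
    h = x / n
    total = 0.0
    for i in range(n):
        a = i * h
        m = a + h / 2
        b = a + h
        total += h / 6 * (1 / math.sqrt(1 + a**4)
                          + 4 / math.sqrt(1 + m**4)
                          + 1 / math.sqrt(1 + b**4))
    return total

VARPI = 2.62205755429212          # lemniscate constant
SIGMA = math.sqrt(2.0) * VARPI    # sigma = sqrt(2) * varpi
# t -> 1/t maps integral_0^x to integral_{1/x}^infinity, hence
# arslh(x) + arslh(1/x) = sigma/2, i.e. slh(u) * clh(u) = 1
for x in (0.5, 0.7, 1.3):
    assert abs(arslh(x) + arslh(1.0 / x) - SIGMA / 2) < 1e-9
```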
If the origin and a point on the curve are connected to each other by a lineL, the hyperbolic lemniscate sine of twice the enclosed area between this line and the x-axis is the y-coordinate of the intersection ofLwith the linex=1{\displaystyle x=1}.[74]Just asπ{\displaystyle \pi }is the area enclosed by the circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}, the area enclosed by the squirclex4+y4=1{\displaystyle x^{4}+y^{4}=1}isσ{\displaystyle \sigma }. Moreover, whereM{\displaystyle M}is thearithmetic–geometric mean. The hyperbolic lemniscate sine satisfies the argument addition identity: Whenu{\displaystyle u}is real, the derivative and an antiderivative ofslh{\displaystyle \operatorname {slh} }andclh{\displaystyle \operatorname {clh} }can be expressed in this way: dduslh⁡(u)=1+slh⁡(u)4{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}\operatorname {slh} (u)={\sqrt {1+\operatorname {slh} (u)^{4}}}} dduclh⁡(u)=−1+clh⁡(u)4{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}\operatorname {clh} (u)=-{\sqrt {1+\operatorname {clh} (u)^{4}}}} ddu12arsinh⁡[slh⁡(u)2]=slh⁡(u){\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}\,{\frac {1}{2}}\operatorname {arsinh} {\bigl [}\operatorname {slh} (u)^{2}{\bigr ]}=\operatorname {slh} (u)} ddu−12arsinh⁡[clh⁡(u)2]=clh⁡(u){\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}-\,{\frac {1}{2}}\operatorname {arsinh} {\bigl [}\operatorname {clh} (u)^{2}{\bigr ]}=\operatorname {clh} (u)} There are also the hyperbolic lemniscate tangent and the hyperbolic lemniscate cotangent as further functions: The functions tlh and ctlh satisfy the identities described by the differential equations mentioned: The functional designation sl stands for the lemniscatic sine and the designation cl stands for the lemniscatic cosine.
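As a numerical aside (a sketch, not part of the article), the constantσ{\displaystyle \sigma }introduced above can be computed from the arithmetic–geometric mean just mentioned, using Gauss's classical relation ϖ = π/M(1, √2):

```python
import math

def agm(a: float, b: float, tol: float = 1e-15) -> float:
    """Arithmetic-geometric mean M(a, b) by the usual iteration."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Lemniscate constant: varpi = pi / M(1, sqrt(2))  (Gauss's relation)
varpi = math.pi / agm(1.0, math.sqrt(2.0))
sigma = math.sqrt(2.0) * varpi   # the "squircular" analog of pi

print(f"varpi = {varpi:.8f}")   # 2.62205755
print(f"sigma = {sigma:.8f}")   # 3.70814935
```

The value of sigma agrees with the decimal expansion 3.7081… quoted above.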
In addition, these relations to theJacobi elliptic functionsare valid: Whenu{\displaystyle u}is real, the derivative and quarter period integral oftlh{\displaystyle \operatorname {tlh} }andctlh{\displaystyle \operatorname {ctlh} }can be expressed in this way: ddutlh⁡(u)=ctlh⁡(u)3{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}\operatorname {tlh} (u)=\operatorname {ctlh} (u)^{3}} dductlh⁡(u)=−tlh⁡(u)3{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} u}}\operatorname {ctlh} (u)=-\operatorname {tlh} (u)^{3}} ∫0ϖ/2tlh⁡(u)du=ϖ2{\displaystyle \int _{0}^{\varpi /{\sqrt {2}}}\operatorname {tlh} (u)\,\mathrm {d} u={\frac {\varpi }{2}}} ∫0ϖ/2ctlh⁡(u)du=ϖ2{\displaystyle \int _{0}^{\varpi /{\sqrt {2}}}\operatorname {ctlh} (u)\,\mathrm {d} u={\frac {\varpi }{2}}} The horizontal and vertical coordinates of this superellipse depend on twice the enclosed area w = 2A, so the following conditions must be met: The solutions to this system of equations are as follows: The quotient therefore satisfies the following: The functions x(w) and y(w) are called thehyperbolic lemniscate cotangentand thehyperbolic lemniscate tangent, respectively. The sketch also shows the fact that the derivative of the hyperbolic lemniscate areasine is the reciprocal of the square root of one plus the fourth power of its argument. There is a black diagonal on the sketch shown on the right. The length of the segment that runs perpendicularly from the intersection of this black diagonal with the red vertical axis to the point (1|0) is called s. The length of the section of the black diagonal from the origin to its point of intersection with the cyan curved line of the superellipse has the following value depending on the slh value: This connection is described by thePythagorean theorem. An analogous construction on the unit circle yields the circular arctangent, with the area assigned in the same way.
The following derivation applies: To determine the derivative of the hyperbolic lemniscate areasine, the infinitesimally small triangular areas for the same diagonal in the superellipse and in the unit circle are compared below, since summing these infinitesimally small triangular areas sweeps out the enclosed area. In the case of the superellipse in the picture, half of the area concerned is shown in green. Because the areas of triangles with the same infinitesimally small angle at the origin of the coordinates scale as the squares of their lengths, the following formula applies: In the picture shown, the hyperbolic lemniscate area tangent assigns twice the green area to the height of the intersection of the diagonal and the curved line. The green area itself arises as the integral of the superellipse function from zero to the relevant height value, minus the area of the adjacent triangle: The following transformation applies: And so, according to thechain rule, this derivative holds: The following list gives exact values of theHyperbolic Lemniscate Sine. Recall thatσ=12B(14,14),{\displaystyle \sigma ={\tfrac {1}{2}}\mathrm {B} {\bigl (}{\tfrac {1}{4}},{\tfrac {1}{4}}{\bigr )},}whereas12B(12,12)=π2,{\displaystyle {\tfrac {1}{2}}\mathrm {B} {\bigl (}{\tfrac {1}{2}},{\tfrac {1}{2}}{\bigr )}={\tfrac {\pi }{2}},}so the values below such asslh(ϖ22)=slh(σ4)=1{\displaystyle {\operatorname {slh} }{\bigl (}{\tfrac {\varpi }{2{\sqrt {2}}}}{\bigr )}={\operatorname {slh} }{\bigl (}{\tfrac {\sigma }{4}}{\bigr )}=1}are analogous to the trigonometricsin(π2)=1{\displaystyle {\sin }{\bigl (}{\tfrac {\pi }{2}}{\bigr )}=1}. The following table shows the most important values of theHyperbolic Lemniscate Tangent and Cotangentfunctions: Consider thehyperbolic lemniscate tangent(tlh{\displaystyle \operatorname {tlh} }) and thehyperbolic lemniscate cotangent(ctlh{\displaystyle \operatorname {ctlh} }).
Recall thehyperbolic lemniscate area functionsfrom the section on inverse functions. Then the following identities can be established; hence the fourth powers oftlh{\displaystyle \operatorname {tlh} }andctlh{\displaystyle \operatorname {ctlh} }of these arguments sum to one, a fourth-power version of thePythagorean theorem. The bisection theorem of the hyperbolic lemniscate sine reads as follows: This formula can be obtained as a combination of the following two formulas: In addition, the following formulas are valid for all real valuesx∈R{\displaystyle x\in \mathbb {R} }: These identities follow from the last-mentioned formula: Hence, their fourth powers again sum to one. The following formulas for the lemniscatic sine and lemniscatic cosine are closely related: Analogous to the evaluation of the improper integral of theGaussian bell curve function, the coordinate transformation of ageneral cylindercan be used to calculate the integral of the functionf(x)=exp⁡(−x4){\displaystyle f(x)=\exp(-x^{4})}from 0 to positive infinity with respect to x. In the following, the proofs of both integrals are displayed in parallel. This is thecylindrical coordinate transformationfor the Gaussian bell curve function: And this is the analogous coordinate transformation for the lemniscatic case: In the last line of this elliptically analogous chain of equations, the original Gaussian bell curve appears again, integrated with the square function as the inner substitution according to theChain rule. In both cases, the determinant of theJacobi matrixis multiplied by the original function on the integration domain, and the resulting functions are then integrated with respect to the new parameters.
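As a numerical sanity check (an illustration, not from the article): the integral of exp(−x⁴) over [0, ∞) has the closed form Γ(5/4) = Γ(1/4)/4, and the lemniscate constant satisfies ϖ = Γ(1/4)²/(2√(2π)). Both can be confirmed with a crude quadrature and the standard library gamma function:

```python
import math

# Trapezoidal approximation of the integral of exp(-x^4) over [0, 6];
# the integrand decays so fast that the tail beyond x = 6 is negligible.
n, upper = 200_000, 6.0
h = upper / n
total = 0.5 * (1.0 + math.exp(-upper ** 4))
total += sum(math.exp(-(i * h) ** 4) for i in range(1, n))
integral = total * h

# Known closed form: integral = Gamma(5/4) = Gamma(1/4) / 4
assert abs(integral - math.gamma(1.25)) < 1e-8

# Lemniscate constant from the gamma function: varpi = Gamma(1/4)^2 / (2*sqrt(2*pi))
varpi = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))
print(round(integral, 8), round(varpi, 8))  # 0.90640248 2.62205755
```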
Inalgebraic number theory, every finiteabelian extensionof theGaussian rationalsQ(i){\displaystyle \mathbb {Q} (i)}is asubfieldofQ(i,ωn){\displaystyle \mathbb {Q} (i,\omega _{n})}for some positive integern{\displaystyle n}.[23][76]This is analogous to theKronecker–Weber theoremfor the rational numbersQ{\displaystyle \mathbb {Q} }which is based on division of the circle – in particular, every finite abelian extension ofQ{\displaystyle \mathbb {Q} }is a subfield ofQ(ζn){\displaystyle \mathbb {Q} (\zeta _{n})}for some positive integern{\displaystyle n}. Both are special cases of Kronecker's Jugendtraum, which becameHilbert's twelfth problem. ThefieldQ(i,sl⁡(ϖ/n)){\displaystyle \mathbb {Q} (i,\operatorname {sl} (\varpi /n))}(for positive oddn{\displaystyle n}) is the extension ofQ(i){\displaystyle \mathbb {Q} (i)}generated by thex{\displaystyle x}- andy{\displaystyle y}-coordinates of the(1+i)n{\displaystyle (1+i)n}-torsion pointson theelliptic curvey2=4x3+x{\displaystyle y^{2}=4x^{3}+x}.[76] TheBernoulli numbersBn{\displaystyle \mathrm {B} _{n}}can be defined by and appear in whereζ{\displaystyle \zeta }is theRiemann zeta function. TheHurwitz numbersHn,{\displaystyle \mathrm {H} _{n},}named afterAdolf Hurwitz, are the "lemniscate analogs" of the Bernoulli numbers. They can be defined by[77][78] whereζ(⋅;1/4,0){\displaystyle \zeta (\cdot ;1/4,0)}is theWeierstrass zeta functionwith lattice invariants1/4{\displaystyle 1/4}and0{\displaystyle 0}. 
They appear in whereZ[i]{\displaystyle \mathbb {Z} [i]}are theGaussian integersandG4n{\displaystyle G_{4n}}are theEisenstein seriesof weight4n{\displaystyle 4n}, and in The Hurwitz numbers can also be determined as follows:H4=1/10{\displaystyle \mathrm {H} _{4}=1/10}, andHn=0{\displaystyle \mathrm {H} _{n}=0}ifn{\displaystyle n}is not a multiple of4{\displaystyle 4}.[79]This yields[77] Also[80] wherep∈P{\displaystyle p\in \mathbb {P} }such thatp≢3(mod4),{\displaystyle p\not \equiv 3\,({\text{mod}}\,4),}just as wherep∈P{\displaystyle p\in \mathbb {P} }(by thevon Staudt–Clausen theorem). In fact, the von Staudt–Clausen theorem determines thefractional partof the Bernoulli numbers: (sequenceA000146in theOEIS) wherep{\displaystyle p}is any prime, and an analogous theorem holds for the Hurwitz numbers: suppose thata∈Z{\displaystyle a\in \mathbb {Z} }is odd,b∈Z{\displaystyle b\in \mathbb {Z} }is even,p{\displaystyle p}is a prime such thatp≡1(mod4){\displaystyle p\equiv 1\,(\mathrm {mod} \,4)},p=a2+b2{\displaystyle p=a^{2}+b^{2}}(seeFermat's theorem on sums of two squares) anda≡b+1(mod4){\displaystyle a\equiv b+1\,(\mathrm {mod} \,4)}. Then for any givenp{\displaystyle p},2a=ν(p){\displaystyle 2a=\nu (p)}is uniquely determined; equivalentlyν(p)=p−Np{\displaystyle \nu (p)=p-{\mathcal {N}}_{p}}whereNp{\displaystyle {\mathcal {N}}_{p}}is the number of solutions of the congruenceX3−X≡Y2(mod⁡p){\displaystyle X^{3}-X\equiv Y^{2}\,(\operatorname {mod} p)}in variablesX,Y{\displaystyle X,Y}that are non-negative integers.[81]The Hurwitz theorem then determines the fractional part of the Hurwitz numbers:[77] The sequence of the integersGn{\displaystyle \mathrm {G} _{n}}starts with0,−1,5,253,….{\displaystyle 0,-1,5,253,\ldots .}[77] Letn≥2{\displaystyle n\geq 2}. If4n+1{\displaystyle 4n+1}is a prime, thenGn≡1(mod4){\displaystyle \mathrm {G} _{n}\equiv 1\,(\mathrm {mod} \,4)}. 
If4n+1{\displaystyle 4n+1}is not a prime, thenGn≡3(mod4){\displaystyle \mathrm {G} _{n}\equiv 3\,(\mathrm {mod} \,4)}.[82] Some authors instead define the Hurwitz numbers asHn′=H4n{\displaystyle \mathrm {H} _{n}'=\mathrm {H} _{4n}}. The Hurwitz numbers appear in severalLaurent seriesexpansions related to the lemniscate functions:[83] Analogously, in terms of the Bernoulli numbers: Letp{\displaystyle p}be a prime such thatp≡1(mod4){\displaystyle p\equiv 1\,({\text{mod}}\,4)}. Aquartic residue(modp{\displaystyle p}) is any number congruent to the fourth power of an integer. Define(ap)4{\displaystyle \left({\tfrac {a}{p}}\right)_{4}}to be1{\displaystyle 1}ifa{\displaystyle a}is a quartic residue (modp{\displaystyle p}) and define it to be−1{\displaystyle -1}ifa{\displaystyle a}is not a quartic residue (modp{\displaystyle p}). Ifa{\displaystyle a}andp{\displaystyle p}are coprime, then there exist numbersp′∈Z[i]{\displaystyle p'\in \mathbb {Z} [i]}(see[84]for these numbers) such that[85] This theorem is analogous to where(⋅⋅){\displaystyle \left({\tfrac {\cdot }{\cdot }}\right)}is theLegendre symbol. ThePeirce quincuncial projection, designed byCharles Sanders Peirceof theUS Coast Surveyin the 1870s, is a worldmap projectionbased on the inverse lemniscate sine ofstereographically projectedpoints (treated as complex numbers).[86] When lines of constant real or imaginary part are projected onto the complex plane via the hyperbolic lemniscate sine, and thence stereographically projected onto the sphere (seeRiemann sphere), the resulting curves arespherical conics, the spherical analog of planarellipsesandhyperbolas.[87]Thus the lemniscate functions (and more generally, theJacobi elliptic functions) provide a parametrization for spherical conics. 
A conformal map projection from the globe onto the 6 square faces of acubecan also be defined using the lemniscate functions.[88]Because manypartial differential equationscan be effectively solved by conformal mapping, this map from sphere to cube is convenient foratmospheric modeling.[89]
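The von Staudt–Clausen theorem cited earlier determines the fractional part of the Bernoulli numbers: adding 1/p for every prime p with (p − 1) dividing n yields an integer. As an illustrative sketch (not part of the article, and covering only the Bernoulli case, not the Hurwitz analogue), this can be verified with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> Fraction:
    """Bernoulli number B_n via the recurrence sum_{k=0}^{m} C(m+1,k) B_k = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B[n]

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

# von Staudt-Clausen: B_n + sum of 1/p over primes p with (p-1) | n
# is an integer for every even n >= 2.
for n in range(2, 21, 2):
    correction = sum(Fraction(1, p) for p in primes_up_to(n + 1) if n % (p - 1) == 0)
    assert (bernoulli(n) + correction).denominator == 1

print(bernoulli(4))  # -1/30
```

For example, B₄ = −1/30 and the primes with (p − 1) | 4 are 2, 3, 5, giving −1/30 + 1/2 + 1/3 + 1/5 = 1, an integer.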
https://en.wikipedia.org/wiki/Lemniscate_elliptic_functions
Intrigonometry, thelaw of sines(sometimes called thesine formulaorsine rule) is a mathematicalequationrelating thelengthsof the sides of anytriangleto thesinesof itsangles. According to the law,asin⁡α=bsin⁡β=csin⁡γ=2R,{\displaystyle {\frac {a}{\sin {\alpha }}}\,=\,{\frac {b}{\sin {\beta }}}\,=\,{\frac {c}{\sin {\gamma }}}\,=\,2R,}wherea,b, andcare the lengths of the sides of a triangle, andα,β, andγare the opposite angles (see figure 2), whileRis theradiusof the triangle'scircumcircle. When the last part of the equation is not used, the law is sometimes stated using thereciprocals;sin⁡αa=sin⁡βb=sin⁡γc.{\displaystyle {\frac {\sin {\alpha }}{a}}\,=\,{\frac {\sin {\beta }}{b}}\,=\,{\frac {\sin {\gamma }}{c}}.}The law of sines can be used to compute the remaining sides of a triangle when two angles and a side are known—a technique known astriangulation. It can also be used when two sides and one of the non-enclosed angles are known. In some such cases, the triangle is not uniquely determined by this data (called theambiguous case) and the technique gives two possible values for the enclosed angle. The law of sines is one of two trigonometric equations commonly applied to find lengths and angles inscalene triangles, with the other being thelaw of cosines. The law of sines can be generalized to higher dimensions on surfaces with constant curvature.[1] With the side of lengthaas the base, the triangle'saltitudecan be computed asbsinγor ascsinβ. Equating these two expressions givesbsin⁡β=csin⁡γ,{\displaystyle {\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}\,,}and similar equations arise by choosing the side of lengthbor the side of lengthcas the base of the triangle. When using the law of sines to find a side of a triangle, an ambiguous case occurs when two separate triangles can be constructed from the data provided (i.e., there are two different possible solutions to the triangle). In the case shown below they are trianglesABCandABC′. 
Given a general triangle, the following conditions would need to be fulfilled for the case to be ambiguous: If all the above conditions are true, then each of anglesβandβ′produces a valid triangle, meaning that both of the following are true:γ′=arcsin⁡csin⁡αaorγ=π−arcsin⁡csin⁡αa.{\displaystyle {\gamma }'=\arcsin {\frac {c\sin {\alpha }}{a}}\quad {\text{or}}\quad {\gamma }=\pi -\arcsin {\frac {c\sin {\alpha }}{a}}.} From there we can find the correspondingβandborβ′andb′if required, wherebis the side bounded by verticesAandCandb′is bounded byAandC′. The following are examples of how to solve a problem using the law of sines. Given: sidea= 20, sidec= 24, and angleγ= 40°. Angleαis desired. Using the law of sines, we conclude thatsin⁡α20=sin⁡(40∘)24.{\displaystyle {\frac {\sin \alpha }{20}}={\frac {\sin(40^{\circ })}{24}}.}α=arcsin⁡(20sin⁡(40∘)24)≈32.39∘.{\displaystyle \alpha =\arcsin \left({\frac {20\sin(40^{\circ })}{24}}\right)\approx 32.39^{\circ }.} Note that the potential solutionα= 147.61°is excluded because that would necessarily giveα+β+γ> 180°. If the lengths of two sides of the triangleaandbare equal tox, the third side has lengthc, and the angles opposite the sides of lengthsa,b, andcareα,β, andγrespectively thenα=β=180∘−γ2=90∘−γ2sin⁡α=sin⁡β=sin⁡(90∘−γ2)=cos⁡(γ2)csin⁡γ=asin⁡α=xcos⁡(γ2)ccos⁡(γ2)sin⁡γ=x{\displaystyle {\begin{aligned}&\alpha =\beta ={\frac {180^{\circ }-\gamma }{2}}=90^{\circ }-{\frac {\gamma }{2}}\\[6pt]&\sin \alpha =\sin \beta =\sin \left(90^{\circ }-{\frac {\gamma }{2}}\right)=\cos \left({\frac {\gamma }{2}}\right)\\[6pt]&{\frac {c}{\sin \gamma }}={\frac {a}{\sin \alpha }}={\frac {x}{\cos \left({\frac {\gamma }{2}}\right)}}\\[6pt]&{\frac {c\cos \left({\frac {\gamma }{2}}\right)}{\sin \gamma }}=x\end{aligned}}} In the identityasin⁡α=bsin⁡β=csin⁡γ,{\displaystyle {\frac {a}{\sin {\alpha }}}={\frac {b}{\sin {\beta }}}={\frac {c}{\sin {\gamma }}},}the common value of the three fractions is actually thediameterof the triangle'scircumcircle. 
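The worked example above (side a = 20, side c = 24, angle γ = 40°) can be reproduced in a few lines; this sketch also checks the supplementary candidate that the text rejects:

```python
import math

a, c, gamma = 20.0, 24.0, math.radians(40.0)

# Law of sines: sin(alpha)/a = sin(gamma)/c
alpha = math.asin(a * math.sin(gamma) / c)
print(round(math.degrees(alpha), 2))          # 32.39

# The supplementary solution alpha' = 180 - 32.39 = 147.61 degrees is
# excluded: together with gamma it already exceeds 180 degrees.
alpha_supp = math.pi - alpha
print(math.degrees(alpha_supp + gamma) > 180)  # True
```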
This result dates back toPtolemy.[2][3] As shown in the figure, let there be a circle with inscribed△ABC{\displaystyle \triangle ABC}and another inscribed△ADB{\displaystyle \triangle ADB}that passes through the circle's centerO. The∠AOD{\displaystyle \angle AOD}has acentral angleof180∘{\displaystyle 180^{\circ }}and thus∠ABD=90∘{\displaystyle \angle ABD=90^{\circ }},byThales's theorem. Since△ABD{\displaystyle \triangle ABD}is a right triangle,sin⁡δ=oppositehypotenuse=c2R,{\displaystyle \sin {\delta }={\frac {\text{opposite}}{\text{hypotenuse}}}={\frac {c}{2R}},}whereR=d2{\textstyle R={\frac {d}{2}}}is the radius of the circumscribing circle of the triangle.[3]Anglesγ{\displaystyle {\gamma }}andδ{\displaystyle {\delta }}lie on the same circle andsubtendthe samechordc; thus, by theinscribed angle theorem,γ=δ{\displaystyle {\gamma }={\delta }}.Therefore,sin⁡δ=sin⁡γ=c2R.{\displaystyle \sin {\delta }=\sin {\gamma }={\frac {c}{2R}}.} Rearranging yields2R=csin⁡γ.{\displaystyle 2R={\frac {c}{\sin {\gamma }}}.} Repeating the process of creating△ADB{\displaystyle \triangle ADB}with other points gives asin⁡α=bsin⁡β=csin⁡γ=2R.{\displaystyle {\frac {a}{\sin {\alpha }}}={\frac {b}{\sin {\beta }}}={\frac {c}{\sin {\gamma }}}=2R.} The area of a triangle is given byT=12absin⁡θ{\textstyle T={\frac {1}{2}}ab\sin \theta },whereθ{\displaystyle \theta }is the angle enclosed by the sides of lengthsaandb. 
Substituting the sine law into this equation givesT=12ab⋅c2R.{\displaystyle T={\frac {1}{2}}ab\cdot {\frac {c}{2R}}.} TakingR{\displaystyle R}as the circumscribing radius,[4] T=abc4R.{\displaystyle T={\frac {abc}{4R}}.} It can also be shown that this equality impliesabc2T=abc2s(s−a)(s−b)(s−c)=2abc(a2+b2+c2)2−2(a4+b4+c4),{\displaystyle {\begin{aligned}{\frac {abc}{2T}}&={\frac {abc}{2{\sqrt {s(s-a)(s-b)(s-c)}}}}\\[6pt]&={\frac {2abc}{\sqrt {{(a^{2}+b^{2}+c^{2})}^{2}-2(a^{4}+b^{4}+c^{4})}}},\end{aligned}}}whereTis the area of the triangle andsis thesemiperimeters=12(a+b+c).{\textstyle s={\frac {1}{2}}\left(a+b+c\right).} The second equality above readily simplifies toHeron's formulafor the area. The sine rule can also be used in deriving the following formula for the triangle's area: denoting the semi-sum of the angles' sines asS=12(sin⁡A+sin⁡B+sin⁡C){\textstyle S={\frac {1}{2}}\left(\sin A+\sin B+\sin C\right)},we have[5] T=4R2S(S−sin⁡A)(S−sin⁡B)(S−sin⁡C){\displaystyle T=4R^{2}{\sqrt {S\left(S-\sin A\right)\left(S-\sin B\right)\left(S-\sin C\right)}}} whereR{\displaystyle R}is the radius of the circumcircle:2R=asin⁡A=bsin⁡B=csin⁡C{\displaystyle 2R={\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}}. The spherical law of sines deals with triangles on a sphere, whose sides are arcs ofgreat circles. Suppose the radius of the sphere is 1. Leta,b, andcbe the lengths of the great-arcs that are the sides of the triangle. Because it is a unit sphere,a,b, andcare the angles at the center of the sphere subtended by those arcs, in radians. LetA,B, andCbe the angles opposite those respective sides. These aredihedral anglesbetween the planes of the three great circles. Then the spherical law of sines says:sin⁡Asin⁡a=sin⁡Bsin⁡b=sin⁡Csin⁡c.{\displaystyle {\frac {\sin A}{\sin a}}={\frac {\sin B}{\sin b}}={\frac {\sin C}{\sin c}}.} Consider a unit sphere with three unit vectorsOA,OBandOCdrawn from the origin to the vertices of the triangle. 
Thus the anglesα,β, andγare the anglesa,b, andc, respectively. The arcBCsubtends an angle of magnitudeaat the centre. Introduce a Cartesian basis withOAalong thez-axis andOBin thexz-plane making an anglecwith thez-axis. The vectorOCprojects toONin thexy-plane and the angle betweenONand thex-axis isA. Therefore, the three vectors have components:OA=(001),OB=(sin⁡c0cos⁡c),OC=(sin⁡bcos⁡Asin⁡bsin⁡Acos⁡b).{\displaystyle \mathbf {OA} ={\begin{pmatrix}0\\0\\1\end{pmatrix}},\quad \mathbf {OB} ={\begin{pmatrix}\sin c\\0\\\cos c\end{pmatrix}},\quad \mathbf {OC} ={\begin{pmatrix}\sin b\cos A\\\sin b\sin A\\\cos b\end{pmatrix}}.} Thescalar triple product,OA⋅ (OB×OC)is the volume of theparallelepipedformed by the position vectors of the vertices of the spherical triangleOA,OBandOC. This volume is invariant to the specific coordinate system used to representOA,OBandOC. The value of thescalar triple productOA⋅ (OB×OC)is the3 × 3determinant withOA,OBandOCas its rows. With thez-axis alongOAthe square of this determinant is(OA⋅(OB×OC))2=(det(OAOBOC))2=|001sin⁡c0cos⁡csin⁡bcos⁡Asin⁡bsin⁡Acos⁡b|2=(sin⁡bsin⁡csin⁡A)2.{\displaystyle {\begin{aligned}{\bigl (}\mathbf {OA} \cdot (\mathbf {OB} \times \mathbf {OC} ){\bigr )}^{2}&=\left(\det {\begin{pmatrix}\mathbf {OA} &\mathbf {OB} &\mathbf {OC} \end{pmatrix}}\right)^{2}\\[4pt]&={\begin{vmatrix}0&0&1\\\sin c&0&\cos c\\\sin b\cos A&\sin b\sin A&\cos b\end{vmatrix}}^{2}=\left(\sin b\sin c\sin A\right)^{2}.\end{aligned}}}Repeating this calculation with thez-axis alongOBgives(sincsinasinB)2, while with thez-axis alongOCit is(sinasinbsinC)2. 
Equating these expressions and dividing throughout by(sinasinbsinc)2givessin2⁡Asin2⁡a=sin2⁡Bsin2⁡b=sin2⁡Csin2⁡c=V2sin2⁡(a)sin2⁡(b)sin2⁡(c),{\displaystyle {\frac {\sin ^{2}A}{\sin ^{2}a}}={\frac {\sin ^{2}B}{\sin ^{2}b}}={\frac {\sin ^{2}C}{\sin ^{2}c}}={\frac {V^{2}}{\sin ^{2}(a)\sin ^{2}(b)\sin ^{2}(c)}},}whereVis the volume of theparallelepipedformed by the position vector of the vertices of the spherical triangle. Consequently, the result follows. It is easy to see how for small spherical triangles, when the radius of the sphere is much greater than the sides of the triangle, this formula becomes the planar formula at the limit, sincelima→0sin⁡aa=1{\displaystyle \lim _{a\to 0}{\frac {\sin a}{a}}=1}and the same forsinbandsinc. Consider a unit sphere with:OA=OB=OC=1{\displaystyle OA=OB=OC=1} Construct pointD{\displaystyle D}and pointE{\displaystyle E}such that∠ADO=∠AEO=90∘{\displaystyle \angle ADO=\angle AEO=90^{\circ }} Construct pointA′{\displaystyle A'}such that∠A′DO=∠A′EO=90∘{\displaystyle \angle A'DO=\angle A'EO=90^{\circ }} It can therefore be seen that∠ADA′=B{\displaystyle \angle ADA'=B}and∠AEA′=C{\displaystyle \angle AEA'=C} Notice thatA′{\displaystyle A'}is the projection ofA{\displaystyle A}on planeOBC{\displaystyle OBC}. Therefore∠AA′D=∠AA′E=90∘{\displaystyle \angle AA'D=\angle AA'E=90^{\circ }} By basic trigonometry, we have:AD=sin⁡cAE=sin⁡b{\displaystyle {\begin{aligned}AD&=\sin c\\AE&=\sin b\end{aligned}}} ButAA′=ADsin⁡B=AEsin⁡C{\displaystyle AA'=AD\sin B=AE\sin C} Combining them we have:sin⁡csin⁡B=sin⁡bsin⁡C⇒sin⁡Bsin⁡b=sin⁡Csin⁡c{\displaystyle {\begin{aligned}\sin c\sin B&=\sin b\sin C\\\Rightarrow {\frac {\sin B}{\sin b}}&={\frac {\sin C}{\sin c}}\end{aligned}}} By applying similar reasoning, we obtain the spherical law of sines:sin⁡Asin⁡a=sin⁡Bsin⁡b=sin⁡Csin⁡c{\displaystyle {\frac {\sin A}{\sin a}}={\frac {\sin B}{\sin b}}={\frac {\sin C}{\sin c}}} A purely algebraic proof can be constructed from thespherical law of cosines. 
From the identitysin2⁡A=1−cos2⁡A{\displaystyle \sin ^{2}A=1-\cos ^{2}A}and the explicit expression forcos⁡A{\displaystyle \cos A}from the spherical law of cosinessin2A=1−(cos⁡a−cos⁡bcos⁡csin⁡bsin⁡c)2=(1−cos2b)(1−cos2c)−(cos⁡a−cos⁡bcos⁡c)2sin2bsin2csin⁡Asin⁡a=[1−cos2a−cos2b−cos2c+2cos⁡acos⁡bcos⁡c]1/2sin⁡asin⁡bsin⁡c.{\displaystyle {\begin{aligned}\sin ^{2}\!A&=1-\left({\frac {\cos a-\cos b\,\cos c}{\sin b\,\sin c}}\right)^{2}\\&={\frac {\left(1-\cos ^{2}\!b\right)\left(1-\cos ^{2}\!c\right)-\left(\cos a-\cos b\,\cos c\right)^{2}}{\sin ^{2}\!b\,\sin ^{2}\!c}}\\[8pt]{\frac {\sin A}{\sin a}}&={\frac {\left[1-\cos ^{2}\!a-\cos ^{2}\!b-\cos ^{2}\!c+2\cos a\cos b\cos c\right]^{1/2}}{\sin a\sin b\sin c}}.\end{aligned}}}Since the right-hand side is invariant under a cyclic permutation ofa,b,c{\displaystyle a,\;b,\;c}, the spherical sine rule follows immediately. The figure used in the geometric proof above is also provided in Banerjee[6](see Figure 3 in that paper), which derives the sine law using elementary linear algebra and projection matrices. Inhyperbolic geometry, when the curvature is −1, the law of sines becomessin⁡Asinh⁡a=sin⁡Bsinh⁡b=sin⁡Csinh⁡c.{\displaystyle {\frac {\sin A}{\sinh a}}={\frac {\sin B}{\sinh b}}={\frac {\sin C}{\sinh c}}\,.} In the special case whenBis a right angle, one getssin⁡C=sinh⁡csinh⁡b{\displaystyle \sin C={\frac {\sinh c}{\sinh b}}} which is the analog of the formula in Euclidean geometry expressing the sine of an angle as the opposite side divided by the hypotenuse.
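A quick numerical check of the spherical law of sines (a sketch, not from the article): pick three unit vectors as vertices, take the sides as the great-circle arcs between them, recover the vertex angles from the spherical law of cosines, and compare the three ratios:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def arc(u, v):
    """Great-circle arc (central angle) between two unit vectors."""
    return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))

# three arbitrary, non-degenerate vertices on the unit sphere
A_, B_, C_ = map(normalize, [(1, 0.2, 0.1), (0.3, 1, 0.2), (0.1, 0.4, 1)])
a, b, c = arc(B_, C_), arc(A_, C_), arc(A_, B_)  # sides (arcs), opposite A, B, C

def angle(opp, s1, s2):
    """Vertex angle from the spherical law of cosines."""
    return math.acos((math.cos(opp) - math.cos(s1) * math.cos(s2))
                     / (math.sin(s1) * math.sin(s2)))

A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)

r1 = math.sin(A) / math.sin(a)
r2 = math.sin(B) / math.sin(b)
r3 = math.sin(C) / math.sin(c)
assert abs(r1 - r2) < 1e-9 and abs(r1 - r3) < 1e-9
print(r1)  # the common ratio sin(A)/sin(a)
```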
Define a generalized sine function, depending also on a real parameterκ{\displaystyle \kappa }:sinκ⁡(x)=x−κ3!x3+κ25!x5−κ37!x7+⋯=∑n=0∞(−1)nκn(2n+1)!x2n+1.{\displaystyle \sin _{\kappa }(x)=x-{\frac {\kappa }{3!}}x^{3}+{\frac {\kappa ^{2}}{5!}}x^{5}-{\frac {\kappa ^{3}}{7!}}x^{7}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}\kappa ^{n}}{(2n+1)!}}x^{2n+1}.} The law of sines in constant curvatureκ{\displaystyle \kappa }reads as[1]sin⁡Asinκ⁡a=sin⁡Bsinκ⁡b=sin⁡Csinκ⁡c.{\displaystyle {\frac {\sin A}{\sin _{\kappa }a}}={\frac {\sin B}{\sin _{\kappa }b}}={\frac {\sin C}{\sin _{\kappa }c}}\,.} By substitutingκ=0{\displaystyle \kappa =0},κ=1{\displaystyle \kappa =1}, andκ=−1{\displaystyle \kappa =-1}, one obtains respectivelysin0⁡(x)=x{\displaystyle \sin _{0}(x)=x},sin1⁡(x)=sin⁡x{\displaystyle \sin _{1}(x)=\sin x}, andsin−1⁡(x)=sinh⁡x{\displaystyle \sin _{-1}(x)=\sinh x}, that is, the Euclidean, spherical, and hyperbolic cases of the law of sines described above.[1] Letpκ(r){\displaystyle p_{\kappa }(r)}indicate the circumference of a circle of radiusr{\displaystyle r}in a space of constant curvatureκ{\displaystyle \kappa }. Thenpκ(r)=2πsinκ⁡(r){\displaystyle p_{\kappa }(r)=2\pi \sin _{\kappa }(r)}. Therefore, the law of sines can also be expressed as:sin⁡Apκ(a)=sin⁡Bpκ(b)=sin⁡Cpκ(c).{\displaystyle {\frac {\sin A}{p_{\kappa }(a)}}={\frac {\sin B}{p_{\kappa }(b)}}={\frac {\sin C}{p_{\kappa }(c)}}\,.} This formulation was discovered byJános Bolyai.[7] Atetrahedronhas four triangularfacets. 
Theabsolute valueof thepolar sine(psin) of thenormal vectorsto the three facets that share avertexof the tetrahedron, divided by the area of the fourth facet will not depend upon the choice of the vertex:[8] |psin⁡(b,c,d)|Areaa=|psin⁡(a,c,d)|Areab=|psin⁡(a,b,d)|Areac=|psin⁡(a,b,c)|Aread=(3Volumetetrahedron)22AreaaAreabAreacAread.{\displaystyle {\begin{aligned}&{\frac {\left|\operatorname {psin} (\mathbf {b} ,\mathbf {c} ,\mathbf {d} )\right|}{\mathrm {Area} _{a}}}={\frac {\left|\operatorname {psin} (\mathbf {a} ,\mathbf {c} ,\mathbf {d} )\right|}{\mathrm {Area} _{b}}}={\frac {\left|\operatorname {psin} (\mathbf {a} ,\mathbf {b} ,\mathbf {d} )\right|}{\mathrm {Area} _{c}}}={\frac {\left|\operatorname {psin} (\mathbf {a} ,\mathbf {b} ,\mathbf {c} )\right|}{\mathrm {Area} _{d}}}\\[4pt]={}&{\frac {(3~\mathrm {Volume} _{\mathrm {tetrahedron} })^{2}}{2~\mathrm {Area} _{a}\mathrm {Area} _{b}\mathrm {Area} _{c}\mathrm {Area} _{d}}}\,.\end{aligned}}} More generally, for ann-dimensionalsimplex(i.e.,triangle(n= 2),tetrahedron(n= 3),pentatope(n= 4), etc.) inn-dimensionalEuclidean space, the absolute value of the polar sine of the normal vectors of the facets that meet at a vertex, divided by the hyperarea of the facet opposite the vertex is independent of the choice of the vertex. 
WritingVfor the hypervolume of then-dimensional simplex andPfor the product of the hyperareas of its(n− 1)-dimensional facets, the common ratio is|psin⁡(b,…,z)|Areaa=⋯=|psin⁡(a,…,y)|Areaz=(nV)n−1(n−1)!P.{\displaystyle {\frac {\left|\operatorname {psin} (\mathbf {b} ,\ldots ,\mathbf {z} )\right|}{\mathrm {Area} _{a}}}=\cdots ={\frac {\left|\operatorname {psin} (\mathbf {a} ,\ldots ,\mathbf {y} )\right|}{\mathrm {Area} _{z}}}={\frac {(nV)^{n-1}}{(n-1)!P}}.} Note that when the vectorsv1, ...,vn, from a selected vertex to each of the other vertices, are the columns of a matrixVthen the columns of the matrixN=−V(VTV)−1detVTV/(n−1)!{\displaystyle N=-V(V^{T}V)^{-1}{\sqrt {\det {V^{T}V}}}/(n-1)!}are outward-facing normal vectors of those facets that meet at the selected vertex. This formula also works when the vectors are in am-dimensional space havingm>n. In them=ncase thatVis square, the formula simplifies toN=−(VT)−1|detV|/(n−1)!.{\displaystyle N=-(V^{T})^{-1}|\det {V}|/(n-1)!\,.} An equivalent of the law of sines, that the sides of a triangle are proportional to thechordsof double the opposite angles, was known to the 2nd century Hellenistic astronomerPtolemyand used occasionally in hisAlmagest.[9] Statements related to the law of sines appear in the astronomical and trigonometric work of 7th century Indian mathematicianBrahmagupta. 
In hisBrāhmasphuṭasiddhānta, Brahmagupta expresses the circumradius of a triangle as the product of two sides divided by twice thealtitude; the law of sines can be derived by alternately expressing the altitude as the sine of one or the other base angle times its opposite side, then equating the two resulting variants.[10]An equation even closer to the modern law of sines appears in Brahmagupta'sKhaṇḍakhādyaka, in a method for finding the distance between the Earth and a planet following anepicycle; however, Brahmagupta never treated the law of sines as an independent subject or used it systematically for solving triangles.[11] The spherical law of sines is sometimes credited to 10th century scholarsAbu-Mahmud KhujandiorAbū al-Wafāʾ(it appears in hisAlmagest), but it is given prominence inAbū Naṣr Manṣūr'sTreatise on the Determination of Spherical Arcs, and was credited to Abū Naṣr Manṣūr by his studental-Bīrūnīin hisKeys to Astronomy.[12]Ibn Muʿādh al-Jayyānī's 11th-centuryBook of Unknown Arcs of a Spherealso contains the spherical law of sines.[13] The 13th-century Persian mathematicianNaṣīr al-Dīn al-Ṭūsīstated and proved the planar law of sines:[14] In any plane triangle, the ratio of the sides is equal to the ratio of the sines of the angles opposite to those sides. That is, in triangle ABC, we have AB : AC = Sin(∠ACB) : Sin(∠ABC) By employing the law of sines, al-Tusi could solve triangles where either two angles and a side were known or two sides and an angle opposite one of them were given. For triangles with two sides and the included angle, he divided them into right triangles that he could then solve. When three sides were given, he dropped a perpendicular line and then used Proposition II-13 of Euclid'sElements(a geometric version of thelaw of cosines). 
Al-Tusi established the important result that if the sum or difference of two arcs is provided along with the ratio of their sines, then the arcs can be calculated.[15] According toGlen Van Brummelen, "The Law of Sines is reallyRegiomontanus's foundation for his solutions of right-angled triangles in Book IV, and these solutions are in turn the bases for his solutions of general triangles."[16]Regiomontanus was a 15th-century German mathematician.
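The generalized sine sin_κ used in the constant-curvature form of the law of sines above can be sketched directly from its power series; κ = 0, 1, −1 reproduce x, sin x, and sinh x respectively:

```python
import math

def sin_kappa(x: float, kappa: float, terms: int = 40) -> float:
    """Generalized sine: sum over n >= 0 of (-1)^n kappa^n x^(2n+1) / (2n+1)!"""
    total, term = 0.0, x  # term holds the n-th summand, starting at n = 0
    for n in range(terms):
        total += term
        # ratio between consecutive series terms
        term *= -kappa * x * x / ((2 * n + 2) * (2 * n + 3))
    return total

x = 0.8
assert abs(sin_kappa(x, 0) - x) < 1e-12             # Euclidean case
assert abs(sin_kappa(x, 1) - math.sin(x)) < 1e-12   # spherical case
assert abs(sin_kappa(x, -1) - math.sinh(x)) < 1e-12 # hyperbolic case
print("ok")
```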
https://en.wikipedia.org/wiki/Law_of_sines
This is a list of some well-knownperiodic functions. The constant functionf(x) =c, wherecis independent ofx, is periodic with any period, but lacks afundamental period. A definition is given for some of the following functions, though each function may have many equivalent definitions. All trigonometric functions listed have period2π{\displaystyle 2\pi }, unless otherwise stated. The following functions have periodp{\displaystyle p}and takex{\displaystyle x}as their argument. The symbol⌊n⌋{\displaystyle \lfloor n\rfloor }is thefloor functionofn{\displaystyle n}, andsgn{\displaystyle \operatorname {sgn} }is thesign function. K denotes the completeElliptic integralK(m).H{\displaystyle H}is theHeaviside step function, and t is how long the pulse stays at 1. The functionf(x)=x−sin⁡(x){\displaystyle f(x)=x-\sin(x)}appears together with its real-valued inversef(−1)(x){\displaystyle f^{(-1)}(x)}.Jn⁡(x){\displaystyle \operatorname {J} _{n}(x)}is theBessel functionof the first kind.
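The floor- and sign-based constructions mentioned in the notes can be illustrated with a few standard period-p waveforms (a sketch; the exact amplitude and phase conventions vary by source):

```python
import math

def sawtooth(x: float, p: float = 1.0) -> float:
    """Sawtooth rising from 0 to 1 over each period: x/p - floor(x/p)."""
    return x / p - math.floor(x / p)

def square(x: float, p: float = 2 * math.pi) -> int:
    """Square wave: sign of a sine with period p."""
    s = math.sin(2 * math.pi * x / p)
    return (s > 0) - (s < 0)  # sgn(s)

def triangle(x: float, p: float = 1.0) -> float:
    """Triangle wave between 0 and 1 with period p."""
    t = x / p - math.floor(x / p + 0.5)  # t lies in [-1/2, 1/2)
    return 2 * abs(t)

# periodicity spot checks: f(x + 3p) == f(x) up to rounding
assert abs(sawtooth(0.37) - sawtooth(3.37)) < 1e-12
assert abs(triangle(0.37) - triangle(3.37)) < 1e-12
assert square(0.37) == square(0.37 + 6 * math.pi)

print(sawtooth(2.5), triangle(0.25))  # 0.5 0.5
```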
https://en.wikipedia.org/wiki/List_of_periodic_functions
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. The basic relationship between the sine and cosine is given by the Pythagorean identity: {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} where {\displaystyle \sin ^{2}\theta } means {\displaystyle {(\sin \theta )}^{2}} and {\displaystyle \cos ^{2}\theta } means {\displaystyle {(\cos \theta )}^{2}.} This can be viewed as a version of the Pythagorean theorem, and follows from the equation {\displaystyle x^{2}+y^{2}=1} for the unit circle.
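Identities like this are easy to sanity-check numerically; a minimal sketch verifying the Pythagorean identity at a few arbitrary angles:

```python
import math

# The Pythagorean identity sin^2(theta) + cos^2(theta) = 1 holds for every
# angle; check it at several arbitrary points.
for theta in (0.0, 0.5, 2.0, -3.1, 100.0):
    assert math.isclose(math.sin(theta) ** 2 + math.cos(theta) ** 2, 1.0)
print("identity verified")
```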
This equation can be solved for either the sine or the cosine: {\displaystyle {\begin{aligned}\sin \theta &=\pm {\sqrt {1-\cos ^{2}\theta }},\\\cos \theta &=\pm {\sqrt {1-\sin ^{2}\theta }}.\end{aligned}}} where the sign depends on the quadrant of θ. Dividing this identity by {\displaystyle \sin ^{2}\theta }, {\displaystyle \cos ^{2}\theta }, or both yields the following identities: {\displaystyle {\begin{aligned}&1+\cot ^{2}\theta =\csc ^{2}\theta \\&1+\tan ^{2}\theta =\sec ^{2}\theta \\&\sec ^{2}\theta +\csc ^{2}\theta =\sec ^{2}\theta \csc ^{2}\theta \end{aligned}}} Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign). By examining the unit circle, one can establish the following properties of the trigonometric functions. When the direction of a Euclidean vector is represented by an angle {\displaystyle \theta ,} this is the angle determined by the free vector (starting at the origin) and the positive {\displaystyle x}-unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive {\displaystyle x}-axis. If a line (vector) with direction {\displaystyle \theta } is reflected about a line with direction {\displaystyle \alpha ,} then the direction angle {\displaystyle \theta ^{\prime }} of this reflected line (vector) has the value {\displaystyle \theta ^{\prime }=2\alpha -\theta .} The values of the trigonometric functions of these angles {\displaystyle \theta ,\;\theta ^{\prime }} for specific angles {\displaystyle \alpha } satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function.
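The reflection rule just stated can be checked numerically. In this sketch (the helper `reflect` is ours), reflecting about a line of direction α = π/4 sends θ to its complement, so sine and cosine swap:

```python
import math

def reflect(theta, alpha):
    # Direction of a line/vector of direction theta reflected about a line
    # of direction alpha: theta' = 2*alpha - theta.
    return 2 * alpha - theta

theta = 0.3
tp = reflect(theta, math.pi / 4)   # tp = pi/2 - theta, the complementary angle
print(math.isclose(math.sin(tp), math.cos(theta)),
      math.isclose(math.cos(tp), math.sin(theta)))
```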
These are also known as reduction formulae.[2] The sign of the trigonometric functions depends on the quadrant of the angle. If {\displaystyle {-\pi }<\theta \leq \pi } and sgn is the sign function, {\displaystyle {\begin{aligned}\operatorname {sgn}(\sin \theta )=\operatorname {sgn}(\csc \theta )&={\begin{cases}+1&{\text{if}}\ \ 0<\theta <\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <0\\0&{\text{if}}\ \ \theta \in \{0,\pi \}\end{cases}}\\[5mu]\operatorname {sgn}(\cos \theta )=\operatorname {sgn}(\sec \theta )&={\begin{cases}+1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },{\tfrac {1}{2}}\pi {\bigr \}}\end{cases}}\\[5mu]\operatorname {sgn}(\tan \theta )=\operatorname {sgn}(\cot \theta )&={\begin{cases}+1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ 0<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <0\ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },0,{\tfrac {1}{2}}\pi ,\pi {\bigr \}}\end{cases}}\end{aligned}}} The trigonometric functions are periodic with common period {\displaystyle 2\pi ,} so for values of θ outside the interval {\displaystyle ({-\pi },\pi ],} they take repeating values (see § Shifts and periodicity above).
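The quadrant sign rules for sine and cosine can be compared against a standard math library; a small sketch with sample angles chosen inside each open quadrant:

```python
import math

def sign(x):
    # Sign function: +1, -1, or 0.
    return (x > 0) - (x < 0)

# Sample angles in quadrants I, II, III, IV of (-pi, pi].
for theta in (0.5, 2.5, -2.5, -0.5):
    expected_sin = 1 if 0 < theta < math.pi else -1
    expected_cos = 1 if -math.pi / 2 < theta < math.pi / 2 else -1
    assert sign(math.sin(theta)) == expected_sin
    assert sign(math.cos(theta)) == expected_cos
print("sign rules hold")
```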
These are also known as the angle addition and subtraction theorems (or formulae). {\displaystyle {\begin{aligned}\sin(\alpha +\beta )&=\sin \alpha \cos \beta +\cos \alpha \sin \beta \\\sin(\alpha -\beta )&=\sin \alpha \cos \beta -\cos \alpha \sin \beta \\\cos(\alpha +\beta )&=\cos \alpha \cos \beta -\sin \alpha \sin \beta \\\cos(\alpha -\beta )&=\cos \alpha \cos \beta +\sin \alpha \sin \beta \end{aligned}}} The angle difference identities for {\displaystyle \sin(\alpha -\beta )} and {\displaystyle \cos(\alpha -\beta )} can be derived from the angle sum versions by substituting {\displaystyle -\beta } for {\displaystyle \beta } and using the facts that {\displaystyle \sin(-\beta )=-\sin(\beta )} and {\displaystyle \cos(-\beta )=\cos(\beta )}. They can also be derived by using a slightly modified version of the figure for the angle sum identities, both of which are shown here. These identities are summarized in the first two rows of the following table, which also includes sum and difference identities for the other trigonometric functions.
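A quick numerical check of the sine and cosine addition theorems at arbitrary angles:

```python
import math

# Verify sin(a+b) and cos(a+b) against the angle addition theorems.
a, b = 0.7, 1.1
print(math.isclose(math.sin(a + b),
                   math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)),
      math.isclose(math.cos(a + b),
                   math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)))
```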
When the series {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely, then {\displaystyle {\begin{aligned}{\sin }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggl )}&=\sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\!\!\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}\\{\cos }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggr )}&=\sum _{{\text{even}}\ k\geq 0}(-1)^{\frac {k}{2}}\,\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}.\end{aligned}}} Because the series {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely, it is necessarily the case that {\textstyle \lim _{i\to \infty }\theta _{i}=0,} {\textstyle \lim _{i\to \infty }\sin \theta _{i}=0,} and {\textstyle \lim _{i\to \infty }\cos \theta _{i}=1.} In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero. When only finitely many of the angles {\displaystyle \theta _{i}} are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity.
Let {\displaystyle e_{k}} (for {\displaystyle k=0,1,2,3,\ldots }) be the kth-degree elementary symmetric polynomial in the variables {\displaystyle x_{i}=\tan \theta _{i}} for {\displaystyle i=0,1,2,3,\ldots ,} that is, {\displaystyle {\begin{aligned}e_{0}&=1\\[6pt]e_{1}&=\sum _{i}x_{i}&&=\sum _{i}\tan \theta _{i}\\[6pt]e_{2}&=\sum _{i<j}x_{i}x_{j}&&=\sum _{i<j}\tan \theta _{i}\tan \theta _{j}\\[6pt]e_{3}&=\sum _{i<j<k}x_{i}x_{j}x_{k}&&=\sum _{i<j<k}\tan \theta _{i}\tan \theta _{j}\tan \theta _{k}\\&\ \ \vdots &&\ \ \vdots \end{aligned}}} Then {\displaystyle {\begin{aligned}{\tan }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {{\sin }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}{{\cos }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}}\\[10pt]&={\frac {\displaystyle \sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}{\displaystyle \sum _{{\text{even}}\ k\geq 0}~(-1)^{\frac {k}{2}}~~\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}}={\frac {e_{1}-e_{3}+e_{5}-\cdots }{e_{0}-e_{2}+e_{4}-\cdots }}\\[10pt]{\cot }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {e_{0}-e_{2}+e_{4}-\cdots }{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} using the sine and cosine sum formulae above. The number of terms on the right side depends on the number of terms on the left side.
For example: {\displaystyle {\begin{aligned}\tan(\theta _{1}+\theta _{2})&={\frac {e_{1}}{e_{0}-e_{2}}}={\frac {x_{1}+x_{2}}{1\ -\ x_{1}x_{2}}}={\frac {\tan \theta _{1}+\tan \theta _{2}}{1\ -\ \tan \theta _{1}\tan \theta _{2}}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}}}={\frac {(x_{1}+x_{2}+x_{3})\ -\ (x_{1}x_{2}x_{3})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3})}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3}+\theta _{4})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}+e_{4}}}\\[8pt]&={\frac {(x_{1}+x_{2}+x_{3}+x_{4})\ -\ (x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{1}x_{4}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4})\ +\ (x_{1}x_{2}x_{3}x_{4})}},\end{aligned}}} and so on. The case of only finitely many terms can be proved by mathematical induction.[14] The case of infinitely many terms can be proved by using some elementary inequalities.[15] {\displaystyle {\begin{aligned}{\sec }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{0}-e_{2}+e_{4}-\cdots }}\\[8pt]{\csc }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} where {\displaystyle e_{k}} is the kth-degree elementary symmetric polynomial in the n variables {\displaystyle x_{i}=\tan \theta _{i},} {\displaystyle i=1,\ldots ,n,} and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left.[16] The case of only finitely many terms can be proved by mathematical induction on the number of such terms.
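The tangent-of-a-sum formula in terms of elementary symmetric polynomials can be verified numerically for finitely many angles. A sketch (the helper names `elementary_symmetric` and `tan_sum` are ours):

```python
import math
from itertools import combinations

def elementary_symmetric(xs, k):
    # e_k: sum over all k-element subsets of the product of their entries.
    return sum(math.prod(c) for c in combinations(xs, k)) if k else 1.0

def tan_sum(thetas):
    # tan(sum theta_i) = (e1 - e3 + e5 - ...) / (e0 - e2 + e4 - ...)
    # in the variables x_i = tan(theta_i).
    xs = [math.tan(t) for t in thetas]
    n = len(xs)
    num = sum((-1) ** ((k - 1) // 2) * elementary_symmetric(xs, k)
              for k in range(1, n + 1, 2))
    den = sum((-1) ** (k // 2) * elementary_symmetric(xs, k)
              for k in range(0, n + 1, 2))
    return num / den

thetas = [0.2, 0.3, 0.4, 0.5]
print(math.isclose(tan_sum(thetas), math.tan(sum(thetas))))
```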
For example, {\displaystyle {\begin{aligned}\sec(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{1-\tan \alpha \tan \beta -\tan \alpha \tan \gamma -\tan \beta \tan \gamma }}\\[8pt]\csc(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{\tan \alpha +\tan \beta +\tan \gamma -\tan \alpha \tan \beta \tan \gamma }}.\end{aligned}}} Ptolemy's theorem is important in the history of trigonometric identities, as it is how results equivalent to the sum and difference formulas for sine and cosine were first proved. It states that in a cyclic quadrilateral {\displaystyle ABCD}, as shown in the accompanying figure, the sum of the products of the lengths of opposite sides is equal to the product of the lengths of the diagonals. In the special cases of one of the diagonals or sides being a diameter of the circle, this theorem gives rise directly to the angle sum and difference trigonometric identities.[17] The relationship follows most easily when the circle is constructed to have a diameter of length one, as shown here. By Thales's theorem, {\displaystyle \angle DAB} and {\displaystyle \angle DCB} are both right angles. The right-angled triangles {\displaystyle DAB} and {\displaystyle DCB} both share the hypotenuse {\displaystyle {\overline {BD}}} of length 1. Thus, the side {\displaystyle {\overline {AB}}=\sin \alpha }, {\displaystyle {\overline {AD}}=\cos \alpha }, {\displaystyle {\overline {BC}}=\sin \beta } and {\displaystyle {\overline {CD}}=\cos \beta }. By the inscribed angle theorem, the central angle subtended by the chord {\displaystyle {\overline {AC}}} at the circle's center is twice the angle {\displaystyle \angle ADC}, i.e. {\displaystyle 2(\alpha +\beta )}.
Therefore, the symmetrical pair of red triangles each has the angle {\displaystyle \alpha +\beta } at the center. Each of these triangles has a hypotenuse of length {\textstyle {\frac {1}{2}}}, so the length of {\displaystyle {\overline {AC}}} is {\textstyle 2\times {\frac {1}{2}}\sin(\alpha +\beta )}, i.e. simply {\displaystyle \sin(\alpha +\beta )}. The quadrilateral's other diagonal is the diameter of length 1, so the product of the diagonals' lengths is also {\displaystyle \sin(\alpha +\beta )}. When these values are substituted into the statement of Ptolemy's theorem that {\displaystyle |{\overline {AC}}|\cdot |{\overline {BD}}|=|{\overline {AB}}|\cdot |{\overline {CD}}|+|{\overline {AD}}|\cdot |{\overline {BC}}|}, this yields the angle sum trigonometric identity for sine: {\displaystyle \sin(\alpha +\beta )=\sin \alpha \cos \beta +\cos \alpha \sin \beta }. The angle difference formula for {\displaystyle \sin(\alpha -\beta )} can be similarly derived by letting the side {\displaystyle {\overline {CD}}} serve as a diameter instead of {\displaystyle {\overline {BD}}}.[17] Formulae for twice an angle.[20] Formulae for triple angles.[20] Formulae for multiple angles.[21] The Chebyshev method is a recursive algorithm for finding the nth multiple-angle formula knowing the {\displaystyle (n-1)}th and {\displaystyle (n-2)}th values.[22] {\displaystyle \cos(nx)} can be computed from {\displaystyle \cos((n-1)x)}, {\displaystyle \cos((n-2)x)}, and {\displaystyle \cos(x)} with {\displaystyle \cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x).} This can be proved by adding together the formulae {\displaystyle {\begin{aligned}\cos((n-1)x+x)&=\cos((n-1)x)\cos x-\sin((n-1)x)\sin
x\\\cos((n-1)x-x)&=\cos((n-1)x)\cos x+\sin((n-1)x)\sin x\end{aligned}}} It follows by induction that {\displaystyle \cos(nx)} is a polynomial in {\displaystyle \cos x,} the so-called Chebyshev polynomial of the first kind; see Chebyshev polynomials#Trigonometric definition. Similarly, {\displaystyle \sin(nx)} can be computed from {\displaystyle \sin((n-1)x),} {\displaystyle \sin((n-2)x),} and {\displaystyle \cos x} with {\displaystyle \sin(nx)=2\cos x\sin((n-1)x)-\sin((n-2)x)} This can be proved by adding formulae for {\displaystyle \sin((n-1)x+x)} and {\displaystyle \sin((n-1)x-x).} Serving a purpose similar to that of the Chebyshev method, for the tangent we can write: {\displaystyle \tan(nx)={\frac {\tan((n-1)x)+\tan x}{1-\tan((n-1)x)\tan x}}\,.} {\displaystyle {\begin{aligned}\sin {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {1-\cos \theta }{2}}}\\[3pt]\cos {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {1+\cos \theta }{2}}}\\[3pt]\tan {\frac {\theta }{2}}&={\frac {1-\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1+\cos \theta }}=\csc \theta -\cot \theta ={\frac {\tan \theta }{1+\sec {\theta }}}\\[6mu]&=\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1-\cos \theta }{1+\cos \theta }}}={\frac {-1+\operatorname {sgn}(\cos \theta ){\sqrt {1+\tan ^{2}\theta }}}{\tan \theta }}\\[3pt]\cot {\frac {\theta }{2}}&={\frac {1+\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1-\cos \theta }}=\csc \theta +\cot \theta =\operatorname {sgn}(\sin
\theta ){\sqrt {\frac {1+\cos \theta }{1-\cos \theta }}}\\\sec {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1+\cos \theta }}}\\\csc {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1-\cos \theta }}}\\\end{aligned}}}[23][24] Also {\displaystyle {\begin{aligned}\tan {\frac {\eta \pm \theta }{2}}&={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}\\[3pt]\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)&=\sec \theta +\tan \theta \\[3pt]{\sqrt {\frac {1-\sin \theta }{1+\sin \theta }}}&={\frac {\left|1-\tan {\frac {\theta }{2}}\right|}{\left|1+\tan {\frac {\theta }{2}}\right|}}\end{aligned}}} These can be shown by using either the sum and difference identities or the multiple-angle formulae. The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools. A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation 4x^3 − 3x + d = 0, where x is the value of the cosine function at the one-third angle and d is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions are reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots. Obtained by solving the second and third versions of the cosine double-angle formula.
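The Chebyshev recursion for cos(nx) described above translates directly into a short loop; this sketch (the helper `cos_multiple` is ours) compares it against direct evaluation:

```python
import math

def cos_multiple(n, x):
    # Chebyshev recursion: cos(nx) = 2 cos(x) cos((n-1)x) - cos((n-2)x),
    # seeded with cos(0*x) = 1 and cos(1*x) = cos(x).
    prev2, prev1 = 1.0, math.cos(x)
    if n == 0:
        return prev2
    for _ in range(n - 1):
        prev2, prev1 = prev1, 2 * math.cos(x) * prev1 - prev2
    return prev1

x = 0.7
print(all(math.isclose(cos_multiple(n, x), math.cos(n * x)) for n in range(8)))
```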
In general terms of powers of {\displaystyle \sin \theta } or {\displaystyle \cos \theta } the following is true, and can be deduced using De Moivre's formula, Euler's formula and the binomial theorem. The product-to-sum identities[28] or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. Historically, the first four of these were known as Werner's formulas, after Johannes Werner who used them for astronomical calculations.[29] See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae. The sum-to-product identities are as follows:[30] Charles Hermite demonstrated the following identity.[31] Suppose {\displaystyle a_{1},\ldots ,a_{n}} are complex numbers, no two of which differ by an integer multiple of π. Let {\displaystyle A_{n,k}=\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq k\end{smallmatrix}}\cot(a_{k}-a_{j})} (in particular, {\displaystyle A_{1,1},} being an empty product, is 1).
Then {\displaystyle \cot(z-a_{1})\cdots \cot(z-a_{n})=\cos {\frac {n\pi }{2}}+\sum _{k=1}^{n}A_{n,k}\cot(z-a_{k}).} The simplest non-trivial example is the case n = 2: {\displaystyle \cot(z-a_{1})\cot(z-a_{2})=-1+\cot(a_{1}-a_{2})\cot(z-a_{1})+\cot(a_{2}-a_{1})\cot(z-a_{2}).} For coprime integers n, m {\displaystyle \prod _{k=1}^{n}\left(2a+2\cos \left({\frac {2\pi km}{n}}+x\right)\right)=2\left(T_{n}(a)+{(-1)}^{n+m}\cos(nx)\right)} where Tn is the Chebyshev polynomial. The following relationship holds for the sine function {\displaystyle \prod _{k=1}^{n-1}\sin \left({\frac {k\pi }{n}}\right)={\frac {n}{2^{n-1}}}.} More generally for an integer n > 0[32] {\displaystyle \sin(nx)=2^{n-1}\prod _{k=0}^{n-1}\sin \left({\frac {k}{n}}\pi +x\right)=2^{n-1}\prod _{k=1}^{n}\sin \left({\frac {k}{n}}\pi -x\right).} or written in terms of the chord function {\textstyle \operatorname {crd} x\equiv 2\sin {\tfrac {1}{2}}x}, {\displaystyle \operatorname {crd} (nx)=\prod _{k=1}^{n}\operatorname {crd} \left({\frac {k}{n}}2\pi -x\right).} This comes from the factorization of the polynomial {\textstyle z^{n}-1} into linear factors (cf. root of unity): For any complex z and an integer n > 0, {\displaystyle z^{n}-1=\prod _{k=1}^{n}\left(z-\exp {\Bigl (}{\frac {k}{n}}2\pi i{\Bigr )}\right).} For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift.
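The sine-product identity ∏_{k=1}^{n−1} sin(kπ/n) = n / 2^{n−1} is easy to confirm numerically for small n; a minimal sketch (the helper `sine_product` is ours):

```python
import math

def sine_product(n):
    # Product of sin(k*pi/n) over k = 1 .. n-1.
    return math.prod(math.sin(k * math.pi / n) for k in range(1, n))

# Compare against the closed form n / 2^(n-1) for several n.
print(all(math.isclose(sine_product(n), n / 2 ** (n - 1)) for n in range(2, 12)))
```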
This is useful in sinusoid data fitting, because the measured or observed data are linearly related to the a and b unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of {\displaystyle c} and {\displaystyle \varphi }. The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude,[33][34] {\displaystyle a\cos x+b\sin x=c\cos(x+\varphi )} where {\displaystyle c} and {\displaystyle \varphi } are defined as follows: {\displaystyle {\begin{aligned}c&=\operatorname {sgn}(a){\sqrt {a^{2}+b^{2}}},\\\varphi &={\arctan }{\bigl (}{-b/a}{\bigr )},\end{aligned}}} given that {\displaystyle a\neq 0.} More generally, for arbitrary phase shifts, we have {\displaystyle a\sin(x+\theta _{a})+b\sin(x+\theta _{b})=c\sin(x+\varphi )} where {\displaystyle c} and {\displaystyle \varphi } satisfy: {\displaystyle {\begin{aligned}c^{2}&=a^{2}+b^{2}+2ab\cos \left(\theta _{a}-\theta _{b}\right),\\\tan \varphi &={\frac {a\sin \theta _{a}+b\sin \theta _{b}}{a\cos \theta _{a}+b\cos \theta _{b}}}.\end{aligned}}} The general case reads[34] {\displaystyle \sum _{i}a_{i}\sin(x+\theta _{i})=a\sin(x+\theta ),} where {\displaystyle a^{2}=\sum _{i,j}a_{i}a_{j}\cos(\theta _{i}-\theta _{j})} and {\displaystyle \tan \theta ={\frac {\sum _{i}a_{i}\sin \theta _{i}}{\sum _{i}a_{i}\cos \theta _{i}}}.} These identities, named after Joseph Louis Lagrange, are:[35][36][37] {\displaystyle {\begin{aligned}\sum _{k=0}^{n}\sin k\theta &={\frac {\cos {\tfrac {1}{2}}\theta -\cos \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\\[5pt]\sum _{k=1}^{n}\cos k\theta
&={\frac {-\sin {\tfrac {1}{2}}\theta +\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\end{aligned}}} for {\displaystyle \theta \not \equiv 0{\pmod {2\pi }}.} A related function is the Dirichlet kernel: {\displaystyle D_{n}(\theta )=1+2\sum _{k=1}^{n}\cos k\theta ={\frac {\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{\sin {\tfrac {1}{2}}\theta }}.} A similar identity is[38] {\displaystyle \sum _{k=1}^{n}\cos(2k-1)\alpha ={\frac {\sin(2n\alpha )}{2\sin \alpha }}.} The proof is the following. By the angle sum and difference identities, {\displaystyle \sin(A+B)-\sin(A-B)=2\cos A\sin B.} Now consider the following formula: {\displaystyle 2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha =2\sin \alpha \cos \alpha +2\sin \alpha \cos 3\alpha +2\sin \alpha \cos 5\alpha +\ldots +2\sin \alpha \cos(2n-1)\alpha } Rewriting each term with the above identity turns the sum into a telescoping one: {\displaystyle {\begin{aligned}&2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha \\&\quad =\sum _{k=1}^{n}(\sin(2k\alpha )-\sin(2(k-1)\alpha ))\\&\quad =(\sin 2\alpha -\sin 0)+(\sin 4\alpha -\sin 2\alpha )+(\sin 6\alpha -\sin 4\alpha )+\ldots +(\sin(2n\alpha )-\sin(2(n-1)\alpha ))\\&\quad =\sin(2n\alpha ).\end{aligned}}} Dividing this formula by {\displaystyle 2\sin \alpha } completes the proof.
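The Lagrange-type cosine sum and the closely related Dirichlet kernel can be checked by comparing the raw sum to the closed form; a sketch (the helper `dirichlet` is ours):

```python
import math

def dirichlet(n, theta):
    # Dirichlet kernel D_n(theta) = 1 + 2 * sum_{k=1..n} cos(k*theta).
    return 1 + 2 * sum(math.cos(k * theta) for k in range(1, n + 1))

# Compare against the closed form sin((n + 1/2)*theta) / sin(theta/2).
n, theta = 6, 0.9
closed_form = math.sin((n + 0.5) * theta) / math.sin(theta / 2)
print(math.isclose(dirichlet(n, theta), closed_form))
```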
If {\displaystyle f(x)} is given by the linear fractional transformation {\displaystyle f(x)={\frac {(\cos \alpha )x-\sin \alpha }{(\sin \alpha )x+\cos \alpha }},} and similarly {\displaystyle g(x)={\frac {(\cos \beta )x-\sin \beta }{(\sin \beta )x+\cos \beta }},} then {\displaystyle f{\big (}g(x){\big )}=g{\big (}f(x){\big )}={\frac {{\big (}\cos(\alpha +\beta ){\big )}x-\sin(\alpha +\beta )}{{\big (}\sin(\alpha +\beta ){\big )}x+\cos(\alpha +\beta )}}.} More tersely stated, if for all {\displaystyle \alpha } we let {\displaystyle f_{\alpha }} be what we called {\displaystyle f} above, then {\displaystyle f_{\alpha }\circ f_{\beta }=f_{\alpha +\beta }.} If {\displaystyle x} is the slope of a line, then {\displaystyle f(x)} is the slope of its rotation through an angle of {\displaystyle -\alpha .} Euler's formula states that, for any real number x:[39] {\displaystyle e^{ix}=\cos x+i\sin x,} where i is the imaginary unit. Substituting −x for x gives us: {\displaystyle e^{-ix}=\cos(-x)+i\sin(-x)=\cos x-i\sin x.} These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically,[40][41] {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}}} {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}}} These formulae are useful for proving many other trigonometric identities. For example, that {\displaystyle e^{i(\theta +\varphi )}=e^{i\theta }e^{i\varphi }} means that {\displaystyle \cos(\theta +\varphi )+i\sin(\theta +\varphi )=(\cos \theta +i\sin \theta )(\cos \varphi +i\sin \varphi ).} That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. The equality of the imaginary parts gives an angle addition formula for sine. The following table expresses the trigonometric functions and their inverses in terms of the exponential function and the complex logarithm.
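Euler's formula and the derived exponential expressions for sine and cosine can be verified with complex arithmetic; a minimal sketch using Python's cmath (note that 1/z = e^{-ix} when z = e^{ix}):

```python
import cmath
import math

# Euler's formula: e^{ix} = cos x + i sin x, and the derived expressions
# cos x = (e^{ix} + e^{-ix}) / 2, sin x = (e^{ix} - e^{-ix}) / (2i).
x = 1.3
z = cmath.exp(1j * x)
print(cmath.isclose(z, complex(math.cos(x), math.sin(x))),
      math.isclose(((z + 1 / z) / 2).real, math.cos(x)),
      math.isclose(((z - 1 / z) / 2j).real, math.sin(x)))
```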
Trigonometric functions may be deduced from hyperbolic functions with complex arguments. The formulae for the relations are shown below.[43][44] {\displaystyle {\begin{aligned}\sin x&=-i\sinh(ix)\\\cos x&=\cosh(ix)\\\tan x&=-i\tanh(ix)\\\cot x&=i\coth(ix)\\\sec x&=\operatorname {sech} (ix)\\\csc x&=i\operatorname {csch} (ix)\\\end{aligned}}} When using a power series expansion to define trigonometric functions, the following identities are obtained:[45] For applications to special functions, the following infinite product formulae for trigonometric functions are useful:[46][47] {\displaystyle {\begin{aligned}\sin x&=x\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cos x&=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right),\\[10mu]\sinh x&=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cosh x&=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right).\end{aligned}}} The following identities give the result of composing a trigonometric function with an inverse trigonometric function.[48] {\displaystyle {\begin{aligned}\sin(\arcsin x)&=x&\cos(\arcsin x)&={\sqrt {1-x^{2}}}&\tan(\arcsin x)&={\frac {x}{\sqrt {1-x^{2}}}}\\\sin(\arccos x)&={\sqrt {1-x^{2}}}&\cos(\arccos x)&=x&\tan(\arccos x)&={\frac {\sqrt {1-x^{2}}}{x}}\\\sin(\arctan x)&={\frac {x}{\sqrt {1+x^{2}}}}&\cos(\arctan x)&={\frac
{1}{\sqrt {1+x^{2}}}}&\tan(\arctan x)&=x\\\sin(\operatorname {arccsc} x)&={\frac {1}{x}}&\cos(\operatorname {arccsc} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\tan(\operatorname {arccsc} x)&={\frac {1}{\sqrt {x^{2}-1}}}\\\sin(\operatorname {arcsec} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\cos(\operatorname {arcsec} x)&={\frac {1}{x}}&\tan(\operatorname {arcsec} x)&={\sqrt {x^{2}-1}}\\\sin(\operatorname {arccot} x)&={\frac {1}{\sqrt {1+x^{2}}}}&\cos(\operatorname {arccot} x)&={\frac {x}{\sqrt {1+x^{2}}}}&\tan(\operatorname {arccot} x)&={\frac {1}{x}}\\\end{aligned}}} Taking the multiplicative inverse of both sides of each equation above results in the equations for {\displaystyle \csc ={\frac {1}{\sin }},\;\sec ={\frac {1}{\cos }},{\text{ and }}\cot ={\frac {1}{\tan }}.} The right hand side of the formula above will always be flipped. For example, the equation for {\displaystyle \cot(\arcsin x)} is: {\displaystyle \cot(\arcsin x)={\frac {1}{\tan(\arcsin x)}}={\frac {1}{\frac {x}{\sqrt {1-x^{2}}}}}={\frac {\sqrt {1-x^{2}}}{x}}} while the equations for {\displaystyle \csc(\arccos x)} and {\displaystyle \sec(\arccos x)} are: {\displaystyle \csc(\arccos x)={\frac {1}{\sin(\arccos x)}}={\frac {1}{\sqrt {1-x^{2}}}}\qquad {\text{ and }}\quad \sec(\arccos x)={\frac {1}{\cos(\arccos x)}}={\frac {1}{x}}.} The following identities are implied by the reflection identities.
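A few of the composition identities above, spot-checked numerically at a point inside every relevant domain:

```python
import math

# Spot-check composition identities such as cos(arcsin x) = sqrt(1 - x^2),
# sin(arctan x) = x / sqrt(1 + x^2), and tan(arccos x) = sqrt(1 - x^2) / x.
x = 0.6
print(math.isclose(math.cos(math.asin(x)), math.sqrt(1 - x * x)),
      math.isclose(math.sin(math.atan(x)), x / math.sqrt(1 + x * x)),
      math.isclose(math.tan(math.acos(x)), math.sqrt(1 - x * x) / x))
```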
They hold whenever {\displaystyle x,r,s,-x,-r,{\text{ and }}-s} are in the domains of the relevant functions. {\displaystyle {\begin{alignedat}{9}{\frac {\pi }{2}}~&=~\arcsin(x)&&+\arccos(x)~&&=~\arctan(r)&&+\operatorname {arccot}(r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arccsc}(s)\\[0.4ex]\pi ~&=~\arccos(x)&&+\arccos(-x)~&&=~\operatorname {arccot}(r)&&+\operatorname {arccot}(-r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arcsec}(-s)\\[0.4ex]0~&=~\arcsin(x)&&+\arcsin(-x)~&&=~\arctan(r)&&+\arctan(-r)~&&=~\operatorname {arccsc}(s)&&+\operatorname {arccsc}(-s)\\[1.0ex]\end{alignedat}}} Also,[49] {\displaystyle {\begin{aligned}\arctan x+\arctan {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\-{\frac {\pi }{2}},&{\text{if }}x<0\end{cases}}\\\operatorname {arccot} x+\operatorname {arccot} {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\{\frac {3\pi }{2}},&{\text{if }}x<0\end{cases}}\\\end{aligned}}} {\displaystyle \arccos {\frac {1}{x}}=\operatorname {arcsec} x\qquad {\text{ and }}\qquad \operatorname {arcsec} {\frac {1}{x}}=\arccos x} {\displaystyle \arcsin {\frac {1}{x}}=\operatorname {arccsc} x\qquad {\text{ and }}\qquad \operatorname {arccsc} {\frac {1}{x}}=\arcsin x} The arctangent function can be expanded as a series:[50] {\displaystyle \arctan(nx)=\sum _{m=1}^{n}\arctan {\frac {x}{1+(m-1)mx^{2}}}} In terms of the arctangent function we have[49] {\displaystyle \arctan {\frac {1}{2}}=\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} The curious identity known as Morrie's
law,cos⁡20∘⋅cos⁡40∘⋅cos⁡80∘=18,{\displaystyle \cos 20^{\circ }\cdot \cos 40^{\circ }\cdot \cos 80^{\circ }={\frac {1}{8}},} is a special case of an identity that contains one variable:∏j=0k−1cos⁡(2jx)=sin⁡(2kx)2ksin⁡x.{\displaystyle \prod _{j=0}^{k-1}\cos \left(2^{j}x\right)={\frac {\sin \left(2^{k}x\right)}{2^{k}\sin x}}.} Similarly,sin⁡20∘⋅sin⁡40∘⋅sin⁡80∘=38{\displaystyle \sin 20^{\circ }\cdot \sin 40^{\circ }\cdot \sin 80^{\circ }={\frac {\sqrt {3}}{8}}}is a special case of an identity withx=20∘{\displaystyle x=20^{\circ }}:sin⁡x⋅sin⁡(60∘−x)⋅sin⁡(60∘+x)=sin⁡3x4.{\displaystyle \sin x\cdot \sin \left(60^{\circ }-x\right)\cdot \sin \left(60^{\circ }+x\right)={\frac {\sin 3x}{4}}.} For the casex=15∘{\displaystyle x=15^{\circ }},sin⁡15∘⋅sin⁡45∘⋅sin⁡75∘=28,sin⁡15∘⋅sin⁡75∘=14.{\displaystyle {\begin{aligned}\sin 15^{\circ }\cdot \sin 45^{\circ }\cdot \sin 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\sin 15^{\circ }\cdot \sin 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} For the casex=10∘{\displaystyle x=10^{\circ }},sin⁡10∘⋅sin⁡50∘⋅sin⁡70∘=18.{\displaystyle \sin 10^{\circ }\cdot \sin 50^{\circ }\cdot \sin 70^{\circ }={\frac {1}{8}}.} The same cosine identity iscos⁡x⋅cos⁡(60∘−x)⋅cos⁡(60∘+x)=cos⁡3x4.{\displaystyle \cos x\cdot \cos \left(60^{\circ }-x\right)\cdot \cos \left(60^{\circ }+x\right)={\frac {\cos 3x}{4}}.} Similarly,cos⁡10∘⋅cos⁡50∘⋅cos⁡70∘=38,cos⁡15∘⋅cos⁡45∘⋅cos⁡75∘=28,cos⁡15∘⋅cos⁡75∘=14.{\displaystyle {\begin{aligned}\cos 10^{\circ }\cdot \cos 50^{\circ }\cdot \cos 70^{\circ }&={\frac {\sqrt {3}}{8}},\\\cos 15^{\circ }\cdot \cos 45^{\circ }\cdot \cos 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\cos 15^{\circ }\cdot \cos 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} Similarly,tan⁡50∘⋅tan⁡60∘⋅tan⁡70∘=tan⁡80∘,tan⁡40∘⋅tan⁡30∘⋅tan⁡20∘=tan⁡10∘.{\displaystyle {\begin{aligned}\tan 50^{\circ }\cdot \tan 60^{\circ }\cdot \tan 70^{\circ }&=\tan 80^{\circ },\\\tan 40^{\circ }\cdot \tan 30^{\circ }\cdot \tan 20^{\circ }&=\tan 10^{\circ }.\end{aligned}}} The following is perhaps not as 
readily generalized to an identity containing variables (but see explanation below):cos⁡24∘+cos⁡48∘+cos⁡96∘+cos⁡168∘=12.{\displaystyle \cos 24^{\circ }+\cos 48^{\circ }+\cos 96^{\circ }+\cos 168^{\circ }={\frac {1}{2}}.} Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators:cos⁡2π21+cos⁡(2⋅2π21)+cos⁡(4⋅2π21)+cos⁡(5⋅2π21)+cos⁡(8⋅2π21)+cos⁡(10⋅2π21)=12.{\displaystyle \cos {\frac {2\pi }{21}}+\cos \left(2\cdot {\frac {2\pi }{21}}\right)+\cos \left(4\cdot {\frac {2\pi }{21}}\right)+\cos \left(5\cdot {\frac {2\pi }{21}}\right)+\cos \left(8\cdot {\frac {2\pi }{21}}\right)+\cos \left(10\cdot {\frac {2\pi }{21}}\right)={\frac {1}{2}}.} The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than⁠21/2⁠that arerelatively primeto (or have noprime factorsin common with) 21. The last several examples are corollaries of a basic fact about the irreduciblecyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is theMöbius functionevaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively. 
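The coprime-residue pattern just described can be checked numerically. The sketch below (the function name is mine, not from the article) sums cos(2πk/n) over the k < n/2 that are relatively prime to n; by the cyclotomic-polynomial argument, this equals half the Möbius function μ(n):

```python
import math

def half_totient_cosine_sum(n):
    """Sum cos(2*pi*k/n) over 0 < k < n/2 with gcd(k, n) == 1.

    By the cyclotomic-polynomial argument in the text, this equals
    mu(n)/2, where mu is the Moebius function.
    """
    return sum(math.cos(2 * math.pi * k / n)
               for k in range(1, (n + 1) // 2)
               if math.gcd(k, n) == 1)

# n = 21: k runs over 1, 2, 4, 5, 8, 10 and the sum is mu(21)/2 = 1/2.
print(round(half_totient_cosine_sum(21), 9))  # → 0.5
```

The cases n = 10 and n = 15 reproduce the two identities preceding the last one, as the text notes.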
Other cosine identities include:[51]2cos⁡π3=1,2cos⁡π5×2cos⁡2π5=1,2cos⁡π7×2cos⁡2π7×2cos⁡3π7=1,{\displaystyle {\begin{aligned}2\cos {\frac {\pi }{3}}&=1,\\2\cos {\frac {\pi }{5}}\times 2\cos {\frac {2\pi }{5}}&=1,\\2\cos {\frac {\pi }{7}}\times 2\cos {\frac {2\pi }{7}}\times 2\cos {\frac {3\pi }{7}}&=1,\end{aligned}}}and so forth for all odd numbers, and hencecos⁡π3+cos⁡π5×cos⁡2π5+cos⁡π7×cos⁡2π7×cos⁡3π7+⋯=1.{\displaystyle \cos {\frac {\pi }{3}}+\cos {\frac {\pi }{5}}\times \cos {\frac {2\pi }{5}}+\cos {\frac {\pi }{7}}\times \cos {\frac {2\pi }{7}}\times \cos {\frac {3\pi }{7}}+\dots =1.} Many of those curious identities stem from more general facts like the following:[52]∏k=1n−1sin⁡kπn=n2n−1{\displaystyle \prod _{k=1}^{n-1}\sin {\frac {k\pi }{n}}={\frac {n}{2^{n-1}}}}and∏k=1n−1cos⁡kπn=sin⁡πn22n−1.{\displaystyle \prod _{k=1}^{n-1}\cos {\frac {k\pi }{n}}={\frac {\sin {\frac {\pi n}{2}}}{2^{n-1}}}.} Combining these gives us∏k=1n−1tan⁡kπn=nsin⁡πn2{\displaystyle \prod _{k=1}^{n-1}\tan {\frac {k\pi }{n}}={\frac {n}{\sin {\frac {\pi n}{2}}}}} Ifnis an odd number (n=2m+1{\displaystyle n=2m+1}) we can make use of the symmetries to get∏k=1mtan⁡kπ2m+1=2m+1{\displaystyle \prod _{k=1}^{m}\tan {\frac {k\pi }{2m+1}}={\sqrt {2m+1}}} The transfer function of theButterworth low pass filtercan be expressed in terms of polynomial and poles. By setting the frequency as the cutoff frequency, the following identity can be proved:∏k=1nsin⁡(2k−1)π4n=∏k=1ncos⁡(2k−1)π4n=22n{\displaystyle \prod _{k=1}^{n}\sin {\frac {\left(2k-1\right)\pi }{4n}}=\prod _{k=1}^{n}\cos {\frac {\left(2k-1\right)\pi }{4n}}={\frac {\sqrt {2}}{2^{n}}}} An efficient way tocomputeπto alarge number of digitsis based on the following identity without variables, due toMachin. 
This is known as a Machin-like formula:π4=4arctan⁡15−arctan⁡1239{\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}}or, alternatively, by using an identity of Leonhard Euler:π4=5arctan⁡17+2arctan⁡379{\displaystyle {\frac {\pi }{4}}=5\arctan {\frac {1}{7}}+2\arctan {\frac {3}{79}}}or by using Pythagorean triples:π=arccos⁡45+arccos⁡513+arccos⁡1665=arcsin⁡35+arcsin⁡1213+arcsin⁡6365.{\displaystyle \pi =\arccos {\frac {4}{5}}+\arccos {\frac {5}{13}}+\arccos {\frac {16}{65}}=\arcsin {\frac {3}{5}}+\arcsin {\frac {12}{13}}+\arcsin {\frac {63}{65}}.} Others include:[53][49]π4=arctan⁡12+arctan⁡13,{\displaystyle {\frac {\pi }{4}}=\arctan {\frac {1}{2}}+\arctan {\frac {1}{3}},}π=arctan⁡1+arctan⁡2+arctan⁡3,{\displaystyle \pi =\arctan 1+\arctan 2+\arctan 3,}π4=2arctan⁡13+arctan⁡17.{\displaystyle {\frac {\pi }{4}}=2\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} Generally, for numbers t1, ..., tn−1 ∈ (−1, 1) for which θn = arctan t1 + ⋯ + arctan tn−1 ∈ (π/4, 3π/4), let tn = tan(π/2 − θn) = cot θn. This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are t1, ..., tn−1, and its value will be in (−1, 1). In particular, the computed tn will be rational whenever all the t1, ..., tn−1 values are rational. With these values,π2=∑k=1narctan⁡(tk)π=∑k=1nsgn⁡(tk)arccos⁡(1−tk21+tk2)π=∑k=1narcsin⁡(2tk1+tk2)π=∑k=1narctan⁡(2tk1−tk2),{\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\sum _{k=1}^{n}\arctan(t_{k})\\\pi &=\sum _{k=1}^{n}\operatorname {sgn}(t_{k})\arccos \left({\frac {1-t_{k}^{2}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arcsin \left({\frac {2t_{k}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arctan \left({\frac {2t_{k}}{1-t_{k}^{2}}}\right)\,,\end{aligned}}} where in all but the first expression we have used tangent half-angle formulae. The first two formulae work even if one or more of the tk values is not within (−1, 1).
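These closed forms are easy to verify in floating point; a brief sketch (variable names are mine, not from the article):

```python
import math

# Numeric spot-check of the arctangent identities for pi quoted above.
machin = 4 * math.atan(1 / 5) - math.atan(1 / 239)
euler = 5 * math.atan(1 / 7) + 2 * math.atan(3 / 79)
assert abs(machin - math.pi / 4) < 1e-12
assert abs(euler - math.pi / 4) < 1e-12
assert abs(math.atan(1 / 2) + math.atan(1 / 3) - math.pi / 4) < 1e-12
assert abs(math.atan(1) + math.atan(2) + math.atan(3) - math.pi) < 1e-12
assert abs(2 * math.atan(1 / 3) + math.atan(1 / 7) - math.pi / 4) < 1e-12
print("all pi formulas check out")
```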
Note that if t = p/q is rational, then the (2t, 1 − t², 1 + t²) values in the above formulae are proportional to the Pythagorean triple (2pq, q² − p², q² + p²). For example, for n = 3 terms,π2=arctan⁡(ab)+arctan⁡(cd)+arctan⁡(bd−acad+bc){\displaystyle {\frac {\pi }{2}}=\arctan \left({\frac {a}{b}}\right)+\arctan \left({\frac {c}{d}}\right)+\arctan \left({\frac {bd-ac}{ad+bc}}\right)}for any a, b, c, d > 0. Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says:sin2⁡18∘+sin2⁡30∘=sin2⁡36∘.{\displaystyle \sin ^{2}18^{\circ }+\sin ^{2}30^{\circ }=\sin ^{2}36^{\circ }.} Ptolemy used this proposition to compute some angles in his table of chords in Book I, chapter 11 of Almagest. These identities involve a trigonometric function of a trigonometric function:[54] sin(t sin x) = 2 ∑k≥0 J2k+1(t) sin((2k+1)x), cos(t sin x) = J0(t) + 2 ∑k≥1 J2k(t) cos(2kx), sin(t cos x) = 2 ∑k≥0 (−1)k J2k+1(t) cos((2k+1)x), and cos(t cos x) = J0(t) + 2 ∑k≥1 (−1)k J2k(t) cos(2kx), where the Ji are Bessel functions.
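Euclid's proposition quoted above is easy to check numerically in its trigonometric form; a brief sketch:

```python
import math

# Euclid XIII.10 in modern form: the square on the pentagon's side equals
# the squares on the hexagon's and decagon's sides together, i.e.
# sin^2(18 deg) + sin^2(30 deg) = sin^2(36 deg).
lhs = math.sin(math.radians(18)) ** 2 + math.sin(math.radians(30)) ** 2
rhs = math.sin(math.radians(36)) ** 2
assert abs(lhs - rhs) < 1e-12
print("Euclid XIII.10 verified:", round(lhs, 6) == round(rhs, 6))
```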
Aconditional trigonometric identityis a trigonometric identity that holds if specified conditions on the arguments to the trigonometric functions are satisfied.[55]The following formulae apply to arbitrary plane triangles and follow fromα+β+γ=180∘,{\displaystyle \alpha +\beta +\gamma =180^{\circ },}as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur).[56]tan⁡α+tan⁡β+tan⁡γ=tan⁡αtan⁡βtan⁡γ1=cot⁡βcot⁡γ+cot⁡γcot⁡α+cot⁡αcot⁡βcot⁡(α2)+cot⁡(β2)+cot⁡(γ2)=cot⁡(α2)cot⁡(β2)cot⁡(γ2)1=tan⁡(β2)tan⁡(γ2)+tan⁡(γ2)tan⁡(α2)+tan⁡(α2)tan⁡(β2)sin⁡α+sin⁡β+sin⁡γ=4cos⁡(α2)cos⁡(β2)cos⁡(γ2)−sin⁡α+sin⁡β+sin⁡γ=4cos⁡(α2)sin⁡(β2)sin⁡(γ2)cos⁡α+cos⁡β+cos⁡γ=4sin⁡(α2)sin⁡(β2)sin⁡(γ2)+1−cos⁡α+cos⁡β+cos⁡γ=4sin⁡(α2)cos⁡(β2)cos⁡(γ2)−1sin⁡(2α)+sin⁡(2β)+sin⁡(2γ)=4sin⁡αsin⁡βsin⁡γ−sin⁡(2α)+sin⁡(2β)+sin⁡(2γ)=4sin⁡αcos⁡βcos⁡γcos⁡(2α)+cos⁡(2β)+cos⁡(2γ)=−4cos⁡αcos⁡βcos⁡γ−1−cos⁡(2α)+cos⁡(2β)+cos⁡(2γ)=−4cos⁡αsin⁡βsin⁡γ+1sin2⁡α+sin2⁡β+sin2⁡γ=2cos⁡αcos⁡βcos⁡γ+2−sin2⁡α+sin2⁡β+sin2⁡γ=2cos⁡αsin⁡βsin⁡γcos2⁡α+cos2⁡β+cos2⁡γ=−2cos⁡αcos⁡βcos⁡γ+1−cos2⁡α+cos2⁡β+cos2⁡γ=−2cos⁡αsin⁡βsin⁡γ+1sin2⁡(2α)+sin2⁡(2β)+sin2⁡(2γ)=−2cos⁡(2α)cos⁡(2β)cos⁡(2γ)+2cos2⁡(2α)+cos2⁡(2β)+cos2⁡(2γ)=2cos⁡(2α)cos⁡(2β)cos⁡(2γ)+11=sin2⁡(α2)+sin2⁡(β2)+sin2⁡(γ2)+2sin⁡(α2)sin⁡(β2)sin⁡(γ2){\displaystyle {\begin{aligned}\tan \alpha +\tan \beta +\tan \gamma &=\tan \alpha \tan \beta \tan \gamma \\1&=\cot \beta \cot \gamma +\cot \gamma \cot \alpha +\cot \alpha \cot \beta \\\cot \left({\frac {\alpha }{2}}\right)+\cot \left({\frac {\beta }{2}}\right)+\cot \left({\frac {\gamma }{2}}\right)&=\cot \left({\frac {\alpha }{2}}\right)\cot \left({\frac {\beta }{2}}\right)\cot \left({\frac {\gamma }{2}}\right)\\1&=\tan \left({\frac {\beta }{2}}\right)\tan \left({\frac {\gamma }{2}}\right)+\tan \left({\frac {\gamma }{2}}\right)\tan \left({\frac {\alpha }{2}}\right)+\tan \left({\frac {\alpha }{2}}\right)\tan \left({\frac {\beta }{2}}\right)\\\sin \alpha +\sin \beta +\sin \gamma 
&=4\cos \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)\\-\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)\\\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)+1\\-\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)-1\\\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \sin \beta \sin \gamma \\-\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \cos \beta \cos \gamma \\\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \cos \beta \cos \gamma -1\\-\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \cos \beta \cos \gamma +2\\-\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \sin \beta \sin \gamma \\\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \cos \beta \cos \gamma +1\\-\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}(2\alpha )+\sin ^{2}(2\beta )+\sin ^{2}(2\gamma )&=-2\cos(2\alpha )\cos(2\beta )\cos(2\gamma )+2\\\cos ^{2}(2\alpha )+\cos ^{2}(2\beta )+\cos ^{2}(2\gamma )&=2\cos(2\alpha )\,\cos(2\beta )\,\cos(2\gamma )+1\\1&=\sin ^{2}\left({\frac {\alpha }{2}}\right)+\sin ^{2}\left({\frac {\beta }{2}}\right)+\sin ^{2}\left({\frac {\gamma }{2}}\right)+2\sin \left({\frac {\alpha }{2}}\right)\,\sin \left({\frac {\beta }{2}}\right)\,\sin \left({\frac {\gamma }{2}}\right)\end{aligned}}} Theversine,coversine,haversine, andexsecantwere used in navigation. For example, thehaversine formulawas used to calculate the distance between two points on a sphere. They are rarely used today. 
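As an illustration of the haversine formula mentioned above (a sketch; the mean-Earth-radius value and the function name are my assumptions, not from the article):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance via the haversine formula.

    hav(theta) = sin^2(theta/2); inputs in degrees, result in the same
    unit as `radius` (kilometres by default, an assumed mean Earth radius).
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(h))

# Example: distance between two (hypothetical) coordinates.
print(haversine_distance(48.8566, 2.3522, 40.7128, -74.0060))
```

The haversine form is better conditioned than the spherical law of cosines for small separations, which is one reason it was favoured in navigation.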
TheDirichlet kernelDn(x)is the function occurring on both sides of the next identity:1+2cos⁡x+2cos⁡(2x)+2cos⁡(3x)+⋯+2cos⁡(nx)=sin⁡((n+12)x)sin⁡(12x).{\displaystyle 1+2\cos x+2\cos(2x)+2\cos(3x)+\cdots +2\cos(nx)={\frac {\sin \left(\left(n+{\frac {1}{2}}\right)x\right)}{\sin \left({\frac {1}{2}}x\right)}}.} Theconvolutionof anyintegrable functionof period2π{\displaystyle 2\pi }with the Dirichlet kernel coincides with the function'sn{\displaystyle n}th-degree Fourier approximation. The same holds for anymeasureorgeneralized function. If we sett=tan⁡x2,{\displaystyle t=\tan {\frac {x}{2}},}then[57]sin⁡x=2t1+t2;cos⁡x=1−t21+t2;eix=1+it1−it;dx=2dt1+t2,{\displaystyle \sin x={\frac {2t}{1+t^{2}}};\qquad \cos x={\frac {1-t^{2}}{1+t^{2}}};\qquad e^{ix}={\frac {1+it}{1-it}};\qquad dx={\frac {2\,dt}{1+t^{2}}},}whereeix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,}sometimes abbreviated tocisx. When this substitution oft{\displaystyle t}fortan⁠x/2⁠is used incalculus, it follows thatsin⁡x{\displaystyle \sin x}is replaced by⁠2t/1 +t2⁠,cos⁡x{\displaystyle \cos x}is replaced by⁠1 −t2/1 +t2⁠and the differentialdxis replaced by⁠2 dt/1 +t2⁠. Thereby one converts rational functions ofsin⁡x{\displaystyle \sin x}andcos⁡x{\displaystyle \cos x}to rational functions oft{\displaystyle t}in order to find theirantiderivatives. cos⁡θ2⋅cos⁡θ4⋅cos⁡θ8⋯=∏n=1∞cos⁡θ2n=sin⁡θθ=sinc⁡θ.{\displaystyle \cos {\frac {\theta }{2}}\cdot \cos {\frac {\theta }{4}}\cdot \cos {\frac {\theta }{8}}\cdots =\prod _{n=1}^{\infty }\cos {\frac {\theta }{2^{n}}}={\frac {\sin \theta }{\theta }}=\operatorname {sinc} \theta .}
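Both the Dirichlet-kernel identity and the tangent half-angle substitution above can be spot-checked numerically; a brief sketch:

```python
import math

# With t = tan(x/2): sin x = 2t/(1+t^2) and cos x = (1-t^2)/(1+t^2).
for x in (0.3, 1.0, 2.5, -1.2):
    t = math.tan(x / 2)
    assert abs(math.sin(x) - 2 * t / (1 + t * t)) < 1e-12
    assert abs(math.cos(x) - (1 - t * t) / (1 + t * t)) < 1e-12

# Dirichlet kernel: 1 + 2*sum cos(kx) = sin((n+1/2)x) / sin(x/2).
n, x = 5, 0.7
kernel = 1 + 2 * sum(math.cos(k * x) for k in range(1, n + 1))
closed = math.sin((n + 0.5) * x) / math.sin(x / 2)
assert abs(kernel - closed) < 1e-9
print("substitution and kernel identities hold")
```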
https://en.wikipedia.org/wiki/List_of_trigonometric_identities
Inmathematics, aMadhava seriesis one of the threeTaylor seriesexpansions for thesine,cosine, andarctangentfunctionsdiscovered in 14th or 15th century inKerala,Indiaby the mathematician and astronomerMadhava of Sangamagrama(c. 1350 – c. 1425) or his followers in theKerala school of astronomy and mathematics.[1]Using modern notation, these series are: All three series were later independently discovered in 17th century Europe. The series for sine and cosine were rediscovered byIsaac Newtonin 1669,[2]and the series for arctangent was rediscovered byJames Gregoryin 1671 andGottfried Leibnizin 1673,[3]and is conventionally calledGregory's series. The specific valuearctan⁡1=π4{\textstyle \arctan 1={\tfrac {\pi }{4}}}can be used to calculatethe circle constantπ, and the arctangent series for1is conventionally calledLeibniz's series. In recognition of Madhava'spriority, in recent literature these series are sometimes called theMadhava–Newton series,[4]Madhava–Gregory series,[5]orMadhava–Leibnizseries[6](among other combinations).[7] No surviving works of Madhava contain explicit statements regarding the expressions which are now referred to as Madhava series. However, in the writing of later Kerala school mathematiciansNilakantha Somayaji(1444 – 1544) andJyeshthadeva(c. 1500 – c. 1575) one can find unambiguous attribution of these series to Madhava. These later works also include proofs and commentary which suggest how Madhava may have arrived at the series. The translations of the relevant verses as given in theYuktidipikacommentary ofTantrasamgraha(also known asTantrasamgraha-vyakhya) bySankara Variar(circa. 1500 - 1560 CE) are reproduced below. These are then rendered in current mathematical notations.[8][9] Madhava's sine series is stated in verses 2.440 and 2.441 inYukti-dipikacommentary (Tantrasamgraha-vyakhya) bySankara Variar. A translation of the verses follows. Multiply the arc by the square of the arc, and take the result of repeating that (any number of times). 
Divide by the squares of the successive even numbers (such that current is multiplied by previous) increased by that number and multiplied by the square of the radius. Place the arc and the successive results so obtained one below the other, and subtract each from the one above. These together give thejiva[sine], as collected together in the verse beginning with "vidvan" etc. Letrdenote the radius of the circle andsthe arc-length. Let θ be the angle subtended by the arcsat the centre of the circle. Thens=r θandjiva=rsinθ. Substituting these in the last expression and simplifying we get which is the infinitepower seriesexpansion of the sine function. The last line in the verse ′as collected together in the verse beginning with "vidvan" etc.′ is a reference to a reformulation of the series introduced by Madhava himself to make it convenient for easy computations for specified values of the arc and the radius. For such a reformulation, Madhava considers a circle one quarter of which measures 5400 minutes (sayCminutes) and develops a scheme for the easy computations of thejiva′s of the various arcs of such a circle. LetRbe the radius of a circle one quarter of which measures C. 
Madhava had already computed the value ofπusing his series formula forπ.[10]Using this value ofπ, namely 3.1415926535922, the radiusRis computed as follows: Then Madhava's expression forjivacorresponding to any arcsof a circle of radiusRis equivalent to the following: Madhava now computes the following values: Thejivacan now be computed using the following scheme: jiva=s−(sC)3[(R(π2)33!)−(sC)2[(R(π2)55!)−(sC)2[(R(π2)77!)−(sC)2[(R(π2)99!)−(sC)2(R(π2)1111!)]]]].{\displaystyle {\text{jiva }}=s-\left({\frac {s}{C}}\right)^{3}\left[\left({\frac {R({\frac {\pi }{2}})^{3}}{3!}}\right)-\left({\frac {s}{C}}\right)^{2}\left[\left({\frac {R({\frac {\pi }{2}})^{5}}{5!}}\right)-\left({\frac {s}{C}}\right)^{2}\left[\left({\frac {R({\frac {\pi }{2}})^{7}}{7!}}\right)-\left({\frac {s}{C}}\right)^{2}\left[\left({\frac {R({\frac {\pi }{2}})^{9}}{9!}}\right)-\left({\frac {s}{C}}\right)^{2}\left({\frac {R({\frac {\pi }{2}})^{11}}{11!}}\right)\right]\right]\right]\right].} This gives an approximation ofjivaby its Taylor polynomial of the 11'th order. It involves one division, six multiplications and five subtractions only. Madhava prescribes this numerically efficient computational scheme in the following words (translation of verse 2.437 inYukti-dipika): vi-dvān, tu-nna-ba-la, ka-vī-śa-ni-ca-ya, sa-rvā-rtha-śī-la-sthi-ro, ni-rvi-ddhā-nga-na-rē-ndra-rung . Successively multiply these five numbers in order by the square of the arc divided by the quarter of the circumference (5400′), and subtract from the next number. (Continue this process with the result so obtained and the next number.) Multiply the final result by the cube of the arc divided by quarter of the circumference and subtract from the arc. Madhava's cosine series is stated in verses 2.442 and 2.443 inYukti-dipikacommentary (Tantrasamgraha-vyakhya) bySankara Variar. A translation of the verses follows. Multiply the square of the arc by the unit (i.e. the radius) and take the result of repeating that (any number of times). 
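In modern terms, the scheme above is a nested (Horner-style) evaluation of the 11th-order Taylor polynomial of R sin. A floating-point sketch (variable names are mine, not Madhava's):

```python
import math

# C is the quadrant arc (5400 minutes), R = C / (pi/2) is the radius, and
# the five stored constants are R*(pi/2)**k / k! for k = 3, 5, 7, 9, 11.
C = 5400.0                       # quarter circumference, in arcminutes
R = C / (math.pi / 2)            # about 3437' 44'' 48'''
COEFFS = [R * (math.pi / 2) ** k / math.factorial(k) for k in (3, 5, 7, 9, 11)]

def jiva(s):
    """R*sin of an arc of s minutes, by the nested 11th-order scheme."""
    u = (s / C) ** 2             # the repeated multiplier (s/C)^2
    acc = COEFFS[-1]
    for a in reversed(COEFFS[:-1]):
        acc = a - u * acc        # "subtract from the next number"
    return s - (s / C) ** 3 * acc

# An arc of 30 degrees is 1800 minutes; jiva should be very close to R/2.
print(jiva(1800.0), R / 2)
```

An analogous nested evaluation with even powers gives the śara (versed sine) described later in the text.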
Divide (each of the above numerators) by the square of the successive even numbers decreased by that number and multiplied by the square of the radius. But the first term is (now)(the one which is) divided by twice the radius. Place the successive results so obtained one below the other and subtract each from the one above. These together give the śara as collected together in the verse beginning with stena, stri, etc. Letrdenote the radius of the circle andsthe arc-length. Letθbe the angle subtended by the arcsat the centre of the circle. Thens=rθandśara=r(1 − cosθ). Substituting these in the last expression and simplifying we get which gives the infinite power series expansion of the cosine function. The last line in the verse ′as collected together in the verse beginning with stena, stri, etc.′ is a reference to a reformulation introduced by Madhava himself to make the series convenient for easy computations for specified values of the arc and the radius. As in the case of the sine series, Madhava considers a circle one quarter of which measures 5400 minutes (sayCminutes) and develops a scheme for the easy computations of theśara′s of the various arcs of such a circle. LetRbe the radius of a circle one quarter of which measures C. Then, as in the case of the sine series, Madhava getsR= 3437′ 44′′ 48′′′. Madhava's expression forśaracorresponding to any arcsof a circle of radiusRis equivalent to the following: Madhava now computes the following values: Theśaracan now be computed using the following scheme: This gives an approximation ofśaraby its Taylor polynomial of the 12'th order. This also involves one division, six multiplications and five subtractions only. Madhava prescribes this numerically efficient computational scheme in the following words (translation of verse 2.438 inYukti-dipika): The six stena, strīpiśuna, sugandhinaganud, bhadrāngabhavyāsana, mīnāngonarasimha, unadhanakrtbhureva. 
Multiply by the square of the arc divided by the quarter of the circumference and subtract from the next number. (Continue with the result and the next number.) The final result will be utkrama-jya (R versed sine). Madhava's arctangent series is stated in verses 2.206 – 2.209 in the Yukti-dipika commentary (Tantrasamgraha-vyakhya) by Sankara Variar. A translation of the verses is given below.[11] Jyesthadeva has also given a description of this series in Yuktibhasa.[12][13][14] Now, by just the same argument, the determination of the arc of a desired sine can be (made). That is as follows: The first result is the product of the desired sine and the radius divided by the cosine of the arc. When one has made the square of the sine the multiplier and the square of the cosine the divisor, now a group of results is to be determined from the (previous) results beginning from the first. When these are divided in order by the odd numbers 1, 3, and so forth, and when one has subtracted the sum of the even(-numbered) results from the sum of the odd (ones), that should be the arc. Here the smaller of the sine and cosine is required to be considered as the desired (sine). Otherwise, there would be no termination of results even if repeatedly (computed). By means of the same argument, the circumference can be computed in another way too. That is as (follows): The first result should be the square root of the square of the diameter multiplied by twelve. From then on, the result should be divided by three (in) each successive (case). When these are divided in order by the odd numbers, beginning with 1, and when one has subtracted the (even) results from the sum of the odd, (that) should be the circumference. Let s be the arc of the desired sine (jya or jiva) y. Let r be the radius and x be the cosine (kotijya). Let θ be the angle subtended by the arc s at the centre of the circle. Then s = rθ, x = kotijya = r cos θ, and y = jya = r sin θ. Then y/x = tan θ.
Substituting these in the last expression and simplifying we get Letting tan θ =qwe finally have The second part of the quoted text specifies another formula for the computation of the circumferencecof a circle having diameterd. This is as follows. Sincec=πdthis can be reformulated as a formula to computeπas follows. This is obtained by substitutingq=1/3{\displaystyle 1/{\sqrt {3}}}(thereforeθ=π/ 6) in the power series expansion for tan−1qabove.
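In modern notation, the resulting formula is π = √12 (1 − 1/(3·3) + 1/(5·3²) − 1/(7·3³) + ⋯), obtained from θ = π/6. A floating-point sketch (the function name is mine):

```python
import math

def madhava_pi(terms):
    """pi = sqrt(12) * sum_{k>=0} (-1)^k / ((2k+1) * 3^k), truncated."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / ((2 * k + 1) * 3 ** k)
    return math.sqrt(12) * total

# Each term shrinks by a factor of about 3, so roughly one decimal digit
# is gained per two terms; 30 terms suffice for full double precision.
print(abs(madhava_pi(30) - math.pi))
```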
https://en.wikipedia.org/wiki/Madhava_series
Madhava's sine tableis thetableoftrigonometric sinesconstructed by the 14th centuryKeralamathematician-astronomerMadhava of Sangamagrama(c. 1340 – c. 1425). The table lists thejya-sor Rsines of the twenty-fouranglesfrom 3.75°to 90° in steps of 3.75° (1/24 of aright angle, 90°). Rsine is just the sine multiplied by a selected radius and given as an integer. In this table, as inAryabhata's earlier table,Ris taken as 21600 ÷ 2π≈ 3437.75. The table isencodedin the letters of theSanskritalphabet using theKatapayadi system, giving entries the appearance of the verses of a poem. Madhava's original work containing the table has not been found. The table is reproduced in theAryabhatiyabhashyaofNilakantha Somayaji[1](1444–1544) and also in theYuktidipika/Laghuvivrticommentary ofTantrasamgrahabySankara Variar(circa. 1500–1560).[2]: 114–123 The verses below are given as inCultural foundations of mathematicsby C.K. Raju.[2]: 114–123They are also given in theMalayalam Commentary ofKaranapaddhatiby P.K. Koru[3]but slightly differently. The verses are: श्रेष्ठं नाम वरिष्ठानां हिमाद्रिर्वेदभावनः ।तपनो भानु सूक्तज्ञो मध्यमं विद्धि दोहनम् ॥ १ ॥धिगाज्यो नाशनं कष्टं छन्नभोगाशयाम्बिका ।मृगाहारो नरेशोयं वीरो रणजयोत्सुकः ॥ २ ॥मूलं विशुद्धं नाळस्य गानेषु विरळा नराः ।अशुद्धिगुप्ता चोरश्रीः शङ्कुकर्णो नगेश्वरः ॥ ३ ॥तनुजो गर्भजो मित्रं श्रीमानत्र सुखी सखे ।शशी रात्रौ हिमाहारौ वेगज्ञः पथि सिन्धुरः ॥ ४ ॥छाया लयो गजो नीलो निर्मलो नास्ति सत्कुले ।रात्रौ दर्पणमभ्राङ्गं नागस्तुङ्गनखो बली ॥ ५ ॥धीरो युवा कथालोलः पूज्यो नारीजनैर्भगः ।कन्यागारे नागवल्ली देवो विश्वस्थली भृगुः ॥ ६ ॥तत्परादिकलान्तास्तु महाज्या माधवोदिताः ।स्वस्वपूर्वविशुद्धे तु शिष्टास्तत्खण्डमौर्विकाः ॥ ७ ॥ The quarters of the first six verses represent entries for the twenty-four angles from 3.75° to 90° in steps of 3.75° (first column). The second column contains the Rsine values encoded as Sanskrit words (in Devanagari). The third column contains the same inISO 15919 transliterations. 
The fourth column contains the numbers decoded into arcminutes, arcseconds, and arcthirds in modern numerals. The fifth column gives, for comparison, the modern values R sin A (with the traditional “radius” R = 21600 ÷ 2π and the modern value of π) expressed in the same form, with two decimals in the arcthirds. The last verse means: “These are the great R-sines as said by Madhava, comprising arcminutes, seconds and thirds. Subtracting from each the previous will give the R-sine-differences.” By comparing, one can note that Madhava's values are accurately given, rounded to the declared precision of thirds, except for Rsin(15°), where one feels he should have rounded up to 889′45″16‴ instead. Note that in the Katapayadi system the digits are written in the reverse order, so for example the literal entry corresponding to 15° is 51549880, which is reversed and then read as 0889′45″15‴. Note that the 0 does not carry a value but is used for the metre of the poem alone. Without going into the philosophy of why the value of R = 21600 ÷ 2π was chosen etc., the simplest way to relate the jya tables to our modern concept of sine tables is as follows: even today sine tables are given as decimals to a certain precision. If sin(15°) is given as 0.2588, it means the rational 2588 ÷ 10000 is a good approximation of the actual infinite-precision number. The only difference is that in the earlier days they had not standardized on decimal values (or powers of ten as denominator) for fractions. Hence they used other denominators based on other considerations (which are not discussed here). Hence the sine values represented in the tables may simply be taken as approximated by the given integer values divided by the R chosen for the table. Another possible confusion point is the usage of angle measures like arcminute etc. in expressing the R-sines. Modern sines are unitless ratios. Jya-s or R-sines are the same multiplied by a measure of length or distance.
However, since these tables were mostly used for astronomy, and distance on the celestial sphere is expressed in angle measures, these values are also given likewise. The unit is not really important and need not be taken too seriously, as the value will anyhow be used as part of a ratio, and the unit will cancel out. This usage does, though, lead to the sexagesimal subdivisions in Madhava's refinement of the earlier table of Aryabhata: instead of choosing a larger R, he gave the extra precision determined by him on top of the earlier given minutes by using seconds and thirds. As before, these may simply be taken as a different way of expressing fractions and not necessarily as angle measures. Consider some angle whose measure is A. Consider a circle of unit radius and center O. Let the arc PQ of the circle subtend an angle A at the center O. Drop the perpendicular QR from Q to OP; then the length of the line segment RQ is the value of the trigonometric sine of the angle A. Let PS be an arc of the circle whose length is equal to the length of the segment RQ. For various angles A, Madhava's table gives the measures of the corresponding angles ∠{\displaystyle \angle }POS in arcminutes, arcseconds and sixtieths of an arcsecond. As an example, let A be an angle whose measure is 22.50°. In Madhava's table, the entry corresponding to 22.50° is the measure in arcminutes, arcseconds and sixtieths of an arcsecond of the angle whose radian measure is the value of sin 22.50°, which is 0.3826834. For an angle whose measure is A, let Then: Each of the lines in the table specifies eight digits. Let the digits corresponding to angle A (read from left to right) be: Then according to the rules of the Katapayadi system they should be taken from right to left and we have: The value of the above angle B expressed in radians will correspond to the sine value of A.
As said earlier, this is the same as dividing the encoded value by the takenRvalue: The table lists the following digits corresponding to the angleA= 45.00°: This yields the angle with measure: From which we get: The value of the sine ofA= 45.00° as given in Madhava's table is then justBconverted to radians: Evaluating the above, one can find that sin 45° is 0.70710681… This is accurate to 6 decimal places. No work of Madhava detailing the methods used by him for the computation of the sine table has survived. However from the writings of later Kerala mathematicians includingNilakantha Somayaji(Tantrasangraha) andJyeshtadeva(Yuktibhāṣā) that give ample references to Madhava's accomplishments, it is conjectured that Madhava computed his sine table using thepower series expansion of sinx:
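The power series referred to is the familiar sin x = x − x³/3! + x⁵/5! − ⋯. As a check of the table's stated precision (a sketch using modern floating point, not Madhava's method), one can express R sin A in arcminutes, arcseconds, and arcthirds:

```python
import math

R = 21600 / (2 * math.pi)   # the traditional radius, about 3437' 44'' 48'''

def rsine_sexagesimal(degrees):
    """R*sin(A) rounded to (arcminutes, arcseconds, arcthirds)."""
    total_thirds = R * math.sin(math.radians(degrees)) * 3600
    minutes, rest = divmod(round(total_thirds), 3600)
    seconds, thirds = divmod(rest, 60)
    return minutes, seconds, thirds

# 15 degrees: the exact value is about 889' 45'' 15.6''', so correct
# rounding gives 16''' where the table reads 15''' -- matching the
# remark about Rsin(15) earlier in this article.
print(rsine_sexagesimal(15))  # → (889, 45, 16)
```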
https://en.wikipedia.org/wiki/Madhava%27s_sine_table
In optics, the optical sine theorem states that the products of the refractive index, the height, and the sine of the slope angle of a ray in object space and of its corresponding ray in image space are equal. That is, n y sin θ = n′ y′ sin θ′, where the unprimed quantities refer to object space and the primed quantities to image space.
https://en.wikipedia.org/wiki/Optical_sine_theorem
Ingeometry, thepolar sinegeneralizes thesine functionofangleto thevertex angleof apolytope. It is denoted bypsin. Letv1, ...,vn(n≥ 1) be non-zeroEuclidean vectorsinn-dimensional space(Rn) that are directed from avertexof aparallelotope, forming the edges of the parallelotope. The polar sine of the vertex angle is: where the numerator is thedeterminant which equals thesigned hypervolumeof the parallelotope with vector edges[1] and where the denominator is then-foldproduct of themagnitudesof the vectors, which equals the hypervolume of then-dimensionalhyperrectanglewith edges equal to the magnitudes of the vectors ||v1||, ||v2||, ... ||vn|| rather than the vectors themselves. Also see Ericksson.[2] The parallelotope is like a "squashed hyperrectangle", so it has less hypervolume than the hyperrectangle, meaning (see image for the 3d case): as for the ordinary sine, with either bound being reached only in the case that all vectors are mutuallyorthogonal. In the casen= 2, the polar sine is the ordinarysineof the angle between the two vectors. A non-negative version of the polar sine that works in anym-dimensional space can be defined using theGram determinant. It is a ratio where the denominator is as described above. The numerator is where the superscript T indicatesmatrix transposition. This can be nonzero only ifm≥n. In the casem=n, this is equivalent to theabsolute valueof the definition given previously. In the degenerate casem<n, the determinant will be of asingularn×nmatrix, givingΩ = 0andpsin = 0, because it is not possible to havenlinearly independent vectors inm-dimensional space whenm<n. The polar sine changes sign whenever two vectors are interchanged, due to the antisymmetry ofrow-exchangingin the determinant; however, its absolute value will remain unchanged. 
The polar sine does not change if all of the vectorsv1, ...,vnarescalar-multipliedby positive constantsci, due tofactorization If anodd numberof these constants are instead negative, then the sign of the polar sine will change; however, its absolute value will remain unchanged. If the vectors are notlinearly independent, the polar sine will be zero. This will always be so in thedegenerate casethat the number of dimensionsmis strictly less than the number of vectorsn. Thecosineof the angle between two non-zero vectors is given by using thedot product. Comparison of this expression to the definition of the absolute value of the polar sine as given above gives: In particular, forn= 2, this is equivalent to which is thePythagorean theorem. Polar sines were investigated byEulerin the 18th century.[3]
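In coordinates, the definition above is directly computable: the signed parallelotope volume is a determinant and the denominator is a product of Euclidean norms. A sketch for n vectors in n-space (helper names are mine):

```python
import math

def det(m):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def polar_sine(*vectors):
    """Signed hypervolume of the parallelotope over the product of edge lengths."""
    norm_product = math.prod(math.hypot(*v) for v in vectors)
    return det([list(v) for v in vectors]) / norm_product

# n = 2 reduces to the ordinary sine of the angle between the vectors:
print(round(polar_sine((1, 0), (1, 1)), 6))         # sin 45 deg, ~0.707107
# mutually orthogonal vectors attain the bound |psin| = 1:
print(polar_sine((2, 0, 0), (0, 3, 0), (0, 0, 4)))  # → 1.0
```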
https://en.wikipedia.org/wiki/Polar_sine
There are several equivalent ways for definingtrigonometric functions, and the proofs of thetrigonometric identitiesbetween them depend on the chosen definition. The oldest and most elementary definitions are based on the geometry ofright trianglesand the ratio between their sides. The proofs given in this article use these definitions, and thus apply to non-negative angles not greater than aright angle. For greater and negativeangles, seeTrigonometric functions. Other definitions, and therefore other proofs are based on theTaylor seriesofsineandcosine, or on thedifferential equationf″+f=0{\displaystyle f''+f=0}to which they are solutions. The six trigonometric functions are defined for everyreal number, except, for some of them, for angles that differ from 0 by a multiple of the right angle (90°). Referring to the diagram at the right, the six trigonometric functions of θ are, for angles smaller than the right angle: In the case of angles smaller than a right angle, the following identities are direct consequences of above definitions through the division identity They remain valid for angles greater than 90° and for negative angles. Or Two angles whose sum is π/2 radians (90 degrees) arecomplementary. In the diagram, the angles at vertices A and B are complementary, so we can exchange a and b, and change θ to π/2 − θ, obtaining: Identity 1: The following two results follow from this and the ratio identities. To obtain the first, divide both sides ofsin2⁡θ+cos2⁡θ=1{\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1}bycos2⁡θ{\displaystyle \cos ^{2}\theta }; for the second, divide bysin2⁡θ{\displaystyle \sin ^{2}\theta }. Similarly Identity 2: The following accounts for all three reciprocal functions. Proof 2: Refer to the triangle diagram above. Note thata2+b2=h2{\displaystyle a^{2}+b^{2}=h^{2}}byPythagorean theorem. Substituting with appropriate functions - Rearranging gives: Draw a horizontal line (thex-axis); mark an origin O. 
Draw a line from O at an angleα{\displaystyle \alpha }above the horizontal line and a second line at an angleβ{\displaystyle \beta }above that; the angle between the second line and thex-axis isα+β.{\displaystyle \alpha +\beta .} Place P on the line defined byα+β{\displaystyle \alpha +\beta }at a unit distance from the origin. Let PQ be a line perpendicular to line OQ defined by angleα{\displaystyle \alpha }, drawn from point Q on this line to point P.∴{\displaystyle \therefore }OQP is a right angle. Let QA be a perpendicular from point A on thex-axis to Q and PB be a perpendicular from point B on thex-axis to P.∴{\displaystyle \therefore }OAQ and OBP are right angles. Draw R on PB so that QR is parallel to thex-axis. Now angleRPQ=α{\displaystyle RPQ=\alpha }(becauseOQA=π2−α{\displaystyle OQA={\frac {\pi }{2}}-\alpha }, makingRQO=α,RQP=π2−α{\displaystyle RQO=\alpha ,RQP={\frac {\pi }{2}}-\alpha }, and finallyRPQ=α{\displaystyle RPQ=\alpha }) By substituting−β{\displaystyle -\beta }forβ{\displaystyle \beta }and using thereflection identitiesofeven and odd functions, we also get: Using the figure above, By substituting−β{\displaystyle -\beta }forβ{\displaystyle \beta }and using thereflection identitiesofeven and odd functions, we also get: Also, using thecomplementary angle formulae, From the sine and cosine formulae, we get Dividing both numerator and denominator bycos⁡αcos⁡β{\displaystyle \cos \alpha \cos \beta }, we get Subtractingβ{\displaystyle \beta }fromα{\displaystyle \alpha }, usingtan⁡(−β)=−tan⁡β{\displaystyle \tan(-\beta )=-\tan \beta }, Similarly, from the sine and cosine formulae, we get Then by dividing both numerator and denominator bysin⁡αsin⁡β{\displaystyle \sin \alpha \sin \beta }, we get Or, usingcot⁡θ=1tan⁡θ{\displaystyle \cot \theta ={\frac {1}{\tan \theta }}}, Usingcot⁡(−β)=−cot⁡β{\displaystyle \cot(-\beta )=-\cot \beta }, From the angle sum identities, we get and The Pythagorean identities give the two alternative forms for the latter of these: 
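The sum formulas obtained from this construction can be verified numerically. A minimal Python sketch (the function name is illustrative) checks the sine, cosine, and tangent addition identities at arbitrary angles:

```python
import math

def check_angle_sum(a, b, tol=1e-12):
    # sin(a + b) = sin a cos b + cos a sin b
    ok_sin = abs(math.sin(a + b)
                 - (math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))) < tol
    # cos(a + b) = cos a cos b - sin a sin b
    ok_cos = abs(math.cos(a + b)
                 - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))) < tol
    # tan(a + b) = (tan a + tan b) / (1 - tan a tan b)
    ta, tb = math.tan(a), math.tan(b)
    ok_tan = abs(math.tan(a + b) - (ta + tb) / (1 - ta * tb)) < tol
    return ok_sin and ok_cos and ok_tan
```

Substituting a negative second angle exercises the difference formulas as well, since sine is odd and cosine is even.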
The angle sum identities also give It can also be proved usingEuler's formula Squaring both sides yields But replacing the angle with its doubled version, which achieves the same result in the left side of the equation, yields It follows that Expanding the square and simplifying on the left hand side of the equation gives Because the imaginary and real parts have to be the same, we are left with the original identities and also The two identities giving the alternative forms for cos 2θ lead to the following equations: The sign of the square root needs to be chosen properly—note that if 2πis added to θ, the quantities inside the square roots are unchanged, but the left-hand-sides of the equations change sign. Therefore, the correct sign to use depends on the value of θ. For the tan function, the equation is: Then multiplying the numerator and denominator inside the square root by (1 + cos θ) and using Pythagorean identities leads to: Also, if the numerator and denominator are both multiplied by (1 - cos θ), the result is: This also gives: Similar manipulations for the cot function give: Ifψ+θ+ϕ=π={\displaystyle \psi +\theta +\phi =\pi =}half circle (for example,ψ{\displaystyle \psi },θ{\displaystyle \theta }andϕ{\displaystyle \phi }are the angles of a triangle), Proof:[1] Ifψ+θ+ϕ=π2={\displaystyle \psi +\theta +\phi ={\tfrac {\pi }{2}}=}quarter circle, Proof: Replace each ofψ{\displaystyle \psi },θ{\displaystyle \theta }, andϕ{\displaystyle \phi }with their complementary angles, so cotangents turn into tangents and vice versa. Given so the result follows from the triple tangent identity. 
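The half-angle square roots require choosing the sign according to where θ/2 falls, and the triple tangent identity can be checked directly. A minimal Python sketch (helper names are illustrative) makes the sign choice explicit via θ mod 4π:

```python
import math

def sin_half(theta):
    # sin(theta/2) = +-sqrt((1 - cos theta)/2);
    # sin(theta/2) >= 0 exactly when theta mod 4*pi lies in [0, 2*pi]
    s = math.sqrt((1 - math.cos(theta)) / 2)
    return s if theta % (4 * math.pi) <= 2 * math.pi else -s

def cos_half(theta):
    # cos(theta/2) = +-sqrt((1 + cos theta)/2);
    # cos(theta/2) >= 0 exactly when theta mod 4*pi lies in [0, pi] or [3*pi, 4*pi]
    c = math.sqrt((1 + math.cos(theta)) / 2)
    t = theta % (4 * math.pi)
    return c if (t <= math.pi or t >= 3 * math.pi) else -c

def tan_half(theta):
    # tan(theta/2) = sin(theta) / (1 + cos(theta)); no sign ambiguity
    return math.sin(theta) / (1 + math.cos(theta))

def triple_tangent_check(psi, theta, tol=1e-9):
    # if psi + theta + phi = pi, then
    # tan psi + tan theta + tan phi = tan psi * tan theta * tan phi
    phi = math.pi - psi - theta
    lhs = math.tan(psi) + math.tan(theta) + math.tan(phi)
    rhs = math.tan(psi) * math.tan(theta) * math.tan(phi)
    return abs(lhs - rhs) < tol
```

Adding 2π to θ leaves the quantities under the square roots unchanged but flips which branch is correct, which is exactly what the modulo test accounts for.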
First, start with the sum-angle identities: By adding these together, Similarly, by subtracting the two sum-angle identities, Letα+β=θ{\displaystyle \alpha +\beta =\theta }andα−β=ϕ{\displaystyle \alpha -\beta =\phi }, Substituteθ{\displaystyle \theta }andϕ{\displaystyle \phi } Therefore, Similarly for cosine, start with the sum-angle identities: Again, by adding and subtracting Substituteθ{\displaystyle \theta }andϕ{\displaystyle \phi }as before, The figure at the right shows a sector of a circle with radius 1. The sector isθ/(2π)of the whole circle, so its area isθ/2. We assume here thatθ<π/2. The area of triangleOADisAB/2, orsin(θ)/2. The area of triangleOCDisCD/2, ortan(θ)/2. Since triangleOADlies completely inside the sector, which in turn lies completely inside triangleOCD, we have This geometric argument relies on definitions ofarc lengthandarea, which act as assumptions, so it is rather a condition imposed in construction oftrigonometric functionsthan a provable property.[2]For the sine function, we can handle other values. Ifθ>π/2, thenθ> 1. Butsinθ≤ 1(because of the Pythagorean identity), sosinθ<θ. So we have For negative values ofθwe have, by the symmetry of the sine function Hence and In other words, the function sine isdifferentiableat 0, and itsderivativeis 1. Proof: From the previous inequalities, we have, for small angles Therefore, Consider the right-hand inequality. Since Multiply through bycos⁡θ{\displaystyle \cos \theta } Combining with the left-hand inequality: Takingcos⁡θ{\displaystyle \cos \theta }to the limit asθ→0{\displaystyle \theta \to 0} Therefore, Proof: The limits of those three quantities are 1, 0, and 1/2, so the resultant limit is zero. Proof: As in the preceding proof, The limits of those three quantities are 1, 1, and 1/2, so the resultant limit is 1/2. All these functions follow from the Pythagorean trigonometric identity. 
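The sum-to-product identities and the three small-angle limits above are easy to confirm numerically. A minimal Python sketch (names illustrative; the sample angle 1e-5 is an arbitrary small value):

```python
import math

def sum_to_product_check(theta, phi, tol=1e-12):
    # sin t + sin p = 2 sin((t+p)/2) cos((t-p)/2)
    ok_sin = abs(math.sin(theta) + math.sin(phi)
                 - 2 * math.sin((theta + phi) / 2) * math.cos((theta - phi) / 2)) < tol
    # cos t + cos p = 2 cos((t+p)/2) cos((t-p)/2)
    ok_cos = abs(math.cos(theta) + math.cos(phi)
                 - 2 * math.cos((theta + phi) / 2) * math.cos((theta - phi) / 2)) < tol
    return ok_sin and ok_cos

# the three limits discussed above, sampled at a small angle
theta = 1e-5
sin_ratio = math.sin(theta) / theta              # tends to 1
cos_diff_ratio = (1 - math.cos(theta)) / theta   # tends to 0
half_ratio = (1 - math.cos(theta)) / theta ** 2  # tends to 1/2
```

Note that evaluating 1 − cos θ directly at tiny θ loses precision to cancellation, which is why the tolerances on the last two ratios are looser than machine epsilon.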
We can prove, for instance, identities for the inverse trigonometric functions. Proof: We start from the Pythagorean identity sin2⁡θ+cos2⁡θ=1.(I){\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1.\qquad {\text{(I)}}} Dividing equation (I) by cos2⁡θ{\displaystyle \cos ^{2}\theta }, then using the substitution θ=arctan⁡(x){\displaystyle \theta =\arctan(x)} together with the identity tan⁡[arctan⁡(x)]≡x{\displaystyle \tan[\arctan(x)]\equiv x}, recovers the initial Pythagorean trigonometric identity expressed in terms of x. Similarly, dividing equation (I) by sin2⁡θ{\displaystyle \sin ^{2}\theta }, applying the substitution θ=arctan⁡(x){\displaystyle \theta =\arctan(x)}, and again using tan⁡[arctan⁡(x)]≡x{\displaystyle \tan[\arctan(x)]\equiv x} yields a second form. Now suppose we wish to prove the statement x=y1−y2.{\displaystyle x={\frac {y}{\sqrt {1-y^{2}}}}.} Substituting (V) into (IV) gives y2=y2{\displaystyle y^{2}=y^{2}}, which is true, so the proposed statement x=y1−y2{\displaystyle x={\frac {y}{\sqrt {1-y^{2}}}}} holds. Since y can now be written in terms of x, this expresses [arcsin] through [arctan]. Similarly, if we seek [arccos⁡(x)]{\displaystyle [\arccos(x)]}, we can start from [arcsin⁡(x)]{\displaystyle [\arcsin(x)]}, and finally [arccos] is expressed through [arctan].
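The resulting relations can be stated explicitly and checked numerically: for |y| < 1, arcsin y = arctan(y/√(1 − y²)), and arccos x = π/2 − arcsin x. A minimal Python sketch (function names are illustrative):

```python
import math

def arcsin_via_arctan(y):
    # arcsin(y) = arctan(y / sqrt(1 - y^2)), valid for |y| < 1
    return math.atan(y / math.sqrt(1 - y * y))

def arccos_via_arctan(x):
    # arccos(x) = pi/2 - arcsin(x) = pi/2 - arctan(x / sqrt(1 - x^2))
    return math.pi / 2 - math.atan(x / math.sqrt(1 - x * x))
```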
https://en.wikipedia.org/wiki/Proofs_of_trigonometric_identities
Inmathematics,physicsandengineering, thesinc function(/ˈsɪŋk/SINK), denoted bysinc(x), has two forms, normalized and unnormalized.[1] In mathematics, the historicalunnormalized sinc functionis defined forx≠ 0bysinc⁡(x)=sin⁡xx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin x}{x}}.} Alternatively, the unnormalized sinc function is often called thesampling function, indicated as Sa(x).[2] Indigital signal processingandinformation theory, thenormalized sinc functionis commonly defined forx≠ 0bysinc⁡(x)=sin⁡(πx)πx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}.} In either case, the value atx= 0is defined to be the limiting valuesinc⁡(0):=limx→0sin⁡(ax)ax=1{\displaystyle \operatorname {sinc} (0):=\lim _{x\to 0}{\frac {\sin(ax)}{ax}}=1}for all reala≠ 0(the limit can be proven using thesqueeze theorem). Thenormalizationcauses thedefinite integralof the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value ofπ). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values ofx. The normalized sinc function is theFourier transformof therectangular functionwith no scaling. It is used in the concept ofreconstructinga continuous bandlimited signal from uniformly spacedsamplesof that signal. The only difference between the two definitions is in the scaling of theindependent variable(thexaxis) by a factor ofπ. In both cases, the value of the function at theremovable singularityat zero is understood to be the limit value 1. The sinc function is thenanalyticeverywhere and hence anentire function. The function has also been called thecardinal sineorsine cardinalfunction.[3][4]The termsincwas introduced byPhilip M. 
Woodwardin his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own",[5]and his 1953 bookProbability and Information Theory, with Applications to Radar.[6][7]The function itself was first mathematically derived in this form byLord Rayleighin his expression (Rayleigh's formula) for the zeroth-order sphericalBessel functionof the first kind. Thezero crossingsof the unnormalized sinc are at non-zero integer multiples ofπ, while zero crossings of the normalized sinc occur at non-zero integers. The local maxima and minima of the unnormalized sinc correspond to its intersections with thecosinefunction. That is,⁠sin(ξ)/ξ⁠= cos(ξ)for all pointsξwhere the derivative of⁠sin(x)/x⁠is zero and thus a local extremum is reached. This follows from the derivative of the sinc function:ddxsinc⁡(x)={cos⁡(x)−sinc⁡(x)x,x≠00,x=0.{\displaystyle {\frac {d}{dx}}\operatorname {sinc} (x)={\begin{cases}{\dfrac {\cos(x)-\operatorname {sinc} (x)}{x}},&x\neq 0\\0,&x=0\end{cases}}.} The first few terms of the infinite series for thexcoordinate of then-th extremum with positivexcoordinate are[citation needed]xn=q−q−1−23q−3−1315q−5−146105q−7−⋯,{\displaystyle x_{n}=q-q^{-1}-{\frac {2}{3}}q^{-3}-{\frac {13}{15}}q^{-5}-{\frac {146}{105}}q^{-7}-\cdots ,}whereq=(n+12)π,{\displaystyle q=\left(n+{\frac {1}{2}}\right)\pi ,}and where oddnlead to a local minimum, and evennto a local maximum. Because of symmetry around theyaxis, there exist extrema withxcoordinates−xn. In addition, there is an absolute maximum atξ0= (0, 1). 
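Both normalizations, the limiting value at zero, and the extremum series above can be checked directly. A minimal Python sketch (function names are illustrative); at every extremum the derivative (cos x − sinc x)/x vanishes, which is equivalent to tan x = x:

```python
import math

def sinc(x):
    # unnormalized sinc: sin(x)/x, with sinc(0) = 1 (the limiting value)
    return 1.0 if x == 0 else math.sin(x) / x

def nsinc(x):
    # normalized sinc: sin(pi x)/(pi x); zero at every nonzero integer
    return sinc(math.pi * x)

def extremum(n):
    # truncated asymptotic series for the n-th positive extremum of sin(x)/x,
    # in terms of q = (n + 1/2) * pi
    q = (n + 0.5) * math.pi
    return q - q**-1 - (2 / 3) * q**-3 - (13 / 15) * q**-5 - (146 / 105) * q**-7
```

Already for n = 1 the series lands within a few millionths of the true first extremum near x ≈ 4.4934.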
The normalized sinc function has a simple representation as theinfinite product:sin⁡(πx)πx=∏n=1∞(1−x2n2){\displaystyle {\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right)} and is related to thegamma functionΓ(x)throughEuler's reflection formula:sin⁡(πx)πx=1Γ(1+x)Γ(1−x).{\displaystyle {\frac {\sin(\pi x)}{\pi x}}={\frac {1}{\Gamma (1+x)\Gamma (1-x)}}.} Eulerdiscovered[8]thatsin⁡(x)x=∏n=1∞cos⁡(x2n),{\displaystyle {\frac {\sin(x)}{x}}=\prod _{n=1}^{\infty }\cos \left({\frac {x}{2^{n}}}\right),}and because of the product-to-sum identity[9] ∏n=1kcos⁡(x2n)=12k−1∑n=12k−1cos⁡(n−1/22k−1x),∀k≥1,{\displaystyle \prod _{n=1}^{k}\cos \left({\frac {x}{2^{n}}}\right)={\frac {1}{2^{k-1}}}\sum _{n=1}^{2^{k-1}}\cos \left({\frac {n-1/2}{2^{k-1}}}x\right),\quad \forall k\geq 1,}Euler's product can be recast as a sumsin⁡(x)x=limN→∞1N∑n=1Ncos⁡(n−1/2Nx).{\displaystyle {\frac {\sin(x)}{x}}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{n=1}^{N}\cos \left({\frac {n-1/2}{N}}x\right).} Thecontinuous Fourier transformof the normalized sinc (to ordinary frequency) isrect(f):∫−∞∞sinc⁡(t)e−i2πftdt=rect⁡(f),{\displaystyle \int _{-\infty }^{\infty }\operatorname {sinc} (t)\,e^{-i2\pi ft}\,dt=\operatorname {rect} (f),}where therectangular functionis 1 for argument between −⁠1/2⁠and⁠1/2⁠, and zero otherwise. This corresponds to the fact that thesinc filteris the ideal (brick-wall, meaning rectangularfrequency response)low-pass filter. 
This Fourier integral, including the special case∫−∞∞sin⁡(πx)πxdx=rect⁡(0)=1{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin(\pi x)}{\pi x}}\,dx=\operatorname {rect} (0)=1}is animproper integral(seeDirichlet integral) and not a convergentLebesgue integral, as∫−∞∞|sin⁡(πx)πx|dx=+∞.{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {\sin(\pi x)}{\pi x}}\right|\,dx=+\infty .} The normalized sinc function has properties that make it ideal in relationship tointerpolationofsampledbandlimitedfunctions: Other properties of the two sinc functions include: The normalized sinc function can be used as anascent delta function, meaning that the followingweak limitholds: lima→0sin⁡(πxa)πx=lima→01asinc⁡(xa)=δ(x).{\displaystyle \lim _{a\to 0}{\frac {\sin \left({\frac {\pi x}{a}}\right)}{\pi x}}=\lim _{a\to 0}{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)=\delta (x).} This is not an ordinary limit, since the left side does not converge. Rather, it means that lima→0∫−∞∞1asinc⁡(xa)φ(x)dx=φ(0){\displaystyle \lim _{a\to 0}\int _{-\infty }^{\infty }{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)\varphi (x)\,dx=\varphi (0)} for everySchwartz function, as can be seen from theFourier inversion theorem. In the above expression, asa→ 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of±⁠1/πx⁠, regardless of the value ofa. This complicates the informal picture ofδ(x)as being zero for allxexcept at the pointx= 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in theGibbs phenomenon. 
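The weak limit can be illustrated numerically by pairing the scaled sinc against a smooth test function and shrinking a. The sketch below is a rough trapezoidal computation, not a rigorous one; the Gaussian test function, the cutoff ±8, and the step count are arbitrary choices:

```python
import math

def delta_pair(a, phi, lo=-8.0, hi=8.0, steps=16000):
    # trapezoidal approximation of  integral (1/a) nsinc(x/a) phi(x) dx,
    # which should approach phi(0) as a -> 0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        t = math.pi * x / a
        kernel = 1.0 / a if t == 0 else math.sin(t) / (math.pi * x)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * kernel * phi(x) * h
    return total

val = delta_pair(0.05, lambda x: math.exp(-x * x))
```

Even at a = 0.05 the pairing is already close to φ(0) = 1, despite the integrand oscillating rapidly inside its ±1/(πx) envelope.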
We can also make an immediate connection with the standard Dirac representation ofδ(x){\displaystyle \delta (x)}by writingb=1/a{\displaystyle b=1/a}and limb→∞sin⁡(bπx)πx=limb→∞12π∫−bπbπeikxdk=12π∫−∞∞eikxdk=δ(x),{\displaystyle \lim _{b\to \infty }{\frac {\sin \left(b\pi x\right)}{\pi x}}=\lim _{b\to \infty }{\frac {1}{2\pi }}\int _{-b\pi }^{b\pi }e^{ikx}dk={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ikx}dk=\delta (x),} which makes clear the recovery of the delta as an infinite bandwidth limit of the integral. All sums in this section refer to the unnormalized sinc function. The sum ofsinc(n)over integernfrom 1 to∞equals⁠π− 1/2⁠: ∑n=1∞sinc⁡(n)=sinc⁡(1)+sinc⁡(2)+sinc⁡(3)+sinc⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} (n)=\operatorname {sinc} (1)+\operatorname {sinc} (2)+\operatorname {sinc} (3)+\operatorname {sinc} (4)+\cdots ={\frac {\pi -1}{2}}.} The sum of the squares also equals⁠π− 1/2⁠:[10][11] ∑n=1∞sinc2⁡(n)=sinc2⁡(1)+sinc2⁡(2)+sinc2⁡(3)+sinc2⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)+\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)+\operatorname {sinc} ^{2}(4)+\cdots ={\frac {\pi -1}{2}}.} When the signs of theaddendsalternate and begin with +, the sum equals⁠1/2⁠:∑n=1∞(−1)n+1sinc⁡(n)=sinc⁡(1)−sinc⁡(2)+sinc⁡(3)−sinc⁡(4)+⋯=12.{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} (n)=\operatorname {sinc} (1)-\operatorname {sinc} (2)+\operatorname {sinc} (3)-\operatorname {sinc} (4)+\cdots ={\frac {1}{2}}.} The alternating sums of the squares and cubes also equal⁠1/2⁠:[12]∑n=1∞(−1)n+1sinc2⁡(n)=sinc2⁡(1)−sinc2⁡(2)+sinc2⁡(3)−sinc2⁡(4)+⋯=12,{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)-\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)-\operatorname {sinc} ^{2}(4)+\cdots ={\frac {1}{2}},} ∑n=1∞(−1)n+1sinc3⁡(n)=sinc3⁡(1)−sinc3⁡(2)+sinc3⁡(3)−sinc3⁡(4)+⋯=12.{\displaystyle \sum 
_{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{3}(n)=\operatorname {sinc} ^{3}(1)-\operatorname {sinc} ^{3}(2)+\operatorname {sinc} ^{3}(3)-\operatorname {sinc} ^{3}(4)+\cdots ={\frac {1}{2}}.} TheTaylor seriesof the unnormalizedsincfunction can be obtained from that of the sine (which also yields its value of 1 atx= 0):sin⁡xx=∑n=0∞(−1)nx2n(2n+1)!=1−x23!+x45!−x67!+⋯{\displaystyle {\frac {\sin x}{x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n+1)!}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots } The series converges for allx. The normalized version follows easily:sin⁡πxπx=1−π2x23!+π4x45!−π6x67!+⋯{\displaystyle {\frac {\sin \pi x}{\pi x}}=1-{\frac {\pi ^{2}x^{2}}{3!}}+{\frac {\pi ^{4}x^{4}}{5!}}-{\frac {\pi ^{6}x^{6}}{7!}}+\cdots } Eulerfamously compared this series to the expansion of the infinite product form to solve theBasel problem. The product of 1-D sinc functions readily provides amultivariatesinc function for the square Cartesian grid (lattice):sincC(x,y) = sinc(x) sinc(y), whoseFourier transformis theindicator functionof a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesianlattice(e.g.,hexagonal lattice) is a function whoseFourier transformis theindicator functionof theBrillouin zoneof that lattice. For example, the sinc function for the hexagonal lattice is a function whoseFourier transformis theindicator functionof the unit hexagon in the frequency space. For a non-Cartesian lattice this function can not be obtained by a simple tensor product. However, the explicit formula for the sinc function for thehexagonal,body-centered cubic,face-centered cubicand other higher-dimensional lattices can be explicitly derived[13]using the geometric properties of Brillouin zones and their connection tozonotopes. 
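The sums, the Taylor series, and the product representations given earlier are all straightforward to check numerically. A minimal Python sketch (truncation counts are arbitrary choices, not part of the formulas; all sums use the unnormalized sinc):

```python
import math

def sinc(x):
    # unnormalized sinc, with the removable singularity filled in
    return 1.0 if x == 0 else math.sin(x) / x

# partial sums of sinc(n), sinc^2(n), and the alternating sum
N = 200000
s1 = sum(sinc(n) for n in range(1, N + 1))                    # -> (pi - 1)/2
s2 = sum(sinc(n) ** 2 for n in range(1, N + 1))               # -> (pi - 1)/2
s3 = sum((-1) ** (n + 1) * sinc(n) for n in range(1, N + 1))  # -> 1/2

def sinc_taylor(x, terms=30):
    # sin(x)/x = sum_{n>=0} (-1)^n x^(2n) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n + 1)
               for n in range(terms))

def nsinc_product(x, factors=200000):
    # partial product of sin(pi x)/(pi x) = prod (1 - x^2/n^2); converges slowly
    p = 1.0
    for n in range(1, factors + 1):
        p *= 1.0 - (x * x) / (n * n)
    return p
```

The first sum converges only conditionally, so the partial sums approach (π − 1)/2 slowly, while the Taylor series reaches machine precision after a handful of terms.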
For example, ahexagonal latticecan be generated by the (integer)linear spanof the vectorsu1=[1232]andu2=[12−32].{\displaystyle \mathbf {u} _{1}={\begin{bmatrix}{\frac {1}{2}}\\{\frac {\sqrt {3}}{2}}\end{bmatrix}}\quad {\text{and}}\quad \mathbf {u} _{2}={\begin{bmatrix}{\frac {1}{2}}\\-{\frac {\sqrt {3}}{2}}\end{bmatrix}}.} Denotingξ1=23u1,ξ2=23u2,ξ3=−23(u1+u2),x=[xy],{\displaystyle {\boldsymbol {\xi }}_{1}={\tfrac {2}{3}}\mathbf {u} _{1},\quad {\boldsymbol {\xi }}_{2}={\tfrac {2}{3}}\mathbf {u} _{2},\quad {\boldsymbol {\xi }}_{3}=-{\tfrac {2}{3}}(\mathbf {u} _{1}+\mathbf {u} _{2}),\quad \mathbf {x} ={\begin{bmatrix}x\\y\end{bmatrix}},}one can derive[13]the sinc function for this hexagonal lattice assincH⁡(x)=13(cos⁡(πξ1⋅x)sinc⁡(ξ2⋅x)sinc⁡(ξ3⋅x)+cos⁡(πξ2⋅x)sinc⁡(ξ3⋅x)sinc⁡(ξ1⋅x)+cos⁡(πξ3⋅x)sinc⁡(ξ1⋅x)sinc⁡(ξ2⋅x)).{\displaystyle {\begin{aligned}\operatorname {sinc} _{\text{H}}(\mathbf {x} )={\tfrac {1}{3}}{\big (}&\cos \left(\pi {\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right){\big )}.\end{aligned}}} This construction can be used to designLanczos windowfor general multidimensional lattices.[13] Some authors, by analogy, define the hyperbolic sine cardinal function.[14][15][16]
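The hexagonal-lattice formula can be transcribed directly. A small Python sketch (names illustrative); useful sanity checks are that sincH equals 1 at the origin, is an even function, and vanishes at the nonzero lattice points:

```python
import math

def nsinc(x):
    # normalized sinc
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# lattice generators u1, u2 and frequency vectors xi1, xi2, xi3 from the text
u1 = (0.5, math.sqrt(3) / 2)
u2 = (0.5, -math.sqrt(3) / 2)
xi1 = (2 * u1[0] / 3, 2 * u1[1] / 3)
xi2 = (2 * u2[0] / 3, 2 * u2[1] / 3)
xi3 = (-2 * (u1[0] + u2[0]) / 3, -2 * (u1[1] + u2[1]) / 3)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sinc_hex(x):
    # transcription of the three-term hexagonal sinc formula above
    cyc = [(xi1, xi2, xi3), (xi2, xi3, xi1), (xi3, xi1, xi2)]
    return sum(math.cos(math.pi * dot(a, x)) * nsinc(dot(b, x)) * nsinc(dot(c, x))
               for a, b, c in cyc) / 3.0
```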
https://en.wikipedia.org/wiki/Sinc_function
Inmathematics, the Fouriersine and cosine transformsareintegral equationsthat decompose arbitrary functions into a sum ofsine wavesrepresenting theodd componentof the function plus cosine waves representing the even component of the function. The modernFourier transformconciselycontainsboth the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead ofcomplex exponentialsand don't requirecomplex numbersornegative frequency, they more closely correspond toJoseph Fourier's original transform equations and are still preferred in somesignal processingandstatisticsapplications and may be better suited as an introduction toFourier analysis. TheFourier sine transformoff(t){\displaystyle f(t)}is:[note 1] f^s(ξ)=∫−∞∞f(t)sin⁡(2πξt)dt.{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt.} Ift{\displaystyle t}meanstime, thenξ{\displaystyle \xi }isfrequencyin cycles per unit time,[note 2]but in the abstract, they can be any dual pair of variables (e.g.positionandspatial frequency). The sine transform is necessarily anodd functionof frequency, i.e. for allξ{\displaystyle \xi }: f^s(−ξ)=−f^s(ξ).{\displaystyle {\hat {f}}^{s}(-\xi )=-{\hat {f}}^{s}(\xi ).} TheFourier cosine transformoff(t){\displaystyle f(t)}is:[note 3] f^c(ξ)=∫−∞∞f(t)cos⁡(2πξt)dt.{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt.} The cosine transform is necessarily aneven functionof frequency, i.e. for allξ{\displaystyle \xi }: f^c(−ξ)=f^c(ξ).{\displaystyle {\hat {f}}^{c}(-\xi )={\hat {f}}^{c}(\xi ).} Themultiplication rules for even and odd functionsshown in the overbraces in the following equations dramatically simplify the integrands when transformingeven and odd functions. Some authors[1]even only define the cosine transform for even functionsfeven(t){\displaystyle f_{\text{even}}(t)}. 
Since cosine is an even function and because the integral of an even function from−∞{\displaystyle {-}\infty }to∞{\displaystyle \infty }is twice its integralfrom0{\displaystyle 0}to∞{\displaystyle \infty }, the cosine transform of any even function can be simplified to avoid negativet{\displaystyle t}: f^c(ξ)=∫−∞∞feven(t)⋅cos⁡(2πξt)⏞even·even=evendt=2∫0∞feven(t)cos⁡(2πξt)dt.{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \cos(2\pi \xi t)} ^{\text{even·even=even}}\,dt=2\int _{0}^{\infty }f_{\text{even}}(t)\cos(2\pi \xi t)\,dt.} And because the integral from−∞{\displaystyle {-}\infty }to∞{\displaystyle \infty }ofany odd function is zero, the cosine transform of any odd function is simply zero: f^c(ξ)=∫−∞∞fodd(t)⋅cos⁡(2πξt)⏞odd·even=odddt=0.{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \cos(2\pi \xi t)} ^{\text{odd·even=odd}}\,dt=0.} Similarly, because sin is odd, the sine transform of any odd functionfodd(t){\displaystyle f_{\text{odd}}(t)}also simplifies to avoid negativet{\displaystyle t}: f^s(ξ)=∫−∞∞fodd(t)⋅sin⁡(2πξt)⏞odd·odd=evendt=2∫0∞fodd(t)sin⁡(2πξt)dt{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \sin(2\pi \xi t)} ^{\text{odd·odd=even}}\,dt=2\int _{0}^{\infty }f_{\text{odd}}(t)\sin(2\pi \xi t)\,dt} and the sine transform of any even function is simply zero: f^s(ξ)=∫−∞∞feven(t)⋅sin⁡(2πξt)⏞even·odd=odddt=0.{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \sin(2\pi \xi t)} ^{\text{even·odd=odd}}\,dt=0.} The sine transform represents theodd part of a function, while the cosine transform represents the even part of a function. 
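As an illustration of the even-function simplification, the Gaussian e^(−πt²) is its own cosine transform (a standard transform pair), and its sine transform vanishes. A trapezoidal-rule sketch (the cutoff and step count are arbitrary choices):

```python
import math

def cosine_transform_even(f, xi, hi=6.0, steps=6000):
    # f_c(xi) = 2 * integral from 0 to infinity of f(t) cos(2 pi xi t) dt,
    # valid when f is even; trapezoidal rule truncated at t = hi
    h = hi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f(t) * math.cos(2 * math.pi * xi * t) * h
    return 2 * total

gauss = lambda t: math.exp(-math.pi * t * t)
```

Evaluating at a few frequencies reproduces e^(−πξ²) to high accuracy, since the integrand is smooth and decays fast enough that the truncation at t = 6 is negligible.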
Just like the Fourier transform takes the form of different equations with different constant factors (seeFourier transform § Unitarity and definition for square integrable functionsfor discussion), other authors also define the cosine transform as[2]f^c(ξ)=2π∫0∞f(t)cos⁡(2πξt)dt{\displaystyle {\hat {f}}^{c}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\cos(2\pi \xi t)\,dt}and the sine transform asf^s(ξ)=2π∫0∞f(t)sin⁡(2πξt)dt.{\displaystyle {\hat {f}}^{s}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\sin(2\pi \xi t)\,dt.}Another convention defines the cosine transform as[3]Fc(α)=2π∫0∞f(x)cos⁡(αx)dx{\displaystyle F_{c}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\cos(\alpha x)\,dx}and the sine transform asFs(α)=2π∫0∞f(x)sin⁡(αx)dx{\displaystyle F_{s}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\sin(\alpha x)\,dx}usingα{\displaystyle \alpha }as the transformation variable. And whilet{\displaystyle t}is typically used to represent the time domain,x{\displaystyle x}is often instead used to represent a spatial domain when transforming to spatial frequencies. 
The original functionf{\displaystyle f}can be recovered from its sine and cosine transforms under the usual hypotheses[note 4]using the inversion formula:[4] f(t)=∫−∞∞f^s(ξ)sin⁡(2πξt)dξ⏟odd component off(t)+∫−∞∞f^c(ξ)cos⁡(2πξt)dξ⏟even component off(t).{\displaystyle f(t)=\underbrace {\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi } _{{\text{odd component of }}f(t)}\,+\underbrace {\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi } _{{\text{even component of }}f(t)}\,.} Note that since both integrands are even functions ofξ{\displaystyle \xi }, the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies: f(t)=2∫0∞f^s(ξ)sin⁡(2πξt)dξ+2∫0∞f^c(ξ)cos⁡(2πξt)dξ.{\displaystyle f(t)=2\int _{0}^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi \,+2\int _{0}^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi \,.} Also, iff{\displaystyle f}is anodd function, then the cosine transform is zero, so its inversion simplifies to:f(t)=∫−∞∞f^s(ξ)sin⁡(2πξt)dξ,only iff(t)is odd.{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is odd.}}} Likewise, if the original functionf{\displaystyle f}is aneven function, then the sine transform is zero, so its inversion also simplifies to: f(t)=∫−∞∞f^c(ξ)cos⁡(2πξt)dξ,only iff(t)is even.{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is even.}}} Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though witht{\displaystyle t}swapped withξ{\displaystyle \xi }(and withf{\displaystyle f}swapped withf^s{\displaystyle {\hat {f}}^{s}}orf^c{\displaystyle {\hat {f}}^{c}}). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. 
Two such functions are called transform pairs.[note 5] Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula:[5][6]f(t)=∫−∞∞∫−∞∞f(x)cos⁡(2πξ(x−t))dxdξ.{\displaystyle f(t)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .}This theorem is often stated under different hypotheses: that f{\displaystyle f} is integrable, and is of bounded variation on an open interval containing the point t{\displaystyle t}, in which case12limh→0(f(t+h)+f(t−h))=2∫0∞∫−∞∞f(x)cos⁡(2πξ(x−t))dxdξ.{\displaystyle {\tfrac {1}{2}}\lim _{h\to 0}\left(f(t+h)+f(t-h)\right)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .} This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a factor e−δξ{\displaystyle e^{-\delta \xi }} into the integral, where δ>0{\displaystyle \delta >0} is fixed. Then2∫−∞∞∫0∞e−δξcos⁡(2πξ(x−t))dξf(x)dx=∫−∞∞f(x)2δδ2+4π2(x−t)2dx.{\displaystyle 2\int _{-\infty }^{\infty }\int _{0}^{\infty }e^{-\delta \xi }\cos(2\pi \xi (x-t))\,d\xi \,f(x)\,dx=\int _{-\infty }^{\infty }f(x){\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx.}Now when δ→0{\displaystyle \delta \to 0}, the integrand tends to zero except at x=t{\displaystyle x=t}, so that formally the above isf(t)∫−∞∞2δδ2+4π2(x−t)2dx=f(t).{\displaystyle f(t)\int _{-\infty }^{\infty }{\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx=f(t).} The complex exponential form of the Fourier transform used more often today is[7]f^(ξ)=∫−∞∞f(t)e−2πiξtdt{\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(t)e^{-2\pi i\xi t}\,dt}where i{\displaystyle i} is the square root of negative one.
By applyingEuler's formula(eix=cos⁡x+isin⁡x),{\textstyle (e^{ix}=\cos x+i\sin x),}it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function):[8]f^(ξ)=∫−∞∞f(t)(cos⁡(2πξt)−isin⁡(2πξt))dtEuler's Formula=(∫−∞∞f(t)cos⁡(2πξt)dt)−i(∫−∞∞f(t)sin⁡(2πξt)dt)=f^c(ξ)−if^s(ξ).{\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)\left(\cos(2\pi \xi t)-i\,\sin(2\pi \xi t)\right)dt&&{\text{Euler's Formula}}\\&=\left(\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt\right)-i\left(\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt\right)\\&={\hat {f}}^{c}(\xi )-i\,{\hat {f}}^{s}(\xi )\,.\end{aligned}}}Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. inFourier transform § Tables of important Fourier transforms) can be simply found by taking the real part of the Fourier transform:f^c(ξ)=Re[f^(ξ)]{\displaystyle {\hat {f}}^{c}(\xi )=\mathrm {Re} {[\;{\hat {f}}(\xi )\;]}}while the sine transform is simply thenegativeof the imaginary part of the Fourier transform:f^s(ξ)=−Im[f^(ξ)].{\displaystyle {\hat {f}}^{s}(\xi )=-\mathrm {Im} {[\;{\hat {f}}(\xi )\;]}\,.} An advantage of the modern Fourier transform is that while the sine and cosine transforms together are required to extract thephaseinformation of a frequency, the modern Fourier transform instead compactly packs both phaseandamplitude information inside its complex valued result. But a disadvantage is its requirement on understanding complex numbers, complex exponentials, and negative frequency. The sine and cosine transforms meanwhile have the advantage that all quantities are real. 
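The real/imaginary relationship can be checked on a concrete function. For the one-sided exponential f(t) = e^(−t) for t ≥ 0 (and 0 otherwise), the Fourier transform is 1/(1 + 2πiξ), so the cosine transform should equal its real part 1/(1 + (2πξ)²) and the sine transform the negated imaginary part (2πξ)/(1 + (2πξ)²). A trapezoidal sketch (cutoff and step count are arbitrary choices):

```python
import math

def transforms_one_sided_exp(xi, hi=40.0, steps=80000):
    # numeric cosine and sine transforms of f(t) = exp(-t) for t >= 0 (else 0),
    # by the trapezoidal rule truncated at t = hi
    h = hi / steps
    c = s = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        f = math.exp(-t)
        c += w * f * math.cos(2 * math.pi * xi * t) * h
        s += w * f * math.sin(2 * math.pi * xi * t) * h
    return c, s

xi = 0.25
a = 2 * math.pi * xi
c, s = transforms_one_sided_exp(xi)
```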
Since positive frequencies can fully express the sine and cosine transforms, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. They may also be convenient when the original function is already even or odd, or can be made even or odd, in which case only the cosine or the sine transform, respectively, is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input, while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform. Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned. Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals.[9] This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of positive and negative terms which are summed.
https://en.wikipedia.org/wiki/Sine_and_cosine_transforms
Inmathematics,trigonometric integralsare afamilyofnonelementary integralsinvolvingtrigonometric functions. The differentsineintegral definitions areSi⁡(x)=∫0xsin⁡ttdt{\displaystyle \operatorname {Si} (x)=\int _{0}^{x}{\frac {\sin t}{t}}\,dt}si⁡(x)=−∫x∞sin⁡ttdt.{\displaystyle \operatorname {si} (x)=-\int _{x}^{\infty }{\frac {\sin t}{t}}\,dt~.} Note that the integrandsin⁡(t)t{\displaystyle {\frac {\sin(t)}{t}}}is thesinc function, and also the zerothspherical Bessel function. Sincesincis anevenentire function(holomorphicover the entire complex plane),Siis entire, odd, and the integral in its definition can be taken alongany pathconnecting the endpoints. By definition,Si(x)is theantiderivativeofsinx/xwhose value is zero atx= 0, andsi(x)is the antiderivative whose value is zero atx= ∞. Their difference is given by theDirichlet integral,Si⁡(x)−si⁡(x)=∫0∞sin⁡ttdt=π2orSi⁡(x)=π2+si⁡(x).{\displaystyle \operatorname {Si} (x)-\operatorname {si} (x)=\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt={\frac {\pi }{2}}\quad {\text{ or }}\quad \operatorname {Si} (x)={\frac {\pi }{2}}+\operatorname {si} (x)~.} Insignal processing, the oscillations of the sine integral causeovershootandringing artifactswhen using thesinc filter, andfrequency domainringing if using a truncated sinc filter as alow-pass filter. Related is theGibbs phenomenon: If the sine integral is considered as theconvolutionof the sinc function with theHeaviside step function, this corresponds to truncating theFourier series, which is the cause of the Gibbs phenomenon. The differentcosineintegral definitions areCin⁡(x)≡∫0x1−cos⁡ttd⁡t.{\displaystyle \operatorname {Cin} (x)~\equiv ~\int _{0}^{x}{\frac {\ 1-\cos t\ }{t}}\ \operatorname {d} t~.} Cinis aneven,entire function. For that reason, some texts defineCinas the primary function, and deriveCiin terms ofCin . 
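A quick numerical check of these definitions, assuming SciPy is available (`scipy.special.sici` returns the pair (Si(x), Ci(x))): Si(x) agrees with direct quadrature of sin(t)/t, si(x) = Si(x) − π/2, and Si(x) tends to the Dirichlet value π/2 for large x.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

x = 2.0
Si_x, Ci_x = sici(x)                        # scipy returns (Si(x), Ci(x))

# Si(x) as the integral of sin(t)/t from 0 to x; sinc is entire, so t = 0 is harmless.
# np.sinc(u) = sin(pi u)/(pi u), hence np.sinc(t/pi) = sin(t)/t.
Si_quad, _ = quad(lambda t: np.sinc(t / np.pi), 0, x)

si_x = Si_x - np.pi / 2                     # si(x) = Si(x) - pi/2

Si_large = sici(1e8)[0]                     # -> pi/2 up to O(1/x) oscillation
```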
Ci⁡(x)≡−∫x∞cos⁡ttd⁡t{\displaystyle \operatorname {Ci} (x)~~\equiv ~-\int _{x}^{\infty }{\frac {\ \cos t\ }{t}}\ \operatorname {d} t~}=γ+ln⁡x−∫0x1−cos⁡ttd⁡t{\displaystyle ~~\qquad ~=~~\gamma ~+~\ln x~-~\int _{0}^{x}{\frac {\ 1-\cos t\ }{t}}\ \operatorname {d} t~} =γ+ln⁡x−Cin⁡x{\displaystyle ~~\qquad ~=~~\gamma ~+~\ln x~-~\operatorname {Cin} x~}for|Arg⁡(x)|<π,{\displaystyle ~{\Bigl |}\ \operatorname {Arg} (x)\ {\Bigr |}<\pi \ ,}whereγ≈ 0.57721566490 ...is theEuler–Mascheroni constant. Some texts useciinstead ofCi. The restriction onArg(x)is to avoid a discontinuity (shown as the orange vs blue area on the left half of theplot above) that arises because of abranch cutin the standardlogarithm function(ln). Ci(x)is the antiderivative of⁠cosx/x⁠(which vanishes asx→∞{\displaystyle \ x\to \infty \ }). The two definitions are related byCi⁡(x)=γ+ln⁡x−Cin⁡(x).{\displaystyle \operatorname {Ci} (x)=\gamma +\ln x-\operatorname {Cin} (x)~.} Thehyperbolic sineintegral is defined asShi⁡(x)=∫0xsinh⁡(t)tdt.{\displaystyle \operatorname {Shi} (x)=\int _{0}^{x}{\frac {\sinh(t)}{t}}\,dt.} It is related to the ordinary sine integral bySi⁡(ix)=iShi⁡(x).{\displaystyle \operatorname {Si} (ix)=i\operatorname {Shi} (x).} Thehyperbolic cosineintegral is Chi⁡(x)=γ+ln⁡x+∫0xcosh⁡t−1tdtfor|Arg⁡(x)|<π,{\displaystyle \operatorname {Chi} (x)=\gamma +\ln x+\int _{0}^{x}{\frac {\cosh t-1}{t}}\,dt\qquad ~{\text{ for }}~\left|\operatorname {Arg} (x)\right|<\pi ~,}whereγ{\displaystyle \gamma }is theEuler–Mascheroni constant. 
It has the series expansionChi⁡(x)=γ+ln⁡(x)+x24+x496+x64320+x8322560+x1036288000+O(x12).{\displaystyle \operatorname {Chi} (x)=\gamma +\ln(x)+{\frac {x^{2}}{4}}+{\frac {x^{4}}{96}}+{\frac {x^{6}}{4320}}+{\frac {x^{8}}{322560}}+{\frac {x^{10}}{36288000}}+O(x^{12}).} Trigonometric integrals can be understood in terms of the so-called "auxiliary functions"f(x)≡∫0∞sin⁡(t)t+xdt=∫0∞e−xtt2+1dt=Ci⁡(x)sin⁡(x)+[π2−Si⁡(x)]cos⁡(x),g(x)≡∫0∞cos⁡(t)t+xdt=∫0∞te−xtt2+1dt=−Ci⁡(x)cos⁡(x)+[π2−Si⁡(x)]sin⁡(x).{\displaystyle {\begin{array}{rcl}f(x)&\equiv &\int _{0}^{\infty }{\frac {\sin(t)}{t+x}}\,dt&=&\int _{0}^{\infty }{\frac {e^{-xt}}{t^{2}+1}}\,dt&=&\operatorname {Ci} (x)\sin(x)+\left[{\frac {\pi }{2}}-\operatorname {Si} (x)\right]\cos(x)~,\\g(x)&\equiv &\int _{0}^{\infty }{\frac {\cos(t)}{t+x}}\,dt&=&\int _{0}^{\infty }{\frac {te^{-xt}}{t^{2}+1}}\,dt&=&-\operatorname {Ci} (x)\cos(x)+\left[{\frac {\pi }{2}}-\operatorname {Si} (x)\right]\sin(x)~.\end{array}}}Using these functions, the trigonometric integrals may be re-expressed as (cf. Abramowitz & Stegun,p. 232)π2−Si⁡(x)=−si⁡(x)=f(x)cos⁡(x)+g(x)sin⁡(x),andCi⁡(x)=f(x)sin⁡(x)−g(x)cos⁡(x).{\displaystyle {\begin{array}{rcl}{\frac {\pi }{2}}-\operatorname {Si} (x)=-\operatorname {si} (x)&=&f(x)\cos(x)+g(x)\sin(x)~,\qquad {\text{ and }}\\\operatorname {Ci} (x)&=&f(x)\sin(x)-g(x)\cos(x)~.\\\end{array}}} Thespiralformed by parametric plot ofsi, ciis known as Nielsen's spiral.x(t)=a×ci⁡(t){\displaystyle x(t)=a\times \operatorname {ci} (t)}y(t)=a×si⁡(t){\displaystyle y(t)=a\times \operatorname {si} (t)} The spiral is closely related to theFresnel integralsand theEuler spiral. Nielsen's spiral has applications in vision processing, road and track construction and other areas.[1] Various expansions can be used for evaluation of trigonometric integrals, depending on the range of the argument. 
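The integral representations of the auxiliary functions above are absolutely convergent, so they can be verified against the stated closed forms with ordinary quadrature. A sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

x = 3.0
Si_x, Ci_x = sici(x)

# f(x) as the absolutely convergent integral of exp(-x t)/(t^2 + 1) over [0, inf)
f_int, _ = quad(lambda t: np.exp(-x * t) / (t ** 2 + 1), 0, np.inf)
# closed form from the identities above
f_closed = Ci_x * np.sin(x) + (np.pi / 2 - Si_x) * np.cos(x)

# g(x) as the integral of t exp(-x t)/(t^2 + 1) over [0, inf)
g_int, _ = quad(lambda t: t * np.exp(-x * t) / (t ** 2 + 1), 0, np.inf)
g_closed = -Ci_x * np.cos(x) + (np.pi / 2 - Si_x) * np.sin(x)
```

The exponentially decaying integrands are well-conditioned, unlike the oscillatory forms ∫ sin(t)/(t+x) dt and ∫ cos(t)/(t+x) dt, which is precisely why f and g are useful for evaluation.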
For large arguments, the asymptotic expansions are{\displaystyle \operatorname {Si} (x)\sim {\frac {\pi }{2}}-{\frac {\cos x}{x}}\left(1-{\frac {2!}{x^{2}}}+{\frac {4!}{x^{4}}}-{\frac {6!}{x^{6}}}\cdots \right)-{\frac {\sin x}{x}}\left({\frac {1}{x}}-{\frac {3!}{x^{3}}}+{\frac {5!}{x^{5}}}-{\frac {7!}{x^{7}}}\cdots \right)}{\displaystyle \operatorname {Ci} (x)\sim {\frac {\sin x}{x}}\left(1-{\frac {2!}{x^{2}}}+{\frac {4!}{x^{4}}}-{\frac {6!}{x^{6}}}\cdots \right)-{\frac {\cos x}{x}}\left({\frac {1}{x}}-{\frac {3!}{x^{3}}}+{\frac {5!}{x^{5}}}-{\frac {7!}{x^{7}}}\cdots \right)~.} These series are asymptotic and divergent, although they can be used for estimates and even precise evaluation at ℜ(x) ≫ 1. The convergent Taylor series are{\displaystyle \operatorname {Si} (x)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)(2n+1)!}}=x-{\frac {x^{3}}{3!\cdot 3}}+{\frac {x^{5}}{5!\cdot 5}}-{\frac {x^{7}}{7!\cdot 7}}\pm \cdots }{\displaystyle \operatorname {Ci} (x)=\gamma +\ln x+\sum _{n=1}^{\infty }{\frac {(-1)^{n}x^{2n}}{2n(2n)!}}=\gamma +\ln x-{\frac {x^{2}}{2!\cdot 2}}+{\frac {x^{4}}{4!\cdot 4}}\mp \cdots } These series converge at any complex x, although for |x| ≫ 1 they converge slowly initially, requiring many terms for high precision.
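The convergent series are straightforward to evaluate directly for small arguments. A minimal sketch, using SciPy only for reference values; the term count of 20 is an arbitrary choice that is ample for |x| of a few units:

```python
import numpy as np
from math import factorial
from scipy.special import sici

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def si_series(x, terms=20):
    """Convergent series Si(x) = sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1)(2n+1)!)."""
    return sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * factorial(2 * n + 1))
               for n in range(terms))

def ci_series(x, terms=20):
    """Convergent series Ci(x) = gamma + ln x + sum_{n>=1} (-1)^n x^(2n) / (2n (2n)!)."""
    return GAMMA + np.log(x) + sum((-1) ** n * x ** (2 * n) / (2 * n * factorial(2 * n))
                                   for n in range(1, terms))
```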
From the Maclaurin series expansion of sine:{\displaystyle \sin \,x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+{\frac {x^{9}}{9!}}-{\frac {x^{11}}{11!}}+\cdots }{\displaystyle {\frac {\sin \,x}{x}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+{\frac {x^{8}}{9!}}-{\frac {x^{10}}{11!}}+\cdots }{\displaystyle \therefore \int {\frac {\sin \,x}{x}}dx=x-{\frac {x^{3}}{3!\cdot 3}}+{\frac {x^{5}}{5!\cdot 5}}-{\frac {x^{7}}{7!\cdot 7}}+{\frac {x^{9}}{9!\cdot 9}}-{\frac {x^{11}}{11!\cdot 11}}+\cdots } The function{\displaystyle \operatorname {E} _{1}(z)=\int _{1}^{\infty }{\frac {\exp(-zt)}{t}}\,dt\qquad ~{\text{ for }}~\Re (z)\geq 0}is called theexponential integral. It is closely related toSiandCi:{\displaystyle \operatorname {E} _{1}(ix)=i\left(-{\frac {\pi }{2}}+\operatorname {Si} (x)\right)-\operatorname {Ci} (x)=i\operatorname {si} (x)-\operatorname {ci} (x)\qquad ~{\text{ for }}~x>0~.} As each respective function is analytic except for the cut at negative values of the argument, the domain of validity of the relation can be extended accordingly. (Outside this range, additional terms which are integer multiples of π appear in the expression.)
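The stated relation between E1 and the trigonometric integrals can be checked numerically for x > 0, assuming SciPy's `exp1` (which accepts complex arguments):

```python
import numpy as np
from scipy.special import exp1, sici

x = 2.0
Si_x, Ci_x = sici(x)

# E1(ix) = i(Si(x) - pi/2) - Ci(x) = i si(x) - ci(x), for x > 0
lhs = exp1(1j * x)
rhs = 1j * (Si_x - np.pi / 2) - Ci_x
```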
Cases of imaginary argument of the generalized integro-exponential function are∫1∞cos⁡(ax)ln⁡xxdx=−π224+γ(γ2+ln⁡a)+ln2⁡a2+∑n≥1(−a2)n(2n)!(2n)2,{\displaystyle \int _{1}^{\infty }\cos(ax){\frac {\ln x}{x}}\,dx=-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a\right)+{\frac {\ln ^{2}a}{2}}+\sum _{n\geq 1}{\frac {(-a^{2})^{n}}{(2n)!(2n)^{2}}}~,}which is the real part of∫1∞eiaxln⁡xxdx=−π224+γ(γ2+ln⁡a)+ln2⁡a2−π2i(γ+ln⁡a)+∑n≥1(ia)nn!n2.{\displaystyle \int _{1}^{\infty }e^{iax}{\frac {\ln x}{x}}\,dx=-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a\right)+{\frac {\ln ^{2}a}{2}}-{\frac {\pi }{2}}i\left(\gamma +\ln a\right)+\sum _{n\geq 1}{\frac {(ia)^{n}}{n!n^{2}}}~.} Similarly∫1∞eiaxln⁡xx2dx=1+ia[−π224+γ(γ2+ln⁡a−1)+ln2⁡a2−ln⁡a+1]+πa2(γ+ln⁡a−1)+∑n≥1(ia)n+1(n+1)!n2.{\displaystyle \int _{1}^{\infty }e^{iax}{\frac {\ln x}{x^{2}}}\,dx=1+ia\left[-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a-1\right)+{\frac {\ln ^{2}a}{2}}-\ln a+1\right]+{\frac {\pi a}{2}}{\Bigl (}\gamma +\ln a-1{\Bigr )}+\sum _{n\geq 1}{\frac {(ia)^{n+1}}{(n+1)!n^{2}}}~.} Padé approximantsof the convergent Taylor series provide an efficient way to evaluate the functions for small arguments. The following formulae, given by Rowe et al. 
(2015),[2]are accurate to better than10−16for0 ≤x≤ 4,Si⁡(x)≈x⋅(1−4.54393409816329991⋅10−2⋅x2+1.15457225751016682⋅10−3⋅x4−1.41018536821330254⋅10−5⋅x6+9.43280809438713025⋅10−8⋅x8−3.53201978997168357⋅10−10⋅x10+7.08240282274875911⋅10−13⋅x12−6.05338212010422477⋅10−16⋅x141+1.01162145739225565⋅10−2⋅x2+4.99175116169755106⋅10−5⋅x4+1.55654986308745614⋅10−7⋅x6+3.28067571055789734⋅10−10⋅x8+4.5049097575386581⋅10−13⋅x10+3.21107051193712168⋅10−16⋅x12)Ci⁡(x)≈γ+ln⁡(x)+x2⋅(−0.25+7.51851524438898291⋅10−3⋅x2−1.27528342240267686⋅10−4⋅x4+1.05297363846239184⋅10−6⋅x6−4.68889508144848019⋅10−9⋅x8+1.06480802891189243⋅10−11⋅x10−9.93728488857585407⋅10−15⋅x121+1.1592605689110735⋅10−2⋅x2+6.72126800814254432⋅10−5⋅x4+2.55533277086129636⋅10−7⋅x6+6.97071295760958946⋅10−10⋅x8+1.38536352772778619⋅10−12⋅x10+1.89106054713059759⋅10−15⋅x12+1.39759616731376855⋅10−18⋅x14){\displaystyle {\begin{array}{rcl}\operatorname {Si} (x)&\approx &x\cdot \left({\frac {\begin{array}{l}1-4.54393409816329991\cdot 10^{-2}\cdot x^{2}+1.15457225751016682\cdot 10^{-3}\cdot x^{4}-1.41018536821330254\cdot 10^{-5}\cdot x^{6}\\~~~+9.43280809438713025\cdot 10^{-8}\cdot x^{8}-3.53201978997168357\cdot 10^{-10}\cdot x^{10}+7.08240282274875911\cdot 10^{-13}\cdot x^{12}\\~~~-6.05338212010422477\cdot 10^{-16}\cdot x^{14}\end{array}}{\begin{array}{l}1+1.01162145739225565\cdot 10^{-2}\cdot x^{2}+4.99175116169755106\cdot 10^{-5}\cdot x^{4}+1.55654986308745614\cdot 10^{-7}\cdot x^{6}\\~~~+3.28067571055789734\cdot 10^{-10}\cdot x^{8}+4.5049097575386581\cdot 10^{-13}\cdot x^{10}+3.21107051193712168\cdot 10^{-16}\cdot x^{12}\end{array}}}\right)\\&~&\\\operatorname {Ci} (x)&\approx &\gamma +\ln(x)+\\&&x^{2}\cdot \left({\frac {\begin{array}{l}-0.25+7.51851524438898291\cdot 10^{-3}\cdot x^{2}-1.27528342240267686\cdot 10^{-4}\cdot x^{4}+1.05297363846239184\cdot 10^{-6}\cdot x^{6}\\~~~-4.68889508144848019\cdot 10^{-9}\cdot x^{8}+1.06480802891189243\cdot 10^{-11}\cdot x^{10}-9.93728488857585407\cdot 10^{-15}\cdot 
x^{12}\\\end{array}}{\begin{array}{l}1+1.1592605689110735\cdot 10^{-2}\cdot x^{2}+6.72126800814254432\cdot 10^{-5}\cdot x^{4}+2.55533277086129636\cdot 10^{-7}\cdot x^{6}\\~~~+6.97071295760958946\cdot 10^{-10}\cdot x^{8}+1.38536352772778619\cdot 10^{-12}\cdot x^{10}+1.89106054713059759\cdot 10^{-15}\cdot x^{12}\\~~~+1.39759616731376855\cdot 10^{-18}\cdot x^{14}\\\end{array}}}\right)\end{array}}} The integrals may be evaluated indirectly viaauxiliary functionsf(x){\displaystyle f(x)}andg(x){\displaystyle g(x)}, which are defined by Forx≥4{\displaystyle x\geq 4}thePadé rational functionsgiven below approximatef(x){\displaystyle f(x)}andg(x){\displaystyle g(x)}with error less than 10−16:[2] f(x)≈1x⋅(1+7.44437068161936700618⋅102⋅x−2+1.96396372895146869801⋅105⋅x−4+2.37750310125431834034⋅107⋅x−6+1.43073403821274636888⋅109⋅x−8+4.33736238870432522765⋅1010⋅x−10+6.40533830574022022911⋅1011⋅x−12+4.20968180571076940208⋅1012⋅x−14+1.00795182980368574617⋅1013⋅x−16+4.94816688199951963482⋅1012⋅x−18−4.94701168645415959931⋅1011⋅x−201+7.46437068161927678031⋅102⋅x−2+1.97865247031583951450⋅105⋅x−4+2.41535670165126845144⋅107⋅x−6+1.47478952192985464958⋅109⋅x−8+4.58595115847765779830⋅1010⋅x−10+7.08501308149515401563⋅1011⋅x−12+5.06084464593475076774⋅1012⋅x−14+1.43468549171581016479⋅1013⋅x−16+1.11535493509914254097⋅1013⋅x−18)g(x)≈1x2⋅(1+8.1359520115168615⋅102⋅x−2+2.35239181626478200⋅105⋅x−4+3.12557570795778731⋅107⋅x−6+2.06297595146763354⋅109⋅x−8+6.83052205423625007⋅1010⋅x−10+1.09049528450362786⋅1012⋅x−12+7.57664583257834349⋅1012⋅x−14+1.81004487464664575⋅1013⋅x−16+6.43291613143049485⋅1012⋅x−18−1.36517137670871689⋅1012⋅x−201+8.19595201151451564⋅102⋅x−2+2.40036752835578777⋅105⋅x−4+3.26026661647090822⋅107⋅x−6+2.23355543278099360⋅109⋅x−8+7.87465017341829930⋅1010⋅x−10+1.39866710696414565⋅1012⋅x−12+1.17164723371736605⋅1013⋅x−14+4.01839087307656620⋅1013⋅x−16+3.99653257887490811⋅1013⋅x−18){\displaystyle {\begin{array}{rcl}f(x)&\approx &{\dfrac {1}{x}}\cdot \left({\frac 
{\begin{array}{l}1+7.44437068161936700618\cdot 10^{2}\cdot x^{-2}+1.96396372895146869801\cdot 10^{5}\cdot x^{-4}+2.37750310125431834034\cdot 10^{7}\cdot x^{-6}\\~~~+1.43073403821274636888\cdot 10^{9}\cdot x^{-8}+4.33736238870432522765\cdot 10^{10}\cdot x^{-10}+6.40533830574022022911\cdot 10^{11}\cdot x^{-12}\\~~~+4.20968180571076940208\cdot 10^{12}\cdot x^{-14}+1.00795182980368574617\cdot 10^{13}\cdot x^{-16}+4.94816688199951963482\cdot 10^{12}\cdot x^{-18}\\~~~-4.94701168645415959931\cdot 10^{11}\cdot x^{-20}\end{array}}{\begin{array}{l}1+7.46437068161927678031\cdot 10^{2}\cdot x^{-2}+1.97865247031583951450\cdot 10^{5}\cdot x^{-4}+2.41535670165126845144\cdot 10^{7}\cdot x^{-6}\\~~~+1.47478952192985464958\cdot 10^{9}\cdot x^{-8}+4.58595115847765779830\cdot 10^{10}\cdot x^{-10}+7.08501308149515401563\cdot 10^{11}\cdot x^{-12}\\~~~+5.06084464593475076774\cdot 10^{12}\cdot x^{-14}+1.43468549171581016479\cdot 10^{13}\cdot x^{-16}+1.11535493509914254097\cdot 10^{13}\cdot x^{-18}\end{array}}}\right)\\&&\\g(x)&\approx &{\dfrac {1}{x^{2}}}\cdot \left({\frac {\begin{array}{l}1+8.1359520115168615\cdot 10^{2}\cdot x^{-2}+2.35239181626478200\cdot 10^{5}\cdot x^{-4}+3.12557570795778731\cdot 10^{7}\cdot x^{-6}\\~~~+2.06297595146763354\cdot 10^{9}\cdot x^{-8}+6.83052205423625007\cdot 10^{10}\cdot x^{-10}+1.09049528450362786\cdot 10^{12}\cdot x^{-12}\\~~~+7.57664583257834349\cdot 10^{12}\cdot x^{-14}+1.81004487464664575\cdot 10^{13}\cdot x^{-16}+6.43291613143049485\cdot 10^{12}\cdot x^{-18}\\~~~-1.36517137670871689\cdot 10^{12}\cdot x^{-20}\end{array}}{\begin{array}{l}1+8.19595201151451564\cdot 10^{2}\cdot x^{-2}+2.40036752835578777\cdot 10^{5}\cdot x^{-4}+3.26026661647090822\cdot 10^{7}\cdot x^{-6}\\~~~+2.23355543278099360\cdot 10^{9}\cdot x^{-8}+7.87465017341829930\cdot 10^{10}\cdot x^{-10}+1.39866710696414565\cdot 10^{12}\cdot x^{-12}\\~~~+1.17164723371736605\cdot 10^{13}\cdot x^{-14}+4.01839087307656620\cdot 10^{13}\cdot x^{-16}+3.99653257887490811\cdot 10^{13}\cdot 
x^{-18}\end{array}}}\right)\\\end{array}}}
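The Si approximant quoted above can be transcribed directly; the sketch below evaluates the numerator and denominator polynomials in x² by Horner's rule. The coefficients are copied from the displayed formula, so any transcription slip would show up against a reference value such as Si(1) ≈ 0.946083070367183:

```python
# Rowe et al. (2015) Pade form Si(x) ~ x * P(x^2)/Q(x^2), valid for 0 <= x <= 4.
_SI_NUM = (1.0,
           -4.54393409816329991e-2,
           1.15457225751016682e-3,
           -1.41018536821330254e-5,
           9.43280809438713025e-8,
           -3.53201978997168357e-10,
           7.08240282274875911e-13,
           -6.05338212010422477e-16)
_SI_DEN = (1.0,
           1.01162145739225565e-2,
           4.99175116169755106e-5,
           1.55654986308745614e-7,
           3.28067571055789734e-10,
           4.5049097575386581e-13,
           3.21107051193712168e-16)

def si_pade(x):
    """Evaluate the small-argument Pade approximant of Si(x) via Horner's rule."""
    z = x * x
    p = 0.0
    for c in reversed(_SI_NUM):
        p = p * z + c
    q = 0.0
    for c in reversed(_SI_DEN):
        q = q * z + c
    return x * p / q
```

The Ci approximant and the large-argument f, g approximants have the same structure and can be transcribed the same way.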
https://en.wikipedia.org/wiki/Sine_integral
Asine quadrant(Arabic:الربع المجيب‎,romanized:rub‘ul mujayyab), sometimes known as a "sinecal quadrant", was a type ofquadrantused by medievalArabic astronomers. Theinstrumentcould be used to measurecelestial angles, tell time, find directions, performtrigonometriccomputations, and determine the apparent positions of any celestial object for any time. The name is derived from the Arabicrubmeaning 'a quarter' andmujayyabmeaning 'marked with sine'.[1] The sine quadrant was described byMuhammad ibn Mūsā al-Khwārizmīin 9th-century Baghdad, and was used throughout the medieval Islamic period to determine the proper times forIslamic prayer.[2]These instruments, with poor angular resolution, were not principally intended to function with stars at night as an astronomical measuring device.[citation needed]It is impractical to sight a star through the front aperture unless it is on a fixed, stabilized mount relative to the half degree width of a very intense Sun. The instrument is a quarter of a circle made of wood or metal (usuallybrass) divided on itsarcside into 90 equal parts or degrees. The 90 divisions are gathered in 18 groups of five degrees each and are generally numbered both ways from the ends of the arc. That is, one set of numbers begins at the left end of the arc and counts up to 90 at the right end, while the other starts on the right and the 90 is at the left. This double numbering enables the instrument to measure eithercelestial altitudeorzenith distanceor both simultaneously. At the apex—where the two graduated straight sides of the grid pattern meet in a right angle—is a thin cord strung through a pin hole and weighted with a small bead. The cord is called akhaitand is used as aplumb linewhen measuring celestial altitudes. It is also used to indicate angles when doing calculations with the instrument; the sliding bead facilitates trigonometric calculations. 
This plumb line serves two functions: first, it indicates the angular orientation of the instrument, and second, it ensures the instrument is parallel to the vertical plane (perpendicular with the ground) when optically aligned with the target. Traditionally, the line from the beginning of the arc to the apex is called thejaibsand the line from the end of the arc to the apex is called thejaib tamams. Because the arc is numbered in both directions, these labels are not attached to one straight side or the other, but are instead relative to the measurement or calculation being performed. Like the arc, both thejaibsandjaib tamamsare divided into 60 equal units gathered in groups of five, numbered in both directions to and from the apex. The sixty lines parallel to thejaibsare calledsitheeniysorsixtys, and the sixty lines parallel to thejaib tamamsare calledjuyoobul mabsootah. The reason for sixty divisions along thejaibsandjaib tamamsis that the instrument uses thesexagesimalnumber system. It is graduated to the number base 60 and not to the base 10 (decimal system) presently used. Time, angular measurement, and geographical coordinate measurements are about the only holdovers from the Sumerian/Babylonian number system that are still used. On one of the straight edges of the non-maritime quadrant (solid sheet form) are two alignment plates calledhadafatani, each with a small centralaperture(pinhole). These two apertures form an optical axis through which the user sights an inclined object, such as a star at night. The maritime (navigation) version of these devices is skeletal in design rather than a solid sheet form, so as to limit buffeting or movement of the instrument from wind while in the operator's hand. During the day, the Sun'saltitudecan be determined by aligning the apertures such that sunlight passes through both and projects a bright illuminated dot onto a surface (such as the user's finger or the screen plate of a mariner'sbackstaff). 
The apertures are not viewing holes for sighting the Sun with the naked eye. The second aperture also attenuates (darkens) the incoming sunlight by masking anyannulus-shaped sunlight reflecting off the metal first aperture. This is similar to anirisin a camera lens reducing the light intensity. Typically, the instrument is orientated such that the user faces looking slightly down upon the scale, with the Sun at the user's left and the right hand placed in such a way that a finger functions as a projection screen. When the apertures are optically aligned with the Sun, the user reads the angular measurement of the point where the graduated arc is bisected by the hanging plumb line. A misconception by non-astronomers and non-navigators is that using the instrument requires two people: one to take the sight and one to read the plumb line's angular position.[citation needed]Actually, when measuring the Sun's altitude, the instrument is held flush (face on) and below eye level by a single user, meaning they can read the cord's angular position on the face of the instrument. However, it does help to have another person to write down the scale readings as they are taken; the device cannot be held sufficiently stable (retaining the optical alignment) with just one hand.
https://en.wikipedia.org/wiki/Sine_quadrant
Asine wave,sinusoidal wave, orsinusoid(symbol:∿) is aperiodic wavewhosewaveform(shape) is thetrigonometricsine function. Inmechanics, as a linearmotionover time, this issimple harmonic motion; asrotation, it corresponds touniform circular motion. Sine waves occur often inphysics, includingwind waves,soundwaves, andlightwaves, such asmonochromatic radiation. Inengineering,signal processing, andmathematics,Fourier analysisdecomposes general functions into a sum of sine waves of various frequencies, relative phases, and magnitudes. When any two sine waves of the samefrequency(but arbitraryphase) arelinearly combined, the result is another sine wave of the same frequency; this property is unique among periodic waves. Conversely, if some phase is chosen as a zero reference, a sine wave of arbitrary phase can be written as the linear combination of two sine waves with phases of zero and a quarter cycle, thesineandcosinecomponents, respectively. A sine wave represents a singlefrequencywith noharmonicsand is considered anacousticallypure tone. Adding sine waves of different frequencies results in a different waveform. Presence of higher harmonics in addition to thefundamentalcauses variation in thetimbre, which is the reason why the samemusical pitchplayed on different instruments sounds different. Sine waves of arbitrary phase and amplitude are calledsinusoidsand have the general form:[1]{\displaystyle y(t)=A\sin(\omega t+\varphi )=A\sin(2\pi ft+\varphi )}where A is the amplitude, ω = 2πf is the angular frequency (f being the ordinary frequency and t the time), and φ is the phase. Sinusoids that exist in both position and time also have a spatial variable x and a wavenumber k (equivalently, a wavelength λ = 2π/k). Depending on their direction of travel, they can take the form A sin(kx − ωt + φ) or A sin(kx + ωt + φ). Since sine waves propagate without changing form indistributed linear systems,[definition needed]they are often used to analyzewave propagation. When two waves with the sameamplitudeandfrequencytraveling in opposite directionssuperposeeach other, astanding wavepattern is created.
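The closure property under linear combination can be made concrete with phasors: writing each sinusoid as the imaginary part of A·e^(iφ)·e^(iωt), the sum of the two complex phasors gives the amplitude and phase of the resulting single sinusoid. A sketch assuming NumPy:

```python
import numpy as np

# Two sinusoids with the same frequency but arbitrary amplitudes and phases
A1, phi1 = 1.5, 0.3
A2, phi2 = 0.8, -1.1
omega = 2 * np.pi * 5.0

# Sum of the phasors A e^{i phi} determines the combined sinusoid
P = A1 * np.exp(1j * phi1) + A2 * np.exp(1j * phi2)
A, phi = np.abs(P), np.angle(P)

t = np.linspace(0.0, 1.0, 1000)
combined = A1 * np.sin(omega * t + phi1) + A2 * np.sin(omega * t + phi2)
single = A * np.sin(omega * t + phi)     # identical waveform, one sinusoid
```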
On a plucked string, the superimposing waves are the waves reflected from the fixed endpoints of the string. The string'sresonantfrequencies are the string's only possible standing waves, which only occur for wavelengths that are twice the string's length (corresponding to thefundamental frequency) and integer divisions of that (corresponding to higher harmonics). The earlier equation gives the displacementy{\displaystyle y}of the wave at a positionx{\displaystyle x}at timet{\displaystyle t}along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travellingplane waveif positionx{\displaystyle x}and wavenumberk{\displaystyle k}are interpreted as vectors, and their product as adot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. French mathematicianJoseph Fourierdiscovered that sinusoidal waves can be summed as simple building blocks to approximate any periodic waveform, includingsquare waves. TheseFourier seriesare frequently used insignal processingand the statistical analysis oftime series. TheFourier transformthen extended Fourier series to handle general functions, and birthed the field ofFourier analysis. Differentiatingany sinusoid with respect to time can be viewed as multiplying its amplitude by its angular frequency and advancing it by a quarter cycle: ddt[Asin⁡(ωt+φ)]=Aωcos⁡(ωt+φ)=Aωsin⁡(ωt+φ+π2).{\displaystyle {\begin{aligned}{\frac {d}{dt}}[A\sin(\omega t+\varphi )]&=A\omega \cos(\omega t+\varphi )\\&=A\omega \sin(\omega t+\varphi +{\tfrac {\pi }{2}})\,.\end{aligned}}} Adifferentiatorhas azeroat the origin of thecomplex frequencyplane. 
Thegainof itsfrequency responseincreases at a rate of +20dBperdecadeof frequency (forroot-powerquantities), the same positive slope as a first-orderhigh-pass filter'sstopband, although a differentiator doesn't have acutoff frequencyor a flatpassband. An nth-order high-pass filter approximately applies the nth time derivative ofsignalswhose frequency band is significantly lower than the filter's cutoff frequency. Integratingany sinusoid with respect to time can be viewed as dividing its amplitude by its angular frequency and delaying it a quarter cycle: {\displaystyle {\begin{aligned}\int A\sin(\omega t+\varphi )dt&=-{\frac {A}{\omega }}\cos(\omega t+\varphi )+C\\&=-{\frac {A}{\omega }}\sin(\omega t+\varphi +{\tfrac {\pi }{2}})+C\\&={\frac {A}{\omega }}\sin(\omega t+\varphi -{\tfrac {\pi }{2}})+C\,.\end{aligned}}} Theconstant of integrationC{\displaystyle C}will be zero if the interval of integration is an integer multiple of the sinusoid's period. Anintegratorhas apoleat the origin of the complex frequency plane. The gain of its frequency response falls off at a rate of -20 dB per decade of frequency (for root-power quantities), the same negative slope as a first-orderlow-pass filter's stopband, although an integrator doesn't have a cutoff frequency or a flat passband. An nth-order low-pass filter approximately performs the nth time integral of signals whose frequency band is significantly higher than the filter's cutoff frequency.
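The quarter-cycle identities used above (cos θ = sin(θ + π/2) and −cos θ = sin(θ − π/2)) can be confirmed numerically. A sketch assuming NumPy:

```python
import numpy as np

A, omega, phi = 2.0, 3.0, 0.4
t = np.linspace(0.0, 10.0, 1000)

# Differentiation: amplitude scaled by omega, waveform advanced a quarter cycle
deriv = A * omega * np.cos(omega * t + phi)
deriv_shifted = A * omega * np.sin(omega * t + phi + np.pi / 2)

# Integration: amplitude divided by omega, waveform delayed a quarter cycle (C = 0 here)
integ = -(A / omega) * np.cos(omega * t + phi)
integ_shifted = (A / omega) * np.sin(omega * t + phi - np.pi / 2)
```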
https://en.wikipedia.org/wiki/Sine_wave
Thesine-Gordon equationis a second-ordernonlinear partial differential equationfor a functionφ{\displaystyle \varphi }dependent on two variables typically denotedx{\displaystyle x}andt{\displaystyle t}, involving thewave operatorand thesineofφ{\displaystyle \varphi }. It was originally introduced byEdmond Bour(1862) in the course of study ofsurfaces of constant negative curvatureas theGauss–Codazzi equationfor surfaces of constantGaussian curvature−1 in3-dimensional space.[1]The equation was rediscovered by Frenkel and Kontorova (1939) in their study ofcrystal dislocationsknown as theFrenkel–Kontorova model.[2] This equation attracted a lot of attention in the 1970s due to the presence ofsolitonsolutions,[3]and is an example of anintegrable PDE. Among well-known integrable PDEs, the sine-Gordon equation is the onlyrelativisticsystem due to itsLorentz invariance. There are two equivalent forms of the sine-Gordon equation. In the (real)space-time coordinates, denoted(x,t){\displaystyle (x,t)}, the equation reads:[4]{\displaystyle \varphi _{tt}-\varphi _{xx}+\sin \varphi =0,}where partial derivatives are denoted by subscripts. Passing to thelight-cone coordinates(u,v), akin toasymptotic coordinates, the equation takes the form[5]{\displaystyle \varphi _{uv}=\sin \varphi .} This is the original form of the sine-Gordon equation, as it was considered in the 19th century in the course of investigation ofsurfacesof constantGaussian curvatureK= −1, also calledpseudospherical surfaces. Consider an arbitrary pseudospherical surface. Through every point on the surface there are twoasymptotic curves. This allows us to construct a distinguished coordinate system for such a surface, in whichu= constant,v= constant are the asymptotic lines, and the coordinates are incremented by thearc lengthon the surface. At every point on the surface, letφ{\displaystyle \varphi }be the angle between the asymptotic lines.
Thefirst fundamental formof the surface is{\displaystyle ds^{2}=du^{2}+2\cos \varphi \,du\,dv+dv^{2},}the secondfundamental formisL=N=0,M=sin⁡φ{\displaystyle L=N=0,M=\sin \varphi }and theGauss–Codazzi equationisφuv=sin⁡φ.{\displaystyle \varphi _{uv}=\sin \varphi .}Thus, any pseudospherical surface gives rise to a solution of the sine-Gordon equation, although with some caveats: if the surface is complete, it is necessarilysingulardue to theHilbert embedding theorem. In the simplest case, thepseudosphere, also known as the tractroid, corresponds to a static one-soliton, but the tractroid has a singular cusp at its equator. Conversely, one can start with a solution to the sine-Gordon equation to obtain a pseudosphere uniquely up torigid transformations. There is a theorem, sometimes called thefundamental theorem of surfaces, that if a pair of matrix-valued bilinear forms satisfy the Gauss–Codazzi equations, then they are the first and second fundamental forms of an embedded surface in 3-dimensional space. Solutions to the sine-Gordon equation can be used to construct such matrices by using the forms obtained above. The study of this equation and of the associated transformations of pseudospherical surfaces in the 19th century byBianchiandBäcklundled to the discovery ofBäcklund transformations. Another transformation of pseudospherical surfaces is theLie transformintroduced bySophus Liein 1879, which corresponds toLorentz boostsfor solutions of the sine-Gordon equation.[6] There are also some more straightforward ways to construct new solutions, which do not, however, give new surfaces. Since the sine-Gordon equation is odd, the negative of any solution is another solution. However, this does not give a new surface, as the sign change comes down to a choice of direction for the normal to the surface. New solutions can be found by translating the solution: ifφ{\displaystyle \varphi }is a solution, then so isφ+2nπ{\displaystyle \varphi +2n\pi }forn{\displaystyle n}an integer.
Consider a line of pendula, hanging on a straight line, in constant gravity. Connect the bobs of the pendula together by a string in constant tension. Let the angle of the pendulum at locationx{\displaystyle x}beφ{\displaystyle \varphi }; then, schematically, the dynamics of the line of pendula follows Newton's second law:mφtt⏟mass times acceleration=Tφxx⏟tension−mgsin⁡φ⏟gravity{\displaystyle \underbrace {m\varphi _{tt}} _{\text{mass times acceleration}}=\underbrace {T\varphi _{xx}} _{\text{tension}}-\underbrace {mg\sin \varphi } _{\text{gravity}}}and this is the sine-Gordon equation, after scaling time and distance appropriately. Note that this is not exactly correct, since the net force on a pendulum due to the tension is not preciselyTφxx{\displaystyle T\varphi _{xx}}, but more accuratelyTφxx(1+φx2)−3/2{\displaystyle T\varphi _{xx}(1+\varphi _{x}^{2})^{-3/2}}. However, this does give an intuitive picture for the sine-Gordon equation. One can produce exact mechanical realizations of the sine-Gordon equation by more complex methods.[7] The name "sine-Gordon equation" is a pun on the well-knownKlein–Gordon equationin physics.[4] The sine-Gordon equation is theEuler–Lagrange equationof the field whoseLagrangian densityis given by{\displaystyle {\mathcal {L}}_{\text{SG}}(\varphi )={\frac {1}{2}}\left(\varphi _{t}^{2}-\varphi _{x}^{2}\right)-1+\cos \varphi .}Using theTaylor seriesexpansion of thecosinein the Lagrangian, it can be rewritten as theKlein–Gordon Lagrangianplus higher-order terms. An interesting feature of the sine-Gordon equation is the existence ofsolitonand multisoliton solutions. The sine-Gordon equation has the following 1-solitonsolutions:{\displaystyle \varphi _{\text{soliton}}(x,t)=4\arctan \left(e^{m\gamma (x-vt)+\delta }\right),}where{\displaystyle \gamma ^{2}={\frac {1}{1-v^{2}}},}and the slightly more general form of the equation is assumed:{\displaystyle \varphi _{tt}-\varphi _{xx}+m^{2}\sin \varphi =0.} The 1-soliton solution for which we have chosen the positive root forγ{\displaystyle \gamma }is called akinkand represents a twist in the variableφ{\displaystyle \varphi }which takes the system from one constant solutionφ=0{\displaystyle \varphi =0}to an adjacent constant solutionφ=2π{\displaystyle \varphi =2\pi }.
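The standard kink, φ = 4 arctan(exp(γ(x − vt))) with γ = 1/√(1 − v²), can be verified numerically by plugging it into the space-time form of the equation (taking m = 1). The residual φ_tt − φ_xx + sin φ is computed with central finite differences and should vanish up to discretization error; a sketch assuming NumPy:

```python
import numpy as np

def kink(x, t, v):
    """1-soliton kink: phi = 4 arctan(exp(gamma (x - v t))), gamma = 1/sqrt(1 - v^2)."""
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)
    return 4.0 * np.arctan(np.exp(gamma * (x - v * t)))

def residual(x, t, v, h=1e-4):
    """phi_tt - phi_xx + sin(phi) via central differences; ~0 for a true solution."""
    phi_tt = (kink(x, t + h, v) - 2 * kink(x, t, v) + kink(x, t - h, v)) / h ** 2
    phi_xx = (kink(x + h, t, v) - 2 * kink(x, t, v) + kink(x - h, t, v)) / h ** 2
    return phi_tt - phi_xx + np.sin(kink(x, t, v))

r = residual(x=0.3, t=0.2, v=0.5)   # ~0 up to finite-difference error
```

Since a boosted kink solves the equation for any |v| < 1, the same check passes for other sample points and velocities.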
The statesφ≅2πn{\displaystyle \varphi \cong 2\pi n}are known as vacuum states, as they are constant solutions of zero energy. The 1-soliton solution in which we take the negative root forγ{\displaystyle \gamma }is called anantikink. The form of the 1-soliton solutions can be obtained through application of aBäcklund transformto the trivial (vacuum) solution and the integration of the resulting first-order differentials. The 1-soliton solutions can be visualized with the use of the elastic ribbon sine-Gordon model introduced by Julio Rubinstein in 1970.[8] Here we take a clockwise (left-handed) twist of the elastic ribbon to be a kink with topological chargeθK=−1{\displaystyle \theta _{\text{K}}=-1}. The alternative counterclockwise (right-handed) twist with topological chargeθAK=+1{\displaystyle \theta _{\text{AK}}=+1}will be an antikink. Multi-solitonsolutions can be obtained through continued application of theBäcklund transformto the 1-soliton solution, as prescribed by aBianchi latticerelating the transformed results.[10]The 2-soliton solutions of the sine-Gordon equation show some of the characteristic features of the solitons. The traveling sine-Gordon kinks and/or antikinks pass through each other as if perfectly permeable, and the only observed effect is aphase shift. Since the colliding solitons recover theirvelocityandshape, such an interaction is called anelastic collision. The kink-kink solution is given byφK/K(x,t)=4arctan⁡(vsinh⁡x1−v2cosh⁡vt1−v2){\displaystyle \varphi _{K/K}(x,t)=4\arctan \left({\frac {v\sinh {\frac {x}{\sqrt {1-v^{2}}}}}{\cosh {\frac {vt}{\sqrt {1-v^{2}}}}}}\right)} while the kink-antikink solution is given byφK/AK(x,t)=4arctan⁡(vcosh⁡x1−v2sinh⁡vt1−v2){\displaystyle \varphi _{K/AK}(x,t)=4\arctan \left({\frac {v\cosh {\frac {x}{\sqrt {1-v^{2}}}}}{\sinh {\frac {vt}{\sqrt {1-v^{2}}}}}}\right)} Other interesting 2-soliton solutions arise from the possibility of coupled kink-antikink behaviour known as abreather.
Three types of breathers are known: the standing breather, the traveling large-amplitude breather, and the traveling small-amplitude breather.[11] The standing breather solution is given byφ(x,t)=4arctan⁡(1−ω2cos⁡(ωt)ωcosh⁡(1−ω2x)).{\displaystyle \varphi (x,t)=4\arctan \left({\frac {{\sqrt {1-\omega ^{2}}}\;\cos(\omega t)}{\omega \;\cosh({\sqrt {1-\omega ^{2}}}\;x)}}\right).} 3-soliton collisions between a traveling kink and a standing breather or a traveling antikink and a standing breather result in a phase shift of the standing breather. In the process of collision between a moving kink and a standing breather, the shift of the breather ΔB{\displaystyle \Delta _{\text{B}}} is given by where vK{\displaystyle v_{\text{K}}} is the velocity of the kink, and ω{\displaystyle \omega } is the breather's frequency.[11] If the old position of the standing breather is x0{\displaystyle x_{0}}, after the collision the new position will be x0+ΔB{\displaystyle x_{0}+\Delta _{\text{B}}}. Suppose that φ{\displaystyle \varphi } is a solution of the sine-Gordon equation. Then the system, where a is an arbitrary parameter, is solvable for a function ψ{\displaystyle \psi } which will also satisfy the sine-Gordon equation. This is an example of an auto-Bäcklund transform, as both φ{\displaystyle \varphi } and ψ{\displaystyle \psi } are solutions to the same equation, that is, the sine-Gordon equation. By using a matrix system, it is also possible to find a linear Bäcklund transform for solutions of the sine-Gordon equation. For example, if φ{\displaystyle \varphi } is the trivial solution φ≡0{\displaystyle \varphi \equiv 0}, then ψ{\displaystyle \psi } is the one-soliton solution with a{\displaystyle a} related to the boost applied to the soliton.
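The standing breather can be verified the same way; the sketch below (plain Python, our names) checks that the formula satisfies the sine-Gordon equation for a frequency ω < 1 and that it oscillates in time with period 2π/ω.

```python
import math

def breather(x, t, w=0.6):
    """Standing breather with frequency w (0 < w < 1), from the formula above."""
    k = math.sqrt(1.0 - w * w)
    return 4.0 * math.atan(k * math.cos(w * t) / (w * math.cosh(k * x)))

def residual(x, t, h=1e-4):
    """phi_tt - phi_xx + sin(phi) via central differences; 0 for a solution."""
    phi_tt = (breather(x, t + h) - 2 * breather(x, t) + breather(x, t - h)) / h**2
    phi_xx = (breather(x + h, t) - 2 * breather(x, t) + breather(x - h, t)) / h**2
    return phi_tt - phi_xx + math.sin(breather(x, t))

# the breather solves the PDE ...
for x, t in [(-1.0, 0.3), (0.5, 2.0), (1.2, 4.1)]:
    assert abs(residual(x, t)) < 1e-4

# ... and is time-periodic with period 2*pi/w, localized in x
T = 2.0 * math.pi / 0.6
assert math.isclose(breather(0.7, 1.0), breather(0.7, 1.0 + T), abs_tol=1e-9)
assert abs(breather(30.0, 1.0)) < 1e-6
```

Unlike the kink, the breather carries no net twist: it decays to the same vacuum on both sides.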
Thetopological chargeorwinding numberof a solutionφ{\displaystyle \varphi }isN=12π∫Rdφ=12π[φ(x=∞,t)−φ(x=−∞,t)].{\displaystyle N={\frac {1}{2\pi }}\int _{\mathbb {R} }d\varphi ={\frac {1}{2\pi }}\left[\varphi (x=\infty ,t)-\varphi (x=-\infty ,t)\right].}Theenergyof a solutionφ{\displaystyle \varphi }isE=∫Rdx(12(φt2+φx2)+m2(1−cos⁡φ)){\displaystyle E=\int _{\mathbb {R} }dx\left({\frac {1}{2}}(\varphi _{t}^{2}+\varphi _{x}^{2})+m^{2}(1-\cos \varphi )\right)}where a constant energy density has been added so that the potential is non-negative. With it the first two terms in the Taylor expansion of the potential coincide with the potential of a massive scalar field, as mentioned in the naming section; the higher order terms can be thought of as interactions. The topological charge is conserved if the energy is finite. The topological charge does not determine the solution, even up to Lorentz boosts. Both the trivial solution and the soliton-antisoliton pair solution haveN=0{\displaystyle N=0}. The sine-Gordon equation is equivalent to thecurvatureof a particularsu(2){\displaystyle {\mathfrak {su}}(2)}-connectiononR2{\displaystyle \mathbb {R} ^{2}}being equal to zero.[12] Explicitly, with coordinates(u,v){\displaystyle (u,v)}onR2{\displaystyle \mathbb {R} ^{2}}, the connection componentsAμ{\displaystyle A_{\mu }}are given byAu=(iλi2φui2φu−iλ)=12φuiσ1+λiσ3,{\displaystyle A_{u}={\begin{pmatrix}i\lambda &{\frac {i}{2}}\varphi _{u}\\{\frac {i}{2}}\varphi _{u}&-i\lambda \end{pmatrix}}={\frac {1}{2}}\varphi _{u}i\sigma _{1}+\lambda i\sigma _{3},}Av=(−i4λcos⁡φ−14λsin⁡φ14λsin⁡φi4λcos⁡φ)=−14λisin⁡φσ2−14λicos⁡φσ3,{\displaystyle A_{v}={\begin{pmatrix}-{\frac {i}{4\lambda }}\cos \varphi &-{\frac {1}{4\lambda }}\sin \varphi \\{\frac {1}{4\lambda }}\sin \varphi &{\frac {i}{4\lambda }}\cos \varphi \end{pmatrix}}=-{\frac {1}{4\lambda }}i\sin \varphi \sigma _{2}-{\frac {1}{4\lambda }}i\cos \varphi \sigma _{3},}where theσi{\displaystyle \sigma _{i}}are thePauli matrices. 
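Before turning to the zero-curvature formulation, the charge and energy formulas can be checked on the static kink φ = 4 arctan(eˣ) with m = 1 (a plain-Python sketch; the expected rest energy E = 8m is a standard result for this kink):

```python
import math

def phi(x):
    """Static kink (m = 1): phi = 4*arctan(exp(x))."""
    return 4.0 * math.atan(math.exp(x))

def phi_x(x):
    """Analytic derivative: d(phi)/dx = 2*sech(x)."""
    return 2.0 / math.cosh(x)

# winding number N = [phi(+inf) - phi(-inf)] / (2*pi); the kink has N = 1
N = (phi(30.0) - phi(-30.0)) / (2.0 * math.pi)
assert abs(N - 1.0) < 1e-6

# energy E = integral of (phi_t^2 + phi_x^2)/2 + (1 - cos(phi)) dx,
# with phi_t = 0 for the static kink; Riemann sum over [-20, 20]
h, L = 0.001, 20.0
E = sum((0.5 * phi_x(-L + h * i) ** 2 + (1.0 - math.cos(phi(-L + h * i)))) * h
        for i in range(int(2 * L / h) + 1))
assert abs(E - 8.0) < 1e-2   # the kink's rest energy is 8m (here m = 1)
```

For this kink both contributions to the energy density equal 2 sech²x, so the gradient and potential terms each contribute 4 to the total.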
Then the zero-curvature equation∂vAu−∂uAv+[Au,Av]=0{\displaystyle \partial _{v}A_{u}-\partial _{u}A_{v}+[A_{u},A_{v}]=0} is equivalent to the sine-Gordon equation φuv=sin⁡φ{\displaystyle \varphi _{uv}=\sin \varphi }. The zero-curvature equation is so named because it states that the curvature, defined as Fμν=[∂μ−Aμ,∂ν−Aν]{\displaystyle F_{\mu \nu }=[\partial _{\mu }-A_{\mu },\partial _{\nu }-A_{\nu }]}, is equal to zero. The pair of matrices Au{\displaystyle A_{u}} and Av{\displaystyle A_{v}} are also known as a Lax pair for the sine-Gordon equation, in the sense that the zero-curvature equation recovers the PDE, rather than their satisfying Lax's equation. The sinh-Gordon equation is given by[13] This is the Euler–Lagrange equation of the Lagrangian Another closely related equation is the elliptic sine-Gordon equation or Euclidean sine-Gordon equation, given by where φ{\displaystyle \varphi } is now a function of the variables x and y. This is no longer a soliton equation, but it has many similar properties, as it is related to the sine-Gordon equation by the analytic continuation (or Wick rotation) y = it. The elliptic sinh-Gordon equation may be defined in a similar way. Another similar equation comes from the Euler–Lagrange equation for Liouville field theory φxx−φtt=2e2φ.{\displaystyle \varphi _{xx}-\varphi _{tt}=2e^{2\varphi }.} A generalization is given by Toda field theory.[14] More precisely, Liouville field theory is the Toda field theory for the finite Kac–Moody algebra sl2{\displaystyle {\mathfrak {sl}}_{2}}, while sin(h)-Gordon is the Toda field theory for the affine Kac–Moody algebra sl^2{\displaystyle {\hat {\mathfrak {sl}}}_{2}}.
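The zero-curvature condition can be tested numerically. The sketch below (plain Python; the 2×2 matrix helpers and the choice of spectral parameter are ours) builds Au and Av from the light-cone kink φ(u, v) = 4 arctan(e^{u+v}), which solves φuv = sin φ, and checks that ∂vAu − ∂uAv + [Au, Av] vanishes entrywise:

```python
import math

I = 1j
lam = 0.7   # spectral parameter: arbitrary nonzero value

def phi(u, v):
    """Light-cone kink, a solution of phi_uv = sin(phi)."""
    return 4.0 * math.atan(math.exp(u + v))

def phi_u(u, v):
    """Analytic derivative: phi_u = 2*sech(u + v)."""
    return 2.0 / math.cosh(u + v)

def A_u(u, v):
    p = phi_u(u, v)
    return [[I * lam, I * p / 2], [I * p / 2, -I * lam]]

def A_v(u, v):
    c, s = math.cos(phi(u, v)), math.sin(phi(u, v))
    return [[-I * c / (4 * lam), -s / (4 * lam)],
            [s / (4 * lam), I * c / (4 * lam)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def curvature(u, v, h=1e-5):
    """d_v A_u - d_u A_v + [A_u, A_v], derivatives by central differences."""
    dAu = [[(A_u(u, v + h)[i][j] - A_u(u, v - h)[i][j]) / (2 * h)
            for j in range(2)] for i in range(2)]
    dAv = [[(A_v(u + h, v)[i][j] - A_v(u - h, v)[i][j]) / (2 * h)
            for j in range(2)] for i in range(2)]
    AuAv, AvAu = mul(A_u(u, v), A_v(u, v)), mul(A_v(u, v), A_u(u, v))
    return [[dAu[i][j] - dAv[i][j] + AuAv[i][j] - AvAu[i][j]
             for j in range(2)] for i in range(2)]

F = curvature(0.3, -0.2)
assert max(abs(F[i][j]) for i in range(2) for j in range(2)) < 1e-6
```

Working the commutator out by hand shows why: the combination equals (i/2)(φuv − sin φ)σ₁, so the curvature vanishes exactly when φ solves the equation, for every value of the spectral parameter.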
One can also consider the sine-Gordon model on a circle,[15] on a line segment, or on a half line.[16] It is possible to find boundary conditions which preserve the integrability of the model.[16] On a half line the spectrum contains boundary bound states in addition to the solitons and breathers.[16] In quantum field theory the sine-Gordon model contains a parameter that can be identified with the Planck constant. The particle spectrum consists of a soliton, an anti-soliton and a finite (possibly zero) number of breathers.[17][18][19] The number of breathers depends on the value of the parameter. Multiparticle production cancels on mass shell. Semi-classical quantization of the sine-Gordon model was done by Ludwig Faddeev and Vladimir Korepin.[20] The exact quantum scattering matrix was discovered by Alexander Zamolodchikov.[21] This model is S-dual to the Thirring model, as discovered by Coleman.[22] This is sometimes known as the Coleman correspondence and serves as an example of boson-fermion correspondence in the interacting case. The same paper also showed that the constants appearing in the model behave nicely under renormalization: there are three parameters α0,β{\displaystyle \alpha _{0},\beta } and γ0{\displaystyle \gamma _{0}}. Coleman showed that α0{\displaystyle \alpha _{0}} receives only a multiplicative correction, γ0{\displaystyle \gamma _{0}} receives only an additive correction, and β{\displaystyle \beta } is not renormalized. Further, for a critical, non-zero value β=4π{\displaystyle \beta ={\sqrt {4\pi }}}, the theory is in fact dual to a free massive Dirac field theory. The quantum sine-Gordon equation should be modified so that the exponentials become vertex operators, with Vβ=:eiβφ:{\displaystyle V_{\beta }=:e^{i\beta \varphi }:}, where the semi-colons denote normal ordering. A possible mass term is included.
For different values of the parameterβ2{\displaystyle \beta ^{2}}, therenormalizabilityproperties of the sine-Gordon theory change.[23]The identification of these regimes is attributed toJürg Fröhlich. Thefinite regimeisβ2<4π{\displaystyle \beta ^{2}<4\pi }, where nocountertermsare needed to render the theory well-posed. Thesuper-renormalizable regimeis4π<β2<8π{\displaystyle 4\pi <\beta ^{2}<8\pi }, where a finite number of counterterms are needed to render the theory well-posed. More counterterms are needed for each thresholdnn+18π{\displaystyle {\frac {n}{n+1}}8\pi }passed.[24]Forβ2>8π{\displaystyle \beta ^{2}>8\pi }, the theory becomes ill-defined (Coleman1975). The boundary values areβ2=4π{\displaystyle \beta ^{2}=4\pi }andβ2=8π{\displaystyle \beta ^{2}=8\pi }, which are respectively the free fermion point, as the theory is dual to a free fermion via the Coleman correspondence, and the self-dual point, where the vertex operators form anaffine sl2subalgebra, and the theory becomes strictly renormalizable (renormalizable, but not super-renormalizable). Thestochasticordynamical sine-Gordon modelhas been studied byMartin Hairerand Hao Shen[25]allowing heuristic results from the quantum sine-Gordon theory to be proven in a statistical setting. The equation is∂tu=12Δu+csin⁡(βu+θ)+ξ,{\displaystyle \partial _{t}u={\frac {1}{2}}\Delta u+c\sin(\beta u+\theta )+\xi ,}wherec,β,θ{\displaystyle c,\beta ,\theta }are real-valued constants, andξ{\displaystyle \xi }is space-timewhite noise. The space dimension is fixed to 2. In the proof of existence of solutions, the thresholdsβ2=nn+18π{\displaystyle \beta ^{2}={\frac {n}{n+1}}8\pi }again play a role in determining convergence of certain terms. A supersymmetric extension of the sine-Gordon model also exists.[26]Integrability preserving boundary conditions for this extension can be found as well.[26] The sine-Gordon model arises as the continuum limit of theFrenkel–Kontorova modelwhich models crystal dislocations. 
Dynamics inlong Josephson junctionsare well-described by the sine-Gordon equations, and conversely provide a useful experimental system for studying the sine-Gordon model.[27] The sine-Gordon model is in the sameuniversality classas theeffective actionfor aCoulomb gasofvorticesand anti-vortices in the continuousclassical XY model, which is a model of magnetism.[28][29]TheKosterlitz–Thouless transitionfor vortices can therefore be derived from arenormalization groupanalysis of the sine-Gordon field theory.[30][31] The sine-Gordon equation also arises as the formal continuum limit of a different model of magnetism, thequantum Heisenberg model, in particular the XXZ model.[32]
https://en.wikipedia.org/wiki/Sine%E2%80%93Gordon_equation
In statistics, signal processing, and time series analysis, a sinusoidal model is used to approximate a sequence Yi with a sine function: where C is a constant defining a mean level, α is an amplitude for the sine, ω is the angular frequency, Ti is a time variable, φ is the phase-shift, and Ei is the error sequence. This sinusoidal model can be fit using nonlinear least squares; to obtain a good fit, routines may require good starting values for the unknown parameters. Fitting a model with a single sinusoid is a special case of spectral density estimation and least-squares spectral analysis. A good starting value for C can be obtained by calculating the mean of the data. If the data show a trend, i.e., the assumption of constant location is violated, one can replace C with a linear or quadratic least squares fit. That is, the model becomes or The starting value for the frequency can be obtained from the dominant frequency in a periodogram. A complex demodulation phase plot can be used to refine this initial estimate for the frequency.[citation needed] The root mean square of the detrended data can be scaled by the square root of two to obtain an estimate of the sinusoid amplitude. A complex demodulation amplitude plot can be used to find a good starting value for the amplitude. In addition, this plot can indicate whether the amplitude is constant over the entire range of the data or whether it varies. If the plot is essentially flat, i.e., zero slope, then it is reasonable to assume a constant amplitude in the non-linear model. However, if the slope varies over the range of the plot, one may need to adjust the model to be: That is, one may replace α with a function of time. A linear fit is specified in the model above, but this can be replaced with a more elaborate function if needed. As with any statistical model, the fit should be subjected to graphical and quantitative techniques of model validation.
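The starting-value recipe (mean → C, √2 × RMS of the detrended data → amplitude, periodogram peak → frequency) can be sketched in plain Python. This is an illustration on noiseless synthetic data, not a production fitting routine; the crude grid-scan periodogram stands in for a proper FFT-based one:

```python
import math

# synthetic data from the model Y_i = C + alpha*sin(omega*T_i + phi)
C, alpha, omega, phi = 2.0, 1.5, 0.3, 0.5
t = list(range(1000))
y = [C + alpha * math.sin(omega * ti + phi) for ti in t]

# starting value for C: the mean of the data
C0 = sum(y) / len(y)

# amplitude: sqrt(2) times the RMS of the detrended data
detr = [yi - C0 for yi in y]
amp0 = math.sqrt(2.0) * math.sqrt(sum(d * d for d in detr) / len(detr))

# frequency: dominant peak of a crude periodogram (grid scan)
def power(w):
    re = sum(d * math.cos(w * ti) for d, ti in zip(detr, t))
    im = sum(d * math.sin(w * ti) for d, ti in zip(detr, t))
    return re * re + im * im

grid = [0.01 + 0.002 * k for k in range(495)]
w0 = max(grid, key=power)

assert abs(C0 - 2.0) < 0.05      # mean level recovered
assert abs(amp0 - 1.5) < 0.05    # amplitude recovered
assert abs(w0 - 0.3) < 0.01      # frequency recovered
```

These three values are exactly the kind of starting point a nonlinear least squares routine needs; the routine then refines C, α, ω, and φ jointly.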
For example, a run sequence plot can be used to check for significant shifts in location, scale, start-up effects, and outliers. A lag plot can be used to verify that the residuals are independent; outliers also appear in the lag plot. A histogram and a normal probability plot can be used to check for skewness or other non-normality in the residuals. A different method consists in transforming the non-linear regression into a linear regression by means of a convenient integral equation. Then there is no need for an initial guess and no need for an iterative process: the fit is obtained directly.[1] This article incorporates public domain material from the National Institute of Standards and Technology.
https://en.wikipedia.org/wiki/Sinusoidal_model
In trigonometry, it is common to use mnemonics to help remember trigonometric identities and the relationships between the various trigonometric functions. The sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, for instance SOH-CAH-TOA in English: One way to remember the letters is to sound them out phonetically (i.e./ˌsoʊkəˈtoʊə/SOH-kə-TOH-ə, similar to Krakatoa).[1] Another method is to expand the letters into a sentence, such as "Some Old Horses Chew Apples Happily Throughout Old Age", "Some Old Hippy Caught Another Hippy Tripping On Acid", or "Studying Our Homework Can Always Help To Obtain Achievement". The order may be switched, as in "Tommy On A Ship Of His Caught A Herring" (tangent, sine, cosine) or "The Old Army Colonel And His Son Often Hiccup" (tangent, cosine, sine) or "Come And Have Some Oranges Help To Overcome Amnesia" (cosine, sine, tangent).[2][3] Chinese-speaking communities may choose to remember it as TOA-CAH-SOH, which also means 'big-footed woman' (Chinese:大腳嫂;Pe̍h-ōe-jī:tōa-kha-só) in Hokkien.[citation needed] An alternate way to remember the letters for Sin, Cos, and Tan is to memorize the syllables Oh, Ah, Oh-Ah (i.e./oʊəˈoʊ.ə/) for O/H, A/H, O/A.[4] Longer mnemonics for these letters include "Oscar Has A Hold On Angie" and "Oscar Had A Heap of Apples."[2] All Students Take Calculus is a mnemonic for the sign of each trigonometric function in each quadrant of the plane. The letters ASTC signify which of the trigonometric functions are positive, starting in the top right 1st quadrant and moving counterclockwise through quadrants 2 to 4.[5] Other easy-to-remember mnemonics are the ACTS and CAST laws. These have the disadvantage of not going sequentially from quadrants 1 to 4 and of not reinforcing the numbering convention of the quadrants.
Sines and cosines of common angles 0°, 30°, 45°, 60° and 90° follow the patternn2{\displaystyle {\frac {\sqrt {n}}{2}}}withn = 0, 1, ..., 4for sine andn = 4, 3, ..., 0for cosine, respectively:[9] Another mnemonic permits all of the basic identities to be read off quickly. The hexagonal chart can be constructed with a little thought:[10] Starting at any vertex of the resulting hexagon: Aside from the last bullet, the specific values for each identity are summarized in this table:
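The √n/2 pattern can be checked directly (a small plain-Python sketch):

```python
import math

angles = [0, 30, 45, 60, 90]

# sine: sqrt(n)/2 with n = 0, 1, 2, 3, 4
for n, deg in enumerate(angles):
    assert math.isclose(math.sin(math.radians(deg)), math.sqrt(n) / 2,
                        abs_tol=1e-12)

# cosine: the same pattern run backwards, n = 4, 3, 2, 1, 0
for n, deg in zip(range(4, -1, -1), angles):
    assert math.isclose(math.cos(math.radians(deg)), math.sqrt(n) / 2,
                        abs_tol=1e-12)
```

So, for example, sin 45° = √2/2 and cos 30° = √3/2, with sin 0° = √0/2 = 0 and sin 90° = √4/2 = 1 as the endpoints.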
https://en.wikipedia.org/wiki/Mnemonics_in_trigonometry#SOH-CAH-TOA
Inmathematics, thetrigonometric functions(also calledcircular functions,angle functionsorgoniometric functions)[1]arereal functionswhich relate an angle of aright-angled triangleto ratios of two side lengths. They are widely used in all sciences that are related togeometry, such asnavigation,solid mechanics,celestial mechanics,geodesy, and many others. They are among the simplestperiodic functions, and as such are also widely used for studying periodic phenomena throughFourier analysis. The trigonometric functions most widely used in modern mathematics are thesine, thecosine, and thetangentfunctions. Theirreciprocalsare respectively thecosecant, thesecant, and thecotangentfunctions, which are less used. Each of these six trigonometric functions has a correspondinginverse function, and an analog among thehyperbolic functions. The oldest definitions of trigonometric functions, related to right-angle triangles, define them only foracute angles. To extend the sine and cosine functions to functions whosedomainis the wholereal line, geometrical definitions using the standardunit circle(i.e., a circle withradius1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions asinfinite seriesor as solutions ofdifferential equations. This allows extending the domain of sine and cosine functions to the wholecomplex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particularline segmentsor their lengths related to anarcof an arbitrary circle, and later to indicate ratios of lengths, but as thefunction concept developedin the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written withfunctional notation, for examplesin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expressionsin⁡x+y{\displaystyle \sin x+y}would typically be interpreted to mean(sin⁡x)+y,{\displaystyle (\sin x)+y,}so parentheses are required to expresssin⁡(x+y).{\displaystyle \sin(x+y).} Apositive integerappearing as a superscript after the symbol of the function denotesexponentiation, notfunction composition. For examplesin2⁡x{\displaystyle \sin ^{2}x}andsin2⁡(x){\displaystyle \sin ^{2}(x)}denote(sin⁡x)2,{\displaystyle (\sin x)^{2},}notsin⁡(sin⁡x).{\displaystyle \sin(\sin x).}This differs from the (historically later) general functional notation in whichf2(x)=(f∘f)(x)=f(f(x)).{\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript−1{\displaystyle -1}is commonly used to denote theinverse function, not thereciprocal. For examplesin−1⁡x{\displaystyle \sin ^{-1}x}andsin−1⁡(x){\displaystyle \sin ^{-1}(x)}denote theinverse trigonometric functionalternatively writtenarcsin⁡x.{\displaystyle \arcsin x\,.}The equationθ=sin−1⁡x{\displaystyle \theta =\sin ^{-1}x}impliessin⁡θ=x,{\displaystyle \sin \theta =x,}notθ⋅sin⁡x=1.{\displaystyle \theta \cdot \sin x=1.}In this case, the superscriptcouldbe considered as denoting a composed oriterated function, but negative superscripts other than−1{\displaystyle {-1}}are not in common use. If the acute angleθis given, then any right triangles that have an angle ofθaresimilarto each other. This means that the ratio of any two side lengths depends only onθ. 
Thus these six ratios define six functions ofθ, which are the trigonometric functions. In the following definitions, thehypotenuseis the length of the side opposite the right angle,oppositerepresents the side opposite the given angleθ, andadjacentrepresents the side between the angleθand the right angle.[2][3] Various mnemonicscan be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is,90°or⁠π/2⁠radians. Thereforesin⁡(θ){\displaystyle \sin(\theta )}andcos⁡(90∘−θ){\displaystyle \cos(90^{\circ }-\theta )}represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of anangle. For this purpose, anyangular unitis convenient. One common unit isdegrees, in which a right angle is 90° and a complete turn is 360° (particularly inelementary mathematics). However, incalculusandmathematical analysis, the trigonometric functions are generally regarded more abstractly as functions ofrealorcomplex numbers, rather than angles. In fact, the functionssinandcoscan be defined for all complex numbers in terms of theexponential function, via power series,[5]or as solutions todifferential equationsgiven particular initial values[6](see below), without reference to any geometric notions. The other four trigonometric functions (tan,cot,sec,csc) can be defined as quotients and reciprocals ofsinandcos, except where zero occurs in the denominator. 
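The ratio definitions and the cofunction identity can be illustrated on a concrete right triangle (a plain-Python sketch using a 3-4-5 triangle; the variable names are ours):

```python
import math

# a 3-4-5 right triangle, with theta the angle opposite the side of length 3
opp, adj, hyp = 3.0, 4.0, 5.0
theta = math.atan2(opp, adj)

assert math.isclose(math.sin(theta), opp / hyp)   # SOH: sine = opposite/hypotenuse
assert math.isclose(math.cos(theta), adj / hyp)   # CAH: cosine = adjacent/hypotenuse
assert math.isclose(math.tan(theta), opp / adj)   # TOA: tangent = opposite/adjacent

# the two acute angles sum to a right angle, so sin(theta) = cos(pi/2 - theta)
assert math.isclose(math.sin(theta), math.cos(math.pi / 2 - theta))
```

Swapping the roles of the two legs gives the complementary angle, which is exactly why sine and cosine trade places under θ ↦ 90° − θ.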
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians.[5]Moreover, these definitions result in simple expressions for thederivativesandindefinite integralsfor the trigonometric functions.[7]Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. Whenradians(rad) are employed, the angle is given as the length of thearcof theunit circlesubtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°),[8]and a completeturn(360°) is an angle of 2π(≈ 6.28) rad.[9]For real numberx, the notationsinx,cosx, etc. refers to the value of the trigonometric functions evaluated at an angle ofxrad. If units of degrees are intended, the degree sign must be explicitly shown (sinx°,cosx°, etc.). Using this standard notation, the argumentxfor the trigonometric functions satisfies the relationshipx= (180x/π)°, so that, for example,sinπ= sin 180°when we takex=π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° =π/180 ≈ 0.0175.[10] The six trigonometric functions can be defined ascoordinate valuesof points on theEuclidean planethat are related to theunit circle, which is thecircleof radius one centered at the originOof this coordinate system. Whileright-angled triangle definitionsallow for the definition of the trigonometric functions for angles between0andπ2{\textstyle {\frac {\pi }{2}}}radians(90°),the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. LetL{\displaystyle {\mathcal {L}}}be therayobtained by rotating by an angleθthe positive half of thex-axis (counterclockwiserotation forθ>0,{\displaystyle \theta >0,}and clockwise rotation forθ<0{\displaystyle \theta <0}). 
This ray intersects the unit circle at the pointA=(xA,yA).{\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).}The rayL,{\displaystyle {\mathcal {L}},}extended to alineif necessary, intersects the line of equationx=1{\displaystyle x=1}at pointB=(1,yB),{\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),}and the line of equationy=1{\displaystyle y=1}at pointC=(xC,1).{\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).}Thetangent lineto the unit circle at the pointA, isperpendiculartoL,{\displaystyle {\mathcal {L}},}and intersects they- andx-axes at pointsD=(0,yD){\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })}andE=(xE,0).{\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).}Thecoordinatesof these points give the values of all trigonometric functions for any arbitrary real value ofθin the following manner. The trigonometric functionscosandsinare defined, respectively, as thex- andy-coordinate values of pointA. That is, In the range0≤θ≤π/2{\displaystyle 0\leq \theta \leq \pi /2}, this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radiusOAashypotenuse. And since the equationx2+y2=1{\displaystyle x^{2}+y^{2}=1}holds for all pointsP=(x,y){\displaystyle \mathrm {P} =(x,y)}on the unit circle, this definition of cosine and sine also satisfies thePythagorean identity. The other trigonometric functions can be found along the unit circle as By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is Since a rotation of an angle of±2π{\displaystyle \pm 2\pi }does not change the position or size of a shape, the pointsA,B,C,D, andEare the same for two angles whose difference is an integer multiple of2π{\displaystyle 2\pi }. Thus trigonometric functions areperiodic functionswith period2π{\displaystyle 2\pi }. 
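These unit-circle constructions can be reproduced numerically (a plain-Python sketch for one first-quadrant angle; the point names follow the text, the code is ours):

```python
import math

th = 0.6                               # a sample angle in the first quadrant
A = (math.cos(th), math.sin(th))
assert math.isclose(A[0] ** 2 + A[1] ** 2, 1.0)   # A lies on the unit circle

# scale the ray through A to hit the line x = 1 (point B) and y = 1 (point C)
y_B = A[1] / A[0]
x_C = A[0] / A[1]
assert math.isclose(y_B, math.tan(th))            # B = (1, tan th)
assert math.isclose(x_C, 1.0 / math.tan(th))      # C = (cot th, 1)

# tangent line at A: x*cos(th) + y*sin(th) = 1; its axis intercepts
y_D = 1.0 / A[1]                                  # x = 0 gives y_D = csc th
x_E = 1.0 / A[0]                                  # y = 0 gives x_E = sec th

# the Pythagorean identity propagates to the other four functions
assert math.isclose(x_E ** 2, 1.0 + y_B ** 2)     # sec^2 = 1 + tan^2
assert math.isclose(y_D ** 2, 1.0 + x_C ** 2)     # csc^2 = 1 + cot^2
```

The same picture, with signs tracked per quadrant, extends these definitions to all real angles.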
That is, the equalities hold for any angleθand anyintegerk. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that2π{\displaystyle 2\pi }is the smallest value for which they are periodic (i.e.,2π{\displaystyle 2\pi }is thefundamental periodof these functions). However, after a rotation by an angleπ{\displaystyle \pi }, the pointsBandCalready return to their original position, so that the tangent function and the cotangent function have a fundamental period ofπ{\displaystyle \pi }. That is, the equalities hold for any angleθand any integerk. Thealgebraic expressionsfor the most important angles are as follows: Writing the numerators assquare rootsof consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values.[13] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H. Hardynoted in his 1908 workA Course of Pure Mathematicsthat the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number.[14]Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. 
Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Sine and cosine can be defined as the unique solution to the initial value problem:[17] Differentiating again,d2dx2sin⁡x=ddxcos⁡x=−sin⁡x{\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d2dx2cos⁡x=−ddxsin⁡x=−cos⁡x{\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x}, so both sine and cosine are solutions of the same ordinary differential equation Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0. One can then prove, as a theorem, that the solutions cos,sin{\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2π{\displaystyle 2\pi } is then a definition of the real number π{\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan⁡x=sin⁡x/cos⁡x{\displaystyle \tan x=\sin x/\cos x} shows that the tangent function satisfies the ordinary differential equation It is the unique solution with y(0) = 0. The basic trigonometric functions can be defined by the following power series expansions.[18] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane. Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation.
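The power-series definitions are easy to check numerically: the standard Maclaurin series sin x = Σ (−1)ᵏ x^(2k+1)/(2k+1)! and cos x = Σ (−1)ᵏ x^(2k)/(2k)! converge rapidly for moderate x (a plain-Python sketch with a fixed truncation):

```python
import math

def sin_series(x, terms=20):
    """Maclaurin series: sum of (-1)^k * x^(2k+1) / (2k+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=20):
    """Maclaurin series: sum of (-1)^k * x^(2k) / (2k)!"""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

# 20 terms already match the library functions to near machine precision
for x in (-2.0, 0.3, 1.0, 3.1):
    assert math.isclose(sin_series(x), math.sin(x), abs_tol=1e-12)
    assert math.isclose(cos_series(x), math.cos(x), abs_tol=1e-12)
```

Because the radius of convergence is infinite, the same series (with complex x) serve as the definition of sine and cosine on the whole complex plane.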
Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is, functions that are holomorphic in the whole complex plane except at some isolated points called poles. Here, the poles are the numbers of the form (2k+1)π2{\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or kπ{\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets.[19] More precisely, defining one has the following series expansions:[20] The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational.[21] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match:[22] This identity can be proved with the Herglotz trick.[23] Combining the (–n)th with the nth term leads to an absolutely convergent series: Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis:[24] This may be obtained from the partial fraction decomposition of cot⁡z{\displaystyle \cot z} given above, which is the logarithmic derivative of sin⁡z{\displaystyle \sin z}.[25] From this, it can be deduced also that Euler's formula relates sine and cosine to the exponential function: This formula is commonly considered for real values of x, but it remains true for all complex values.
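The absolutely convergent form of the cotangent expansion, π cot(πz) = 1/z + Σₙ₌₁ 2z/(z² − n²), can be checked numerically (a plain-Python sketch with a simple truncation; the tail after N terms is of order 2z/N):

```python
import math

def cot_series(z, N=100_000):
    """Truncated partial fraction expansion of pi*cot(pi*z)."""
    return 1.0 / z + sum(2.0 * z / (z * z - n * n) for n in range(1, N + 1))

for z in (0.37, 1.6, -0.25):                  # any non-integer z
    lhs = math.pi * math.cos(math.pi * z) / math.sin(math.pi * z)
    assert abs(lhs - cot_series(z)) < 1e-4
```

The pole structure is visible in the formula itself: each term 2z/(z² − n²) blows up exactly at z = ±n, matching the poles of the cotangent at the integers.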
Proof: Letf1(x)=cos⁡x+isin⁡x,{\displaystyle f_{1}(x)=\cos x+i\sin x,}andf2(x)=eix.{\displaystyle f_{2}(x)=e^{ix}.}One hasdfj(x)/dx=ifj(x){\displaystyle df_{j}(x)/dx=if_{j}(x)}forj= 1, 2. Thequotient ruleimplies thus thatd/dx(f1(x)/f2(x))=0{\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0}. Therefore,f1(x)/f2(x){\displaystyle f_{1}(x)/f_{2}(x)}is a constant function, which equals1, asf1(0)=f2(0)=1.{\displaystyle f_{1}(0)=f_{2}(0)=1.}This proves the formula. One has Solving thislinear systemin sine and cosine, one can express them in terms of the exponential function: Whenxis real, this may be rewritten as Mosttrigonometric identitiescan be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identityea+b=eaeb{\displaystyle e^{a+b}=e^{a}e^{b}}for simplifying the result. Euler's formula can also be used to define the basic trigonometric function directly, as follows, using the language oftopological groups.[26]The setU{\displaystyle U}of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus groupR/Z{\displaystyle \mathbb {R} /\mathbb {Z} }, via an isomorphisme:R/Z→U.{\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.}In pedestrian termse(t)=exp⁡(2πit){\displaystyle e(t)=\exp(2\pi it)}, and this isomorphism is unique up to taking complex conjugates. For a nonzero real numbera{\displaystyle a}(thebase), the functiont↦e(t/a){\displaystyle t\mapsto e(t/a)}defines an isomorphism of the groupR/aZ→U{\displaystyle \mathbb {R} /a\mathbb {Z} \to U}. The real and imaginary parts ofe(t/a){\displaystyle e(t/a)}are the cosine and sine, wherea{\displaystyle a}is used as the base for measuring angles. For example, whena=2π{\displaystyle a=2\pi }, we get the measure in radians, and the usual trigonometric functions. 
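Both Euler's formula and the resulting exponential expressions for sine and cosine hold for arbitrary complex arguments, which can be verified directly (a plain-Python sketch using the `cmath` module; the sample values are arbitrary):

```python
import cmath
import math

z = 0.7 - 0.3j

# Euler's formula: e^{iz} = cos z + i sin z, for complex z
assert cmath.isclose(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))

# solving the linear system gives sine and cosine in terms of exponentials
assert cmath.isclose(cmath.sin(z), (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j)
assert cmath.isclose(cmath.cos(z), (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2)

# e^{a+b} = e^a e^b then yields trigonometric identities, e.g. the angle sum
a, b = 0.5, 1.1
assert math.isclose(math.sin(a + b),
                    math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))
```

Expanding e^{i(a+b)} = e^{ia} e^{ib} and comparing real and imaginary parts is exactly the mechanical route to the angle-sum identities mentioned in the text.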
Whena=360{\displaystyle a=360}, we get the sine and cosine of angles measured in degrees. Note thata=2π{\displaystyle a=2\pi }is the unique value at which the derivativeddte(t/a){\displaystyle {\frac {d}{dt}}e(t/a)}becomes aunit vectorwith positive imaginary part att=0{\displaystyle t=0}. This fact can, in turn, be used to define the constant2π{\displaystyle 2\pi }. Another way to define the trigonometric functions in analysis is using integration.[14][27]For a real numbert{\displaystyle t}, putθ(t)=∫0tdτ1+τ2=arctan⁡t{\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t}where this defines this inverse tangent function. Also,π{\displaystyle \pi }is defined by12π=∫0∞dτ1+τ2{\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}}a definition that goes back toKarl Weierstrass.[28] On the interval−π/2<θ<π/2{\displaystyle -\pi /2<\theta <\pi /2}, the trigonometric functions are defined by inverting the relationθ=arctan⁡t{\displaystyle \theta =\arctan t}. Thus we define the trigonometric functions bytan⁡θ=t,cos⁡θ=(1+t2)−1/2,sin⁡θ=t(1+t2)−1/2{\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}}where the point(t,θ){\displaystyle (t,\theta )}is on the graph ofθ=arctan⁡t{\displaystyle \theta =\arctan t}and the positive square root is taken. This defines the trigonometric functions on(−π/2,π/2){\displaystyle (-\pi /2,\pi /2)}. The definition can be extended to all real numbers by first observing that, asθ→π/2{\displaystyle \theta \to \pi /2},t→∞{\displaystyle t\to \infty }, and socos⁡θ=(1+t2)−1/2→0{\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0}andsin⁡θ=t(1+t2)−1/2→1{\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1}. Thuscos⁡θ{\displaystyle \cos \theta }andsin⁡θ{\displaystyle \sin \theta }are extended continuously so thatcos⁡(π/2)=0,sin⁡(π/2)=1{\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1}. 
Now the conditionscos⁡(θ+π)=−cos⁡(θ){\displaystyle \cos(\theta +\pi )=-\cos(\theta )}andsin⁡(θ+π)=−sin⁡(θ){\displaystyle \sin(\theta +\pi )=-\sin(\theta )}define the sine and cosine as periodic functions with period2π{\displaystyle 2\pi }, for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First,arctan⁡s+arctan⁡t=arctan⁡s+t1−st{\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}}holds, providedarctan⁡s+arctan⁡t∈(−π/2,π/2){\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)}, sincearctan⁡s+arctan⁡t=∫−stdτ1+τ2=∫0s+t1−stdτ1+τ2{\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}}after the substitutionτ→s+τ1−sτ{\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}}. In particular, the limiting case ass→∞{\displaystyle s\to \infty }givesarctan⁡t+π2=arctan⁡(−1/t),t∈(−∞,0).{\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).}Thus we havesin⁡(θ+π2)=−1t1+(−1/t)2=−11+t2=−cos⁡(θ){\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )}andcos⁡(θ+π2)=11+(−1/t)2=t1+t2=sin⁡(θ).{\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).}So the sine and cosine functions are related by translation over a quarter periodπ/2{\displaystyle \pi /2}. One can also define the trigonometric functions using variousfunctional equations. 
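The arctangent addition formula above, together with its validity condition, can be spot-checked numerically (an illustrative sketch):

```python
import math

def check_arctan_addition(s, t):
    # arctan(s) + arctan(t) = arctan((s + t) / (1 - s t)),
    # valid when the left-hand side lies in (-pi/2, pi/2)
    lhs = math.atan(s) + math.atan(t)
    assert abs(lhs) < math.pi / 2
    rhs = math.atan((s + t) / (1 - s * t))
    assert math.isclose(lhs, rhs)

check_arctan_addition(0.2, 0.3)
check_arctan_addition(-0.5, 0.25)
```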
For example,[29]the sine and the cosine form the unique pair ofcontinuous functionsthat satisfy the difference formula and the added condition The sine and cosine of acomplex numberz=x+iy{\displaystyle z=x+iy}can be expressed in terms of real sines, cosines, andhyperbolic functionsas follows: By taking advantage ofdomain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part ofz{\displaystyle z}becomes larger (since the color white represents infinity), and the fact that the functions contain simplezeros or polesis apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding Hyperbolic functions highlights the relationships between the two. sin⁡z{\displaystyle \sin z\,} cos⁡z{\displaystyle \cos z\,} tan⁡z{\displaystyle \tan z\,} cot⁡z{\displaystyle \cot z\,} sec⁡z{\displaystyle \sec z\,} csc⁡z{\displaystyle \csc z\,} The sine and cosine functions areperiodic, with period2π{\displaystyle 2\pi }, which is the smallest positive period:sin⁡(z+2π)=sin⁡(z),cos⁡(z+2π)=cos⁡(z).{\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).}Consequently, the cosecant and secant also have2π{\displaystyle 2\pi }as their period. The functions sine and cosine also have semiperiodsπ{\displaystyle \pi }, andsin⁡(z+π)=−sin⁡(z),cos⁡(z+π)=−cos⁡(z){\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)}and consequentlytan⁡(z+π)=tan⁡(z),cot⁡(z+π)=cot⁡(z).{\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).}Also,sin⁡(x+π/2)=cos⁡(x),cos⁡(x+π/2)=−sin⁡(x){\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)}(seeComplementary angles). The functionsin⁡(z){\displaystyle \sin(z)}has a unique zero (atz=0{\displaystyle z=0}) in the strip−π<ℜ(z)<π{\displaystyle -\pi <\Re (z)<\pi }. 
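The hyperbolic-function expressions for the sine and cosine of a complex number, and the periods noted above, can be verified with Python's cmath (an illustrative check):

```python
import cmath
import math

z = complex(0.7, 1.3)
x, y = z.real, z.imag

# sin(x + iy) = sin(x) cosh(y) + i cos(x) sinh(y)
assert cmath.isclose(cmath.sin(z),
                     complex(math.sin(x) * math.cosh(y),
                             math.cos(x) * math.sinh(y)))

# cos(x + iy) = cos(x) cosh(y) - i sin(x) sinh(y)
assert cmath.isclose(cmath.cos(z),
                     complex(math.cos(x) * math.cosh(y),
                             -math.sin(x) * math.sinh(y)))

# Period 2*pi and semiperiod pi
assert cmath.isclose(cmath.sin(z + 2 * math.pi), cmath.sin(z))
assert cmath.isclose(cmath.sin(z + math.pi), -cmath.sin(z))
assert cmath.isclose(cmath.cos(z + math.pi), -cmath.cos(z))
```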
The function cos⁡(z){\displaystyle \cos(z)} has the pair of zeros z=±π/2{\displaystyle z=\pm \pi /2} in the same strip. Because of the periodicity, the zeros of sine are πZ={…,−2π,−π,0,π,2π,…}⊂C.{\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π2+πZ={…,−3π2,−π2,π2,3π2,…}⊂C.{\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ±1{\displaystyle \pm 1} at each of the zeros. The tangent function tan⁡(z)=sin⁡(z)/cos⁡(z){\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z=0{\displaystyle z=0} and vertical asymptotes at z=±π/2{\displaystyle z=\pm \pi /2}, where it has a simple pole of residue −1{\displaystyle -1}. Again, owing to the periodicity, the zeros are all the integer multiples of π{\displaystyle \pi } and the poles are odd multiples of π/2{\displaystyle \pi /2}, all having the same residue. The poles correspond to vertical asymptotes limx→(π/2)−tan⁡(x)=+∞,limx→(π/2)+tan⁡(x)=−∞.{\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot⁡(z)=cos⁡(z)/sin⁡(z){\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π{\displaystyle \pi } and simple zeros at odd multiples of π/2{\displaystyle \pi /2}. The poles correspond to vertical asymptotes limx→0−cot⁡(x)=−∞,limx→0+cot⁡(x)=+∞.{\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities.
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval[0,π/2], seeProofs of trigonometric identities). For non-geometrical proofs using only tools ofcalculus, one may use directly the differential equations, in a way that is similar to that of theabove proofof Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant areeven functions; the other trigonometric functions areodd functions. That is: All trigonometric functions areperiodic functionsof period2π. This is the smallest period, except for the tangent and the cotangent, which haveπas smallest period. This means that, for every integerk, one has SeePeriodicity and asymptotes. The Pythagorean identity, is the expression of thePythagorean theoremin terms of trigonometric functions. It is Dividing through by eithercos2⁡x{\displaystyle \cos ^{2}x}orsin2⁡x{\displaystyle \sin ^{2}x}gives and The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date toPtolemy(seeAngle sum and difference identities). One can also produce them algebraically usingEuler's formula. When the two angles are equal, the sum formulas reduce to simpler equations known as thedouble-angle formulae. These identities can be used to derive theproduct-to-sum identities. 
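A quick numerical spot-check of the Pythagorean identity, the angle-sum formulas, and their double-angle special case (an illustrative sketch):

```python
import math

a, b = 0.6, 1.1

# Pythagorean identity
assert math.isclose(math.sin(a) ** 2 + math.cos(a) ** 2, 1.0)

# Angle-sum formulas
assert math.isclose(math.sin(a + b),
                    math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))
assert math.isclose(math.cos(a + b),
                    math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))

# Double-angle formulas: the special case b = a
assert math.isclose(math.sin(2 * a), 2 * math.sin(a) * math.cos(a))
assert math.isclose(math.cos(2 * a), math.cos(a) ** 2 - math.sin(a) ** 2)
```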
By settingt=tan⁡12θ,{\displaystyle t=\tan {\tfrac {1}{2}}\theta ,}all trigonometric functions ofθ{\displaystyle \theta }can be expressed asrational fractionsoft{\displaystyle t}: Together with this is thetangent half-angle substitution, which reduces the computation ofintegralsandantiderivativesof trigonometric functions to that of rational fractions. Thederivativesof trigonometric functions result from those of sine and cosine by applying thequotient rule. The values given for theantiderivativesin the following table can be verified by differentiating them. The numberCis aconstant of integration. Note: For0<x<π{\displaystyle 0<x<\pi }the integral ofcsc⁡x{\displaystyle \csc x}can also be written as−arsinh⁡(cot⁡x),{\displaystyle -\operatorname {arsinh} (\cot x),}and for the integral ofsec⁡x{\displaystyle \sec x}for−π/2<x<π/2{\displaystyle -\pi /2<x<\pi /2}asarsinh⁡(tan⁡x),{\displaystyle \operatorname {arsinh} (\tan x),}wherearsinh{\displaystyle \operatorname {arsinh} }is theinverse hyperbolic sine. Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: The trigonometric functions are periodic, and hence notinjective, so strictly speaking, they do not have aninverse function. However, on each interval on which a trigonometric function ismonotonic, one can define an inverse function, and this defines inverse trigonometric functions asmultivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thusbijectivefrom this interval to its image by the function. The common choice for this interval, called the set ofprincipal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notationssin−1,cos−1, etc. are often used forarcsinandarccos, etc. 
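The half-angle parametrization above can be checked numerically; with t = tan(θ/2), the standard rational expressions are sin θ = 2t/(1+t²), cos θ = (1−t²)/(1+t²), and tan θ = 2t/(1−t²) (an illustrative sketch):

```python
import math

theta = 0.9
t = math.tan(theta / 2)

# With t = tan(theta/2), every trigonometric function of theta is a
# rational function of t:
assert math.isclose(math.sin(theta), 2 * t / (1 + t * t))
assert math.isclose(math.cos(theta), (1 - t * t) / (1 + t * t))
assert math.isclose(math.tan(theta), 2 * t / (1 - t * t))
```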
When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms ofcomplex logarithms. In this sectionA,B,Cdenote the three (interior) angles of a triangle, anda,b,cdenote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sidesa,b, andcand angles opposite those sidesA,BandC:sin⁡Aa=sin⁡Bb=sin⁡Cc=2Δabc,{\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},}whereΔis the area of the triangle, or, equivalently,asin⁡A=bsin⁡B=csin⁡C=2R,{\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,}whereRis the triangle'scircumradius. It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring intriangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of thePythagorean theorem:c2=a2+b2−2abcos⁡C,{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,}or equivalently,cos⁡C=a2+b2−c22ab.{\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle atCis opposite to the sidec. This theorem can be proved by dividing the triangle into two right ones and using thePythagorean theorem. The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. 
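A worked example of the two laws, with hypothetical side lengths and angle: given two sides and the included angle (SAS), the law of cosines yields the third side, and the law of sines then recovers the remaining angles:

```python
import math

# SAS triangle: two sides and the included angle are known.
a, b = 5.0, 7.0
C = math.radians(49.0)  # angle between sides a and b, opposite side c

# Law of cosines gives the third side ...
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))
# ... and the law of sines an opposite angle (a is not the longest side,
# so A is acute and asin gives the correct value)
A = math.asin(a * math.sin(C) / c)
B = math.pi - A - C

# The common ratio a/sin(A) = b/sin(B) = c/sin(C) (= 2R, the circumdiameter)
ratio = a / math.sin(A)
assert math.isclose(ratio, b / math.sin(B))
assert math.isclose(ratio, c / math.sin(C))
```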
It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: Ifsis the triangle's semiperimeter, (a+b+c)/2, andris the radius of the triangle'sincircle, thenrsis the triangle's area. ThereforeHeron's formulaimplies that: The law of cotangents says that:[30] It follows that The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describesimple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections ofuniform circular motion. Trigonometric functions also prove to be useful in the study of generalperiodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or lightwaves.[31] Under rather general conditions, a periodic functionf(x)can be expressed as a sum of sine waves or cosine waves in aFourier series.[32]Denoting the sine or cosinebasis functionsbyφk, the expansion of the periodic functionf(t)takes the form:f(t)=∑k=1∞ckφk(t).{\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, thesquare wavecan be written as theFourier seriesfsquare(t)=4π∑k=1∞sin⁡((2k−1)t)2k−1.{\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of asawtooth waveare shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. 
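Returning to the square-wave expansion above: away from the discontinuities, a partial sum with a few hundred terms is already close to ±1, as this sketch checks (tolerances are illustrative):

```python
import math

def square_wave_partial_sum(t, n_terms):
    # f(t) = (4/pi) * sum_{k=1}^{n_terms} sin((2k - 1) t) / (2k - 1)
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * t) / (2 * k - 1)
                               for k in range(1, n_terms + 1))

# The limit is the square wave equal to +1 on (0, pi) and -1 on (pi, 2*pi).
assert abs(square_wave_partial_sum(1.0, 200) - 1.0) < 0.05
assert abs(square_wave_partial_sum(4.0, 200) + 1.0) < 0.05
```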
The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[33] (See Aryabhata's sine table.) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles.[34] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables.[35][36] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.[36] The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series.[37] (See Madhava series and Madhava's sine table.) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates.[38] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583).[39] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie.[40] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x.[41] Though they are defined as ratios of sides of a right triangle, and thus appear to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument.
The task of assimilating circular functions into algebraic expressions was accomplished by Euler in hisIntroduction to the Analysis of the Infinite(1748). His method was to show that the sine and cosine functions arealternating seriesformed from the even and odd terms respectively of theexponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin.,cos.,tang.,cot.,sec., andcosec.).[33] A few functions were common historically, but are now seldom used, such as thechord,versine(which appeared in the earliest tables[33]),haversine,coversine,[42]half-tangent (tangent of half an angle), andexsecant.List of trigonometric identitiesshows more relations between these functions. Historically, trigonometric functions were often combined withlogarithmsin compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent.[43][44][45][46] The wordsinederives[47]fromLatinsinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of atoga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic wordjaib, meaning "pocket" or "fold" in the twelfth-century translations of works byAl-Battaniandal-KhwārizmīintoMedieval Latin.[48]The choice was based on a misreading of the Arabic written formj-y-b(جيب), which itself originated as atransliterationfrom Sanskritjīvā, which along with its synonymjyā(the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted fromAncient Greekχορδή"string".[49] The wordtangentcomes from Latintangensmeaning "touching", since the linetouchesthe circle of unit radius, whereassecantstems from Latinsecans—"cutting"—since the linecutsthe circle.[50] The prefix "co-" (in "cosine", "cotangent", "cosecant") is found inEdmund Gunter'sCanon triangulorum(1620), which defines thecosinusas an abbreviation of thesinus complementi(sine of thecomplementary 
angle) and proceeds to define thecotangenssimilarly.[51][52]
https://en.wikipedia.org/wiki/Trigonometric_functions
Inmathematics,trigonometric integralsare afamilyofnonelementary integralsinvolvingtrigonometric functions. The differentsineintegral definitions areSi⁡(x)=∫0xsin⁡ttdt{\displaystyle \operatorname {Si} (x)=\int _{0}^{x}{\frac {\sin t}{t}}\,dt}si⁡(x)=−∫x∞sin⁡ttdt.{\displaystyle \operatorname {si} (x)=-\int _{x}^{\infty }{\frac {\sin t}{t}}\,dt~.} Note that the integrandsin⁡(t)t{\displaystyle {\frac {\sin(t)}{t}}}is thesinc function, and also the zerothspherical Bessel function. Sincesincis anevenentire function(holomorphicover the entire complex plane),Siis entire, odd, and the integral in its definition can be taken alongany pathconnecting the endpoints. By definition,Si(x)is theantiderivativeofsinx/xwhose value is zero atx= 0, andsi(x)is the antiderivative whose value is zero atx= ∞. Their difference is given by theDirichlet integral,Si⁡(x)−si⁡(x)=∫0∞sin⁡ttdt=π2orSi⁡(x)=π2+si⁡(x).{\displaystyle \operatorname {Si} (x)-\operatorname {si} (x)=\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt={\frac {\pi }{2}}\quad {\text{ or }}\quad \operatorname {Si} (x)={\frac {\pi }{2}}+\operatorname {si} (x)~.} Insignal processing, the oscillations of the sine integral causeovershootandringing artifactswhen using thesinc filter, andfrequency domainringing if using a truncated sinc filter as alow-pass filter. Related is theGibbs phenomenon: If the sine integral is considered as theconvolutionof the sinc function with theHeaviside step function, this corresponds to truncating theFourier series, which is the cause of the Gibbs phenomenon. The differentcosineintegral definitions areCin⁡(x)≡∫0x1−cos⁡ttd⁡t.{\displaystyle \operatorname {Cin} (x)~\equiv ~\int _{0}^{x}{\frac {\ 1-\cos t\ }{t}}\ \operatorname {d} t~.} Cinis aneven,entire function. For that reason, some texts defineCinas the primary function, and deriveCiin terms ofCin . 
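The sine integral and the Dirichlet-integral limit Si(x) → π/2 can be checked with naive quadrature (an illustrative sketch, not an efficient evaluation method):

```python
import math

def Si(x, n=100_000):
    # Si(x) = integral_0^x sin(t)/t dt (trapezoidal rule; the integrand is
    # the sinc function, extended by sinc(0) = 1)
    def sinc(t):
        return 1.0 if t == 0.0 else math.sin(t) / t
    h = x / n
    total = 0.5 * (sinc(0.0) + sinc(x))
    for k in range(1, n):
        total += sinc(k * h)
    return total * h

# Si(pi) is the Wilbraham-Gibbs constant
assert math.isclose(Si(math.pi), 1.8519370, rel_tol=1e-6)

# Dirichlet integral: Si(x) -> pi/2 as x -> infinity
assert abs(Si(200.0) - math.pi / 2) < 0.01
```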
Ci⁡(x)≡−∫x∞cos⁡ttd⁡t{\displaystyle \operatorname {Ci} (x)~~\equiv ~-\int _{x}^{\infty }{\frac {\ \cos t\ }{t}}\ \operatorname {d} t~}=γ+ln⁡x−∫0x1−cos⁡ttd⁡t{\displaystyle ~~\qquad ~=~~\gamma ~+~\ln x~-~\int _{0}^{x}{\frac {\ 1-\cos t\ }{t}}\ \operatorname {d} t~} =γ+ln⁡x−Cin⁡x{\displaystyle ~~\qquad ~=~~\gamma ~+~\ln x~-~\operatorname {Cin} x~}for|Arg⁡(x)|<π,{\displaystyle ~{\Bigl |}\ \operatorname {Arg} (x)\ {\Bigr |}<\pi \ ,}whereγ≈ 0.57721566490 ...is theEuler–Mascheroni constant. Some texts useciinstead ofCi. The restriction onArg(x)is to avoid a discontinuity (shown as the orange vs blue area on the left half of theplot above) that arises because of abranch cutin the standardlogarithm function(ln). Ci(x)is the antiderivative of⁠cosx/x⁠(which vanishes asx→∞{\displaystyle \ x\to \infty \ }). The two definitions are related byCi⁡(x)=γ+ln⁡x−Cin⁡(x).{\displaystyle \operatorname {Ci} (x)=\gamma +\ln x-\operatorname {Cin} (x)~.} Thehyperbolic sineintegral is defined asShi⁡(x)=∫0xsinh⁡(t)tdt.{\displaystyle \operatorname {Shi} (x)=\int _{0}^{x}{\frac {\sinh(t)}{t}}\,dt.} It is related to the ordinary sine integral bySi⁡(ix)=iShi⁡(x).{\displaystyle \operatorname {Si} (ix)=i\operatorname {Shi} (x).} Thehyperbolic cosineintegral is Chi⁡(x)=γ+ln⁡x+∫0xcosh⁡t−1tdtfor|Arg⁡(x)|<π,{\displaystyle \operatorname {Chi} (x)=\gamma +\ln x+\int _{0}^{x}{\frac {\cosh t-1}{t}}\,dt\qquad ~{\text{ for }}~\left|\operatorname {Arg} (x)\right|<\pi ~,}whereγ{\displaystyle \gamma }is theEuler–Mascheroni constant. 
It has the series expansionChi⁡(x)=γ+ln⁡(x)+x24+x496+x64320+x8322560+x1036288000+O(x12).{\displaystyle \operatorname {Chi} (x)=\gamma +\ln(x)+{\frac {x^{2}}{4}}+{\frac {x^{4}}{96}}+{\frac {x^{6}}{4320}}+{\frac {x^{8}}{322560}}+{\frac {x^{10}}{36288000}}+O(x^{12}).} Trigonometric integrals can be understood in terms of the so-called "auxiliary functions"f(x)≡∫0∞sin⁡(t)t+xdt=∫0∞e−xtt2+1dt=Ci⁡(x)sin⁡(x)+[π2−Si⁡(x)]cos⁡(x),g(x)≡∫0∞cos⁡(t)t+xdt=∫0∞te−xtt2+1dt=−Ci⁡(x)cos⁡(x)+[π2−Si⁡(x)]sin⁡(x).{\displaystyle {\begin{array}{rcl}f(x)&\equiv &\int _{0}^{\infty }{\frac {\sin(t)}{t+x}}\,dt&=&\int _{0}^{\infty }{\frac {e^{-xt}}{t^{2}+1}}\,dt&=&\operatorname {Ci} (x)\sin(x)+\left[{\frac {\pi }{2}}-\operatorname {Si} (x)\right]\cos(x)~,\\g(x)&\equiv &\int _{0}^{\infty }{\frac {\cos(t)}{t+x}}\,dt&=&\int _{0}^{\infty }{\frac {te^{-xt}}{t^{2}+1}}\,dt&=&-\operatorname {Ci} (x)\cos(x)+\left[{\frac {\pi }{2}}-\operatorname {Si} (x)\right]\sin(x)~.\end{array}}}Using these functions, the trigonometric integrals may be re-expressed as (cf. Abramowitz & Stegun,p. 232)π2−Si⁡(x)=−si⁡(x)=f(x)cos⁡(x)+g(x)sin⁡(x),andCi⁡(x)=f(x)sin⁡(x)−g(x)cos⁡(x).{\displaystyle {\begin{array}{rcl}{\frac {\pi }{2}}-\operatorname {Si} (x)=-\operatorname {si} (x)&=&f(x)\cos(x)+g(x)\sin(x)~,\qquad {\text{ and }}\\\operatorname {Ci} (x)&=&f(x)\sin(x)-g(x)\cos(x)~.\\\end{array}}} Thespiralformed by parametric plot ofsi, ciis known as Nielsen's spiral.x(t)=a×ci⁡(t){\displaystyle x(t)=a\times \operatorname {ci} (t)}y(t)=a×si⁡(t){\displaystyle y(t)=a\times \operatorname {si} (t)} The spiral is closely related to theFresnel integralsand theEuler spiral. Nielsen's spiral has applications in vision processing, road and track construction and other areas.[1] Various expansions can be used for evaluation of trigonometric integrals, depending on the range of the argument. 
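The identity π/2 − Si(x) = f(x) cos(x) + g(x) sin(x) from the auxiliary-function table can be verified numerically, computing f and g by truncated quadrature and Si from its convergent Maclaurin series (an illustrative sketch; truncation point and grid size are ad hoc):

```python
import math

def f_aux(x, T=40.0, n=100_000):
    # f(x) = integral_0^inf exp(-x t)/(1 + t^2) dt, truncated at t = T
    h = T / n
    vals = [math.exp(-x * k * h) / (1 + (k * h) ** 2) for k in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def g_aux(x, T=40.0, n=100_000):
    # g(x) = integral_0^inf t exp(-x t)/(1 + t^2) dt, truncated at t = T
    h = T / n
    vals = [k * h * math.exp(-x * k * h) / (1 + (k * h) ** 2)
            for k in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def Si_series(x, terms=40):
    # convergent series Si(x) = sum_{n>=0} (-1)^n x^(2n+1)/((2n+1)(2n+1)!)
    return sum((-1) ** k * x ** (2 * k + 1)
               / ((2 * k + 1) * math.factorial(2 * k + 1))
               for k in range(terms))

# pi/2 - Si(x) = f(x) cos(x) + g(x) sin(x)
x = 2.0
lhs = math.pi / 2 - Si_series(x)
rhs = f_aux(x) * math.cos(x) + g_aux(x) * math.sin(x)
assert math.isclose(lhs, rhs, abs_tol=1e-5)
```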
Si⁡(x)∼π2−cos⁡xx(1−2!x2+4!x4−6!x6⋯)−sin⁡xx(1x−3!x3+5!x5−7!x7⋯){\displaystyle \operatorname {Si} (x)\sim {\frac {\pi }{2}}-{\frac {\cos x}{x}}\left(1-{\frac {2!}{x^{2}}}+{\frac {4!}{x^{4}}}-{\frac {6!}{x^{6}}}\cdots \right)-{\frac {\sin x}{x}}\left({\frac {1}{x}}-{\frac {3!}{x^{3}}}+{\frac {5!}{x^{5}}}-{\frac {7!}{x^{7}}}\cdots \right)}Ci⁡(x)∼sin⁡xx(1−2!x2+4!x4−6!x6⋯)−cos⁡xx(1x−3!x3+5!x5−7!x7⋯).{\displaystyle \operatorname {Ci} (x)\sim {\frac {\sin x}{x}}\left(1-{\frac {2!}{x^{2}}}+{\frac {4!}{x^{4}}}-{\frac {6!}{x^{6}}}\cdots \right)-{\frac {\cos x}{x}}\left({\frac {1}{x}}-{\frac {3!}{x^{3}}}+{\frac {5!}{x^{5}}}-{\frac {7!}{x^{7}}}\cdots \right)~.} These series areasymptoticand divergent, although can be used for estimates and even precise evaluation atℜ(x) ≫ 1. Si⁡(x)=∑n=0∞(−1)nx2n+1(2n+1)(2n+1)!=x−x33!⋅3+x55!⋅5−x77!⋅7±⋯{\displaystyle \operatorname {Si} (x)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)(2n+1)!}}=x-{\frac {x^{3}}{3!\cdot 3}}+{\frac {x^{5}}{5!\cdot 5}}-{\frac {x^{7}}{7!\cdot 7}}\pm \cdots }Ci⁡(x)=γ+ln⁡x+∑n=1∞(−1)nx2n2n(2n)!=γ+ln⁡x−x22!⋅2+x44!⋅4∓⋯{\displaystyle \operatorname {Ci} (x)=\gamma +\ln x+\sum _{n=1}^{\infty }{\frac {(-1)^{n}x^{2n}}{2n(2n)!}}=\gamma +\ln x-{\frac {x^{2}}{2!\cdot 2}}+{\frac {x^{4}}{4!\cdot 4}}\mp \cdots } These series are convergent at any complexx, although for|x| ≫ 1, the series will converge slowly initially, requiring many terms for high precision. 
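The convergent series can be implemented directly. The sketch below evaluates the Ci series and cross-checks it against a quadrature of Cin via Ci(x) = γ + ln x − Cin(x); the Euler–Mascheroni constant is hard-coded, and the grid size is illustrative:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def Ci_series(x, terms=40):
    # Ci(x) = gamma + ln(x) + sum_{n>=1} (-1)^n x^(2n) / (2n (2n)!)
    s = sum((-1) ** n * x ** (2 * n) / (2 * n * math.factorial(2 * n))
            for n in range(1, terms + 1))
    return EULER_GAMMA + math.log(x) + s

def Cin_quadrature(x, n=100_000):
    # Cin(x) = integral_0^x (1 - cos t)/t dt; the integrand tends to 0 at t = 0
    def integrand(t):
        return 0.0 if t == 0.0 else (1 - math.cos(t)) / t
    h = x / n
    total = 0.5 * (integrand(0.0) + integrand(x))
    for k in range(1, n):
        total += integrand(k * h)
    return total * h

# Known value, and the relation Ci(x) = gamma + ln(x) - Cin(x)
assert math.isclose(Ci_series(2.0), 0.4229808, rel_tol=1e-5)
x = 3.0
assert math.isclose(Ci_series(x),
                    EULER_GAMMA + math.log(x) - Cin_quadrature(x),
                    abs_tol=1e-6)
```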
From the Maclaurin series expansion of sine:sinx=x−x33!+x55!−x77!+x99!−x1111!+⋯{\displaystyle \sin \,x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+{\frac {x^{9}}{9!}}-{\frac {x^{11}}{11!}}+\cdots }sinxx=1−x23!+x45!−x67!+x89!−x1011!+⋯{\displaystyle {\frac {\sin \,x}{x}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+{\frac {x^{8}}{9!}}-{\frac {x^{10}}{11!}}+\cdots }∴∫sinxxdx=x−x33!⋅3+x55!⋅5−x77!⋅7+x99!⋅9−x1111!⋅11+⋯{\displaystyle \therefore \int {\frac {\sin \,x}{x}}dx=x-{\frac {x^{3}}{3!\cdot 3}}+{\frac {x^{5}}{5!\cdot 5}}-{\frac {x^{7}}{7!\cdot 7}}+{\frac {x^{9}}{9!\cdot 9}}-{\frac {x^{11}}{11!\cdot 11}}+\cdots } The functionE1⁡(z)=∫1∞exp⁡(−zt)tdtforℜ(z)≥0{\displaystyle \operatorname {E} _{1}(z)=\int _{1}^{\infty }{\frac {\exp(-zt)}{t}}\,dt\qquad ~{\text{ for }}~\Re (z)\geq 0}is called theexponential integral. It is closely related toSiandCi,E1⁡(ix)=i(−π2+Si⁡(x))−Ci⁡(x)=isi⁡(x)−ci⁡(x)forx>0.{\displaystyle \operatorname {E} _{1}(ix)=i\left(-{\frac {\pi }{2}}+\operatorname {Si} (x)\right)-\operatorname {Ci} (x)=i\operatorname {si} (x)-\operatorname {ci} (x)\qquad ~{\text{ for }}~x>0~.} As each respective function is analytic except for the cut at negative values of the argument, the area of validity of the relation should be extended to (Outside this range, additional terms which are integer factors ofπappear in the expression.) 
Cases of imaginary argument of the generalized integro-exponential function are∫1∞cos⁡(ax)ln⁡xxdx=−π224+γ(γ2+ln⁡a)+ln2⁡a2+∑n≥1(−a2)n(2n)!(2n)2,{\displaystyle \int _{1}^{\infty }\cos(ax){\frac {\ln x}{x}}\,dx=-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a\right)+{\frac {\ln ^{2}a}{2}}+\sum _{n\geq 1}{\frac {(-a^{2})^{n}}{(2n)!(2n)^{2}}}~,}which is the real part of∫1∞eiaxln⁡xxdx=−π224+γ(γ2+ln⁡a)+ln2⁡a2−π2i(γ+ln⁡a)+∑n≥1(ia)nn!n2.{\displaystyle \int _{1}^{\infty }e^{iax}{\frac {\ln x}{x}}\,dx=-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a\right)+{\frac {\ln ^{2}a}{2}}-{\frac {\pi }{2}}i\left(\gamma +\ln a\right)+\sum _{n\geq 1}{\frac {(ia)^{n}}{n!n^{2}}}~.} Similarly∫1∞eiaxln⁡xx2dx=1+ia[−π224+γ(γ2+ln⁡a−1)+ln2⁡a2−ln⁡a+1]+πa2(γ+ln⁡a−1)+∑n≥1(ia)n+1(n+1)!n2.{\displaystyle \int _{1}^{\infty }e^{iax}{\frac {\ln x}{x^{2}}}\,dx=1+ia\left[-{\frac {\pi ^{2}}{24}}+\gamma \left({\frac {\gamma }{2}}+\ln a-1\right)+{\frac {\ln ^{2}a}{2}}-\ln a+1\right]+{\frac {\pi a}{2}}{\Bigl (}\gamma +\ln a-1{\Bigr )}+\sum _{n\geq 1}{\frac {(ia)^{n+1}}{(n+1)!n^{2}}}~.} Padé approximantsof the convergent Taylor series provide an efficient way to evaluate the functions for small arguments. The following formulae, given by Rowe et al. 
(2015),[2]are accurate to better than10−16for0 ≤x≤ 4,Si⁡(x)≈x⋅(1−4.54393409816329991⋅10−2⋅x2+1.15457225751016682⋅10−3⋅x4−1.41018536821330254⋅10−5⋅x6+9.43280809438713025⋅10−8⋅x8−3.53201978997168357⋅10−10⋅x10+7.08240282274875911⋅10−13⋅x12−6.05338212010422477⋅10−16⋅x141+1.01162145739225565⋅10−2⋅x2+4.99175116169755106⋅10−5⋅x4+1.55654986308745614⋅10−7⋅x6+3.28067571055789734⋅10−10⋅x8+4.5049097575386581⋅10−13⋅x10+3.21107051193712168⋅10−16⋅x12)Ci⁡(x)≈γ+ln⁡(x)+x2⋅(−0.25+7.51851524438898291⋅10−3⋅x2−1.27528342240267686⋅10−4⋅x4+1.05297363846239184⋅10−6⋅x6−4.68889508144848019⋅10−9⋅x8+1.06480802891189243⋅10−11⋅x10−9.93728488857585407⋅10−15⋅x121+1.1592605689110735⋅10−2⋅x2+6.72126800814254432⋅10−5⋅x4+2.55533277086129636⋅10−7⋅x6+6.97071295760958946⋅10−10⋅x8+1.38536352772778619⋅10−12⋅x10+1.89106054713059759⋅10−15⋅x12+1.39759616731376855⋅10−18⋅x14){\displaystyle {\begin{array}{rcl}\operatorname {Si} (x)&\approx &x\cdot \left({\frac {\begin{array}{l}1-4.54393409816329991\cdot 10^{-2}\cdot x^{2}+1.15457225751016682\cdot 10^{-3}\cdot x^{4}-1.41018536821330254\cdot 10^{-5}\cdot x^{6}\\~~~+9.43280809438713025\cdot 10^{-8}\cdot x^{8}-3.53201978997168357\cdot 10^{-10}\cdot x^{10}+7.08240282274875911\cdot 10^{-13}\cdot x^{12}\\~~~-6.05338212010422477\cdot 10^{-16}\cdot x^{14}\end{array}}{\begin{array}{l}1+1.01162145739225565\cdot 10^{-2}\cdot x^{2}+4.99175116169755106\cdot 10^{-5}\cdot x^{4}+1.55654986308745614\cdot 10^{-7}\cdot x^{6}\\~~~+3.28067571055789734\cdot 10^{-10}\cdot x^{8}+4.5049097575386581\cdot 10^{-13}\cdot x^{10}+3.21107051193712168\cdot 10^{-16}\cdot x^{12}\end{array}}}\right)\\&~&\\\operatorname {Ci} (x)&\approx &\gamma +\ln(x)+\\&&x^{2}\cdot \left({\frac {\begin{array}{l}-0.25+7.51851524438898291\cdot 10^{-3}\cdot x^{2}-1.27528342240267686\cdot 10^{-4}\cdot x^{4}+1.05297363846239184\cdot 10^{-6}\cdot x^{6}\\~~~-4.68889508144848019\cdot 10^{-9}\cdot x^{8}+1.06480802891189243\cdot 10^{-11}\cdot x^{10}-9.93728488857585407\cdot 10^{-15}\cdot 
x^{12}\\\end{array}}{\begin{array}{l}1+1.1592605689110735\cdot 10^{-2}\cdot x^{2}+6.72126800814254432\cdot 10^{-5}\cdot x^{4}+2.55533277086129636\cdot 10^{-7}\cdot x^{6}\\~~~+6.97071295760958946\cdot 10^{-10}\cdot x^{8}+1.38536352772778619\cdot 10^{-12}\cdot x^{10}+1.89106054713059759\cdot 10^{-15}\cdot x^{12}\\~~~+1.39759616731376855\cdot 10^{-18}\cdot x^{14}\\\end{array}}}\right)\end{array}}} The integrals may be evaluated indirectly viaauxiliary functionsf(x){\displaystyle f(x)}andg(x){\displaystyle g(x)}, which are defined by Forx≥4{\displaystyle x\geq 4}thePadé rational functionsgiven below approximatef(x){\displaystyle f(x)}andg(x){\displaystyle g(x)}with error less than 10−16:[2] f(x)≈1x⋅(1+7.44437068161936700618⋅102⋅x−2+1.96396372895146869801⋅105⋅x−4+2.37750310125431834034⋅107⋅x−6+1.43073403821274636888⋅109⋅x−8+4.33736238870432522765⋅1010⋅x−10+6.40533830574022022911⋅1011⋅x−12+4.20968180571076940208⋅1012⋅x−14+1.00795182980368574617⋅1013⋅x−16+4.94816688199951963482⋅1012⋅x−18−4.94701168645415959931⋅1011⋅x−201+7.46437068161927678031⋅102⋅x−2+1.97865247031583951450⋅105⋅x−4+2.41535670165126845144⋅107⋅x−6+1.47478952192985464958⋅109⋅x−8+4.58595115847765779830⋅1010⋅x−10+7.08501308149515401563⋅1011⋅x−12+5.06084464593475076774⋅1012⋅x−14+1.43468549171581016479⋅1013⋅x−16+1.11535493509914254097⋅1013⋅x−18)g(x)≈1x2⋅(1+8.1359520115168615⋅102⋅x−2+2.35239181626478200⋅105⋅x−4+3.12557570795778731⋅107⋅x−6+2.06297595146763354⋅109⋅x−8+6.83052205423625007⋅1010⋅x−10+1.09049528450362786⋅1012⋅x−12+7.57664583257834349⋅1012⋅x−14+1.81004487464664575⋅1013⋅x−16+6.43291613143049485⋅1012⋅x−18−1.36517137670871689⋅1012⋅x−201+8.19595201151451564⋅102⋅x−2+2.40036752835578777⋅105⋅x−4+3.26026661647090822⋅107⋅x−6+2.23355543278099360⋅109⋅x−8+7.87465017341829930⋅1010⋅x−10+1.39866710696414565⋅1012⋅x−12+1.17164723371736605⋅1013⋅x−14+4.01839087307656620⋅1013⋅x−16+3.99653257887490811⋅1013⋅x−18){\displaystyle {\begin{array}{rcl}f(x)&\approx &{\dfrac {1}{x}}\cdot \left({\frac 
{\begin{array}{l}1+7.44437068161936700618\cdot 10^{2}\cdot x^{-2}+1.96396372895146869801\cdot 10^{5}\cdot x^{-4}+2.37750310125431834034\cdot 10^{7}\cdot x^{-6}\\~~~+1.43073403821274636888\cdot 10^{9}\cdot x^{-8}+4.33736238870432522765\cdot 10^{10}\cdot x^{-10}+6.40533830574022022911\cdot 10^{11}\cdot x^{-12}\\~~~+4.20968180571076940208\cdot 10^{12}\cdot x^{-14}+1.00795182980368574617\cdot 10^{13}\cdot x^{-16}+4.94816688199951963482\cdot 10^{12}\cdot x^{-18}\\~~~-4.94701168645415959931\cdot 10^{11}\cdot x^{-20}\end{array}}{\begin{array}{l}1+7.46437068161927678031\cdot 10^{2}\cdot x^{-2}+1.97865247031583951450\cdot 10^{5}\cdot x^{-4}+2.41535670165126845144\cdot 10^{7}\cdot x^{-6}\\~~~+1.47478952192985464958\cdot 10^{9}\cdot x^{-8}+4.58595115847765779830\cdot 10^{10}\cdot x^{-10}+7.08501308149515401563\cdot 10^{11}\cdot x^{-12}\\~~~+5.06084464593475076774\cdot 10^{12}\cdot x^{-14}+1.43468549171581016479\cdot 10^{13}\cdot x^{-16}+1.11535493509914254097\cdot 10^{13}\cdot x^{-18}\end{array}}}\right)\\&&\\g(x)&\approx &{\dfrac {1}{x^{2}}}\cdot \left({\frac {\begin{array}{l}1+8.1359520115168615\cdot 10^{2}\cdot x^{-2}+2.35239181626478200\cdot 10^{5}\cdot x^{-4}+3.12557570795778731\cdot 10^{7}\cdot x^{-6}\\~~~+2.06297595146763354\cdot 10^{9}\cdot x^{-8}+6.83052205423625007\cdot 10^{10}\cdot x^{-10}+1.09049528450362786\cdot 10^{12}\cdot x^{-12}\\~~~+7.57664583257834349\cdot 10^{12}\cdot x^{-14}+1.81004487464664575\cdot 10^{13}\cdot x^{-16}+6.43291613143049485\cdot 10^{12}\cdot x^{-18}\\~~~-1.36517137670871689\cdot 10^{12}\cdot x^{-20}\end{array}}{\begin{array}{l}1+8.19595201151451564\cdot 10^{2}\cdot x^{-2}+2.40036752835578777\cdot 10^{5}\cdot x^{-4}+3.26026661647090822\cdot 10^{7}\cdot x^{-6}\\~~~+2.23355543278099360\cdot 10^{9}\cdot x^{-8}+7.87465017341829930\cdot 10^{10}\cdot x^{-10}+1.39866710696414565\cdot 10^{12}\cdot x^{-12}\\~~~+1.17164723371736605\cdot 10^{13}\cdot x^{-14}+4.01839087307656620\cdot 10^{13}\cdot x^{-16}+3.99653257887490811\cdot 10^{13}\cdot 
x^{-18}\end{array}}}\right)\\\end{array}}}
https://en.wikipedia.org/wiki/Trigonometric_integral
In statistics and, in particular, in the fitting of linear or logistic regression models, the elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the lasso and ridge methods. Nevertheless, elastic net regularization is typically more accurate than both methods with regard to reconstruction.[1] The elastic net method overcomes the limitations of the LASSO (least absolute shrinkage and selection operator) method, which uses a penalty function based on the ℓ1-norm of the coefficients, ‖β‖1{\displaystyle \|\beta \|_{1}}. Use of this penalty function has several limitations.[2] For example, in the "large p, small n" case (high-dimensional data with few examples), the LASSO selects at most n variables before it saturates. Also, if there is a group of highly correlated variables, then the LASSO tends to select one variable from the group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part (‖β‖2{\displaystyle \|\beta \|^{2}}) to the penalty, which when used alone is ridge regression (known also as Tikhonov regularization). The estimates from the elastic net method are defined by β^=argminβ(‖y−Xβ‖2+λ2‖β‖2+λ1‖β‖1).{\displaystyle {\hat {\beta }}={\underset {\beta }{\operatorname {arg\,min} }}\left(\|y-X\beta \|^{2}+\lambda _{2}\|\beta \|^{2}+\lambda _{1}\|\beta \|_{1}\right).} The quadratic penalty term makes the loss function strongly convex, and it therefore has a unique minimum. The elastic net method includes the LASSO and ridge regression as special cases: λ1=λ,λ2=0{\displaystyle \lambda _{1}=\lambda ,\lambda _{2}=0} recovers the LASSO, and λ1=0,λ2=λ{\displaystyle \lambda _{1}=0,\lambda _{2}=\lambda } recovers ridge regression. Meanwhile, the naive version of the elastic net method finds an estimator in a two-stage procedure: first, for each fixed λ2{\displaystyle \lambda _{2}}, it finds the ridge regression coefficients, and then it performs a LASSO-type shrinkage. This kind of estimation incurs a double amount of shrinkage, which leads to increased bias and poor predictions. 
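The objective above can be minimized directly by proximal gradient descent (ISTA), since the ℓ1 term has a simple proximal operator (soft-thresholding). The following is a minimal generic sketch of that idea, not the algorithm proposed in the cited references:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net(X, y, lam1, lam2, n_iter=5000):
    """Minimize ||y - X b||^2 + lam2 ||b||^2 + lam1 ||b||_1 by proximal
    gradient descent (ISTA).  A generic solver sketch, assuming the
    penalty parameterization used in the text above."""
    n, p = X.shape
    # Lipschitz constant of the smooth part's gradient: 2 (lmax(X^T X) + lam2)
    L = 2.0 * (np.linalg.eigvalsh(X.T @ X).max() + lam2)
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = 2.0 * (X.T @ (X @ b - y) + lam2 * b)   # gradient of smooth part
        b = soft_threshold(b - grad / L, lam1 / L)    # prox step for the l1 term
    return b
```

Setting lam1 = lam2 = 0 reduces the iteration to plain gradient descent on the least-squares loss, matching the special-case structure described above.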
To improve the prediction performance, the coefficients of the naive version of the elastic net are sometimes rescaled by multiplying the estimated coefficients by (1+λ2){\displaystyle (1+\lambda _{2})}.[2] The elastic net method has been applied in a variety of domains. It was proven in 2014 that the elastic net can be reduced to the linear support vector machine.[7] A similar reduction was previously proven for the LASSO.[8] The authors showed that for every instance of the elastic net, an artificial binary classification problem can be constructed such that the hyperplane solution of a linear support vector machine (SVM) is identical to the solution β{\displaystyle \beta } (after rescaling). The reduction immediately enables the use of highly optimized SVM solvers for elastic net problems. It also enables the use of GPU acceleration, which is often already used for large-scale SVM solvers.[9] The reduction is a simple transformation of the original data and regularization constants into new artificial data instances and a regularization constant that together specify a binary classification problem and the SVM regularization constant. Here, y2{\displaystyle y_{2}} consists of binary labels −1,1{\displaystyle {-1,1}}. When 2p>n{\displaystyle 2p>n} it is typically faster to solve the linear SVM in the primal, whereas otherwise the dual formulation is faster. Some authors have referred to the transformation as Support Vector Elastic Net (SVEN), and provided MATLAB pseudo-code for it.
https://en.wikipedia.org/wiki/Elastic_net_regularization
In applied mathematics, antieigenvalue theory was developed by Karl Gustafson from 1966 to 1968. The theory is applicable to numerical analysis, wavelets, statistics, quantum mechanics, finance and optimization. The antieigenvectors x{\displaystyle x} are the vectors most turned by a matrix or operator A{\displaystyle A}, that is to say, those for which the angle between the original vector and its transformed image is greatest. The corresponding antieigenvalue μ{\displaystyle \mu } is the cosine of the maximal turning angle. The maximal turning angle is ϕ(A){\displaystyle \phi (A)} and is called the angle of the operator. Just as the eigenvalues may be ordered as a spectrum from smallest to largest, antieigenvalue theory orders the antieigenvalues of an operator A from the smallest to the largest turning angle.
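A minimal numerical illustration of these definitions (the matrix and the grid search are made up for this sketch): for a symmetric positive definite A with eigenvalues l1 and l2, scanning unit vectors x and minimizing cos(angle(x, Ax)) recovers Gustafson's expression 2·sqrt(l1·l2)/(l1+l2) for the first antieigenvalue.

```python
import numpy as np

A = np.diag([1.0, 4.0])                      # SPD operator with eigenvalues 1, 4

# Scan unit vectors in the plane and compute the cosine of the turning angle.
thetas = np.linspace(0.0, np.pi, 100001)
xs = np.stack([np.cos(thetas), np.sin(thetas)])   # columns are unit vectors
Ax = A @ xs
cosines = np.sum(xs * Ax, axis=0) / np.linalg.norm(Ax, axis=0)

mu = cosines.min()                           # first antieigenvalue (cos of max turn)
phi = np.degrees(np.arccos(mu))              # angle of the operator, in degrees
```

For this A the formula gives mu = 2*sqrt(1*4)/(1+4) = 0.8, so the angle of the operator is about 36.87 degrees; eigenvectors, by contrast, are not turned at all.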
https://en.wikipedia.org/wiki/Antieigenvalue_theory
In mathematics, an eigenoperator, A, of a matrix H is a linear operator that satisfies an eigenvalue-type equation with respect to H, where λ{\displaystyle \lambda } is a corresponding scalar called an eigenvalue.[1]
https://en.wikipedia.org/wiki/Eigenoperator
In mathematics, an eigenplane is a two-dimensional invariant subspace in a given vector space. By analogy with the term eigenvector for a vector which, when operated on by a linear operator, yields another vector which is a scalar multiple of itself, the term eigenplane can be used to describe a two-dimensional plane (a 2-plane) such that the operation of a linear operator on a vector in the 2-plane always yields another vector in the same 2-plane. A particular case that has been studied is that in which the linear operator is an isometry M of the hypersphere (written S3) represented within four-dimensional Euclidean space, where s and t are four-dimensional column vectors spanning the eigenplane and Λθ is a two-dimensional eigenrotation within the eigenplane. In the usual eigenvector problem, there is freedom to multiply an eigenvector by an arbitrary scalar; in this case there is freedom to multiply by an arbitrary non-zero rotation. This case is potentially physically interesting in the case that the shape of the universe is a multiply connected 3-manifold, since finding the angles of the eigenrotations of a candidate isometry for topological lensing is a way to falsify such hypotheses.
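The idea can be sketched with a 4-D rotation built from two 2-D rotation blocks (the angles and vectors below are illustrative, not from the article). The plane spanned by e1 and e2 is an eigenplane: the operator maps every vector in it back into the same plane, rotated by the first block's angle.

```python
import numpy as np

def rot2(theta):
    """2-D rotation matrix, the 'eigenrotation' acting inside one eigenplane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta1, theta2 = 0.7, 1.9
M = np.zeros((4, 4))
M[:2, :2] = rot2(theta1)      # eigenrotation in the plane span{e1, e2}
M[2:, 2:] = rot2(theta2)      # eigenrotation in the plane span{e3, e4}

v = np.array([3.0, -1.0, 0.0, 0.0])   # a vector lying in the first eigenplane
w = M @ v                              # stays in the same 2-plane
```

M is an isometry (orthogonal matrix), and w has no component outside the first eigenplane, which is exactly the invariance property described above.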
https://en.wikipedia.org/wiki/Eigenplane
EigenMoments[1] are a set of orthogonal, noise-robust moments, invariant to rotation, scaling and translation and sensitive to the signal distribution. They find application in signal processing and computer vision as descriptors of a signal or image; the descriptors can later be used for classification purposes. They are obtained by performing orthogonalization, via eigen analysis, on geometric moments.[2] EigenMoments are computed by performing eigen analysis on the moment space of an image, maximizing the signal-to-noise ratio in the feature space in the form of a Rayleigh quotient. This approach has several benefits in image processing applications. Assume that a signal vector s∈Rn{\displaystyle s\in {\mathcal {R}}^{n}} is taken from a certain distribution having correlation C∈Rn×n{\displaystyle C\in {\mathcal {R}}^{n\times n}}, i.e. C=E[ssT]{\displaystyle C=E[ss^{T}]}, where E[.] denotes expected value. The dimension n of the signal space is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space of lower dimensionality. This is performed by a two-step linear transformation: q=WTXTs,{\displaystyle q=W^{T}X^{T}s,} where q=[q1,...,qk]T∈Rk{\displaystyle q=[q_{1},...,q_{k}]^{T}\in {\mathcal {R}}^{k}} is the transformed signal, X=[x1,...,xn]T∈Rn×m{\displaystyle X=[x_{1},...,x_{n}]^{T}\in {\mathcal {R}}^{n\times m}} a fixed transformation matrix which transforms the signal into the moment space, and W=[w1,...,wk]∈Rm×k{\displaystyle W=[w_{1},...,w_{k}]\in {\mathcal {R}}^{m\times k}} the transformation matrix which we are going to determine by maximizing the SNR of the feature space occupied by q{\displaystyle q}. For the case of geometric moments, X would be the monomials. If m=k=n{\displaystyle m=k=n}, a full-rank transformation would result; however, usually we have m≤n{\displaystyle m\leq n} and k≤m{\displaystyle k\leq m}. This is especially the case when n{\displaystyle n} is of high dimension. 
We seek W{\displaystyle W} maximizing the SNR of the feature space: SNRtransform=wTXTCXwwTXTNXw,{\displaystyle SNR_{transform}={\frac {w^{T}X^{T}CXw}{w^{T}X^{T}NXw}},} where N is the correlation matrix of the noise signal. The problem can thus be formulated as w1,...,wk=argmaxwwTXTCXwwTXTNXw{\displaystyle {w_{1},...,w_{k}}={\underset {w}{\operatorname {arg\,max} }}{\frac {w^{T}X^{T}CXw}{w^{T}X^{T}NXw}}} subject to the constraints wiTXTNXwj=δij,{\displaystyle w_{i}^{T}X^{T}NXw_{j}=\delta _{ij},} where δij{\displaystyle \delta _{ij}} is the Kronecker delta. It can be observed that this maximization is a Rayleigh quotient: letting A=XTCX{\displaystyle A=X^{T}CX} and B=XTNX{\displaystyle B=X^{T}NX}, it can be written as w1,...,wk=argmaxwwTAwwTBw{\displaystyle {w_{1},...,w_{k}}={\underset {w}{\operatorname {arg\,max} }}{\frac {w^{T}Aw}{w^{T}Bw}}}, wiTBwj=δij.{\displaystyle w_{i}^{T}Bw_{j}=\delta _{ij}.} Optimization of the Rayleigh quotient[3][4] has the form maxwR(w)=maxwwTAwwTBw{\displaystyle \max _{w}R(w)=\max _{w}{\frac {w^{T}Aw}{w^{T}Bw}}} where A{\displaystyle A} and B{\displaystyle B} are both symmetric and B{\displaystyle B} is positive definite and therefore invertible. Scaling w{\displaystyle w} does not change the value of the objective function, and hence an additional scalar constraint wTBw=1{\displaystyle w^{T}Bw=1} can be imposed on w{\displaystyle w} with no solution lost when the objective function is optimized. This constrained optimization problem can be solved using a Lagrange multiplier: maxwwTAw{\displaystyle \max _{w}{w^{T}Aw}} subject to wTBw=1,{\displaystyle {w^{T}Bw}=1,} i.e. maxwL(w)=maxw(wTAw−λwTBw).{\displaystyle \max _{w}{\mathcal {L}}(w)=\max _{w}(w^{T}Aw-\lambda w^{T}Bw).} Setting the first derivative to zero, we obtain Aw=λBw,{\displaystyle Aw=\lambda Bw,} which is an instance of the generalized eigenvalue problem (GEP). 
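For symmetric A and symmetric positive definite B, the generalized eigenvalue problem A w = λ B w can be reduced to a standard symmetric eigenproblem using the Cholesky factor of B. This is one standard way to do it, sketched here with made-up test matrices:

```python
import numpy as np

def generalized_eigh(A, B):
    """Solve A w = lam B w for symmetric A and SPD B by Cholesky reduction."""
    L = np.linalg.cholesky(B)        # B = L L^T
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T            # standard symmetric eigenproblem
    lam, U = np.linalg.eigh(C)       # eigenvalues in ascending order
    W = Linv.T @ U                   # back-transform: columns satisfy A w = lam B w
    return lam, W

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.5], [0.5, 1.0]])
lam, W = generalized_eigh(A, B)
```

By construction the eigenvectors are B-orthonormal (WᵀBW = I), which is exactly the constraint wᵢᵀBwⱼ = δᵢⱼ imposed above, and the last column (largest λ) maximizes the Rayleigh quotient.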
The GEP has the form Aw=λBw{\displaystyle Aw=\lambda Bw} where, for any pair (w,λ){\displaystyle (w,\lambda )} that is a solution of the above equation, w{\displaystyle w} is called a generalized eigenvector and λ{\displaystyle \lambda } is called a generalized eigenvalue. Finding w{\displaystyle w} and λ{\displaystyle \lambda } that satisfy this equation produces the result which optimizes the Rayleigh quotient. One way of maximizing the Rayleigh quotient is therefore to solve the generalized eigenproblem. Dimension reduction can then be performed by simply choosing the first components wi{\displaystyle w_{i}}, i=1,...,k{\displaystyle i=1,...,k}, with the highest values for R(w){\displaystyle R(w)} out of the m{\displaystyle m} components, and discarding the rest. The interpretation of this transformation is rotating and scaling the moment space, transforming it into a feature space with maximized SNR; the first k{\displaystyle k} components are therefore the components with the k{\displaystyle k} highest SNR values. The other way to look at this solution is to use the concept of simultaneous diagonalization instead of the generalized eigenproblem. The transformation is built in two stages, W=W1W2.{\displaystyle W=W_{1}W_{2}.} First, find an orthogonal P{\displaystyle P} such that PTBP=DB{\displaystyle P^{T}BP=D_{B}}, where DB{\displaystyle D_{B}} is a diagonal matrix sorted in increasing order. Since B{\displaystyle B} is positive definite, DB>0{\displaystyle D_{B}>0}. We can discard those eigenvalues that are large and retain those close to 0, since the energy of the noise is close to 0 in the retained directions; at this stage it is also possible to discard those eigenvectors that have large eigenvalues. Let P^{\displaystyle {\hat {P}}} be the first k{\displaystyle k} columns of P{\displaystyle P}; then PT^BP^=DB^{\displaystyle {\hat {P^{T}}}B{\hat {P}}={\hat {D_{B}}}} where DB^{\displaystyle {\hat {D_{B}}}} is the k×k{\displaystyle k\times k} principal submatrix of DB{\displaystyle D_{B}}. 
Define W1=P^DB^−1/2{\displaystyle W_{1}={\hat {P}}{\hat {D_{B}}}^{-1/2}} and hence W1TBW1=(P^DB^−1/2)TB(P^DB^−1/2)=I.{\displaystyle W_{1}^{T}BW_{1}=({\hat {P}}{\hat {D_{B}}}^{-1/2})^{T}B({\hat {P}}{\hat {D_{B}}}^{-1/2})=I.} W1{\displaystyle W_{1}} whitens B{\displaystyle B} and reduces the dimensionality from m{\displaystyle m} to k{\displaystyle k}. The transformed space occupied by q′=W1TXTs{\displaystyle q'=W_{1}^{T}X^{T}s} is called the noise space. Second, find W2{\displaystyle W_{2}} such that W2TW1TAW1W2=DA{\displaystyle W_{2}^{T}W_{1}^{T}AW_{1}W_{2}=D_{A}}, where W2TW2=I{\displaystyle W_{2}^{T}W_{2}=I} and DA{\displaystyle D_{A}} is the matrix with the eigenvalues of W1TAW1{\displaystyle W_{1}^{T}AW_{1}} on its diagonal. We may retain all the eigenvalues and their corresponding eigenvectors, since most of the noise has already been discarded in the previous step. Finally, W=W1W2{\displaystyle W=W_{1}W_{2}} diagonalizes both the numerator and the denominator of the SNR: WTAW=DA{\displaystyle W^{T}AW=D_{A}}, WTBW=I{\displaystyle W^{T}BW=I}, and the transformation of a signal s{\displaystyle s} is defined as q=WTXTs=W2TW1TXTs.{\displaystyle q=W^{T}X^{T}s=W_{2}^{T}W_{1}^{T}X^{T}s.} To find the information lost when we discard some of the eigenvalues and eigenvectors, we can perform the following analysis: η=1−trace(W1TAW1)trace(DB−1/2PTAPDB−1/2)=1−trace(DB^−1/2P^TAP^DB^−1/2)trace(DB−1/2PTAPDB−1/2){\displaystyle {\begin{array}{lll}\eta &=&1-{\frac {trace(W_{1}^{T}AW_{1})}{trace(D_{B}^{-1/2}P^{T}APD_{B}^{-1/2})}}\\&=&1-{\frac {trace({\hat {D_{B}}}^{-1/2}{\hat {P}}^{T}A{\hat {P}}{\hat {D_{B}}}^{-1/2})}{trace(D_{B}^{-1/2}P^{T}APD_{B}^{-1/2})}}\end{array}}} EigenMoments are derived by applying the above framework to geometric moments. They can be derived for both 1D and 2D signals. If we let X=[1,x,x2,...,xm−1]{\displaystyle X=[1,x,x^{2},...,x^{m-1}]}, i.e. the monomials, then after the transformation XT{\displaystyle X^{T}} we obtain the geometric moments, denoted by the vector M{\displaystyle M}, of the signal s=[s(x)]{\displaystyle s=[s(x)]}, i.e. M=XTs.{\displaystyle M=X^{T}s.} 
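The two-stage construction W = W1W2 described above can be sketched numerically (the matrices A and B below are made-up symmetric stand-ins, not moment matrices): W1 whitens B while reducing the dimension, and W2 then diagonalizes the whitened numerator.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 6))
A = X @ X.T + np.eye(6)            # stand-in symmetric "signal" matrix
Y = rng.normal(size=(6, 6))
B = Y @ Y.T + np.eye(6)            # stand-in SPD "noise" matrix
k = 4                              # number of retained dimensions

# Stage 1: diagonalize B and whiten with the k smallest-eigenvalue directions.
d_B, P = np.linalg.eigh(B)         # B = P diag(d_B) P^T, ascending order
P_hat, d_hat = P[:, :k], d_B[:k]
W1 = P_hat / np.sqrt(d_hat)        # W1 = P_hat * D_hat^{-1/2}, so W1^T B W1 = I

# Stage 2: orthogonally diagonalize the whitened numerator.
d_A, W2 = np.linalg.eigh(W1.T @ A @ W1)
W = W1 @ W2                        # simultaneously diagonalizes A and B
```

The resulting W satisfies WᵀBW = I and WᵀAW = D_A, exactly the pair of conditions stated above.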
In practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples, so parametric approaches are utilized. One such model can be defined as r(x1,x2)=r(0,0)e−c(x1−x2)2,{\displaystyle r(x_{1},x_{2})=r(0,0)e^{-c(x_{1}-x_{2})^{2}},} where r(0,0)=E[tr(ssT)]{\displaystyle r(0,0)=E[tr(ss^{T})]}. This model of correlation can be replaced by other models; however, it covers general natural images. Since r(0,0){\displaystyle r(0,0)} does not affect the maximization, it can be dropped: A=XTCX=∫−11∫−11[x1jx2ie−c(x1−x2)2]i,j=0i,j=m−1dx1dx2{\displaystyle A=X^{T}CX=\int _{-1}^{1}\int _{-1}^{1}[x_{1}^{j}x_{2}^{i}e^{-c(x_{1}-x_{2})^{2}}]_{i,j=0}^{i,j=m-1}dx_{1}dx_{2}} The correlation of the noise can be modelled as σn2δ(x1,x2){\displaystyle \sigma _{n}^{2}\delta (x_{1},x_{2})}, where σn2{\displaystyle \sigma _{n}^{2}} is the energy of the noise. Again, σn2{\displaystyle \sigma _{n}^{2}} can be dropped, because the constant has no effect on the maximization problem: B=XTNX=∫−11∫−11[x1jx2iδ(x1,x2)]i,j=0i,j=m−1dx1dx2=∫−11[x1j+i]i,j=0i,j=m−1dx1=XTX{\displaystyle B=X^{T}NX=\int _{-1}^{1}\int _{-1}^{1}[x_{1}^{j}x_{2}^{i}\delta (x_{1},x_{2})]_{i,j=0}^{i,j=m-1}dx_{1}dx_{2}=\int _{-1}^{1}[x_{1}^{j+i}]_{i,j=0}^{i,j=m-1}dx_{1}=X^{T}X} Using the computed A and B and applying the algorithm discussed in the previous section, we find W{\displaystyle W} and a set of transformed monomials Φ=[ϕ1,...,ϕk]=XW{\displaystyle \Phi =[\phi _{1},...,\phi _{k}]=XW} which produces the moment kernels of EM. The moment kernels of EM decorrelate the correlation in the image. 
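The matrices A and B above can be approximated numerically; this sketch uses a simple Riemann-sum discretization of the double integral (the grid size is an arbitrary choice, and B is also available in closed form since the integral of x^(i+j) over [-1, 1] is 2/(i+j+1) for even i+j and 0 otherwise):

```python
import numpy as np

c, m = 0.5, 6
t = np.linspace(-1.0, 1.0, 2001)
V = np.vander(t, m, increasing=True)            # columns 1, x, ..., x^(m-1)
Ccorr = np.exp(-c * (t[:, None] - t[None, :]) ** 2)
dt = t[1] - t[0]
A = V.T @ Ccorr @ V * dt * dt                   # Riemann sum for X^T C X

# B = X^T X in closed form: B_ij = int_{-1}^{1} x^{i+j} dx.
i = np.arange(m)
B = (1.0 + (-1.0) ** (i[:, None] + i[None, :])) / (i[:, None] + i[None, :] + 1)
```

A and B are symmetric, and B has the checkerboard sparsity of the monomial Gram matrix (odd-total-degree entries vanish); both can then be fed to the generalized-eigenproblem machinery of the previous section.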
That is, ΦTCΦ=(XW)TC(XW)=DC{\displaystyle \Phi ^{T}C\Phi =(XW)^{T}C(XW)=D_{C}}, and the kernels are orthogonal: ΦTΦ=(XW)T(XW)=WTXTXW=WTXTNXW=WTBW=I{\displaystyle {\begin{array}{lll}\Phi ^{T}\Phi &=&(XW)^{T}(XW)\\&=&W^{T}X^{T}XW\\&=&W^{T}X^{T}NXW\\&=&W^{T}BW\\&=&I\\\end{array}}} Taking c=0.5{\displaystyle c=0.5}, the dimension of the moment space as m=6{\displaystyle m=6} and the dimension of the feature space as k=4{\displaystyle k=4}, we have W=(0.00−0.7745−0.89602.8669−4.46220.00.00.00.07.92722.4523−4.022520.65050.00.00.00.0−9.2789−0.1239−0.5092−18.45820.00.0){\displaystyle W=\left({\begin{array}{cccc}0.0&0&-0.7745&-0.8960\\2.8669&-4.4622&0.0&0.0\\0.0&0.0&7.9272&2.4523\\-4.0225&20.6505&0.0&0.0\\0.0&0.0&-9.2789&-0.1239\\-0.5092&-18.4582&0.0&0.0\end{array}}\right)} and ϕ1=2.8669x−4.0225x3−0.5092x5ϕ2=−4.4622x+20.6505x3−18.4582x5ϕ3=−0.7745+7.9272x2−9.2789x4ϕ4=−0.8960+2.4523x2−0.1239x4{\displaystyle {\begin{array}{lll}\phi _{1}&=&2.8669x-4.0225x^{3}-0.5092x^{5}\\\phi _{2}&=&-4.4622x+20.6505x^{3}-18.4582x^{5}\\\phi _{3}&=&-0.7745+7.9272x^{2}-9.2789x^{4}\\\phi _{4}&=&-0.8960+2.4523x^{2}-0.1239x^{4}\\\end{array}}} The derivation for 2D signals is the same as for 1D signals, except that conventional geometric moments are directly employed to obtain the set of 2D EigenMoments. The definition of the geometric moment of order (p+q){\displaystyle (p+q)} for a 2D image signal is mpq=∫−11∫−11xpyqf(x,y)dxdy,{\displaystyle m_{pq}=\int _{-1}^{1}\int _{-1}^{1}x^{p}y^{q}f(x,y)dxdy,} which can be denoted as M={mj,i}i,j=0i,j=m−1{\displaystyle M=\{m_{j,i}\}_{i,j=0}^{i,j=m-1}}. Then the set of 2D EigenMoments is Ω=WTMW{\displaystyle \Omega =W^{T}MW}, where Ω={Ωj,i}i,j=0i,j=k−1{\displaystyle \Omega =\{\Omega _{j,i}\}_{i,j=0}^{i,j=k-1}} is a matrix that contains the set of EigenMoments: Ωj,i=Σr=0m−1Σs=0m−1wr,jws,imr,s.{\displaystyle \Omega _{j,i}=\Sigma _{r=0}^{m-1}\Sigma _{s=0}^{m-1}w_{r,j}w_{s,i}m_{r,s}.} 
In order to obtain a set of moment invariants we can use normalized geometric moments M^{\displaystyle {\hat {M}}} instead of M{\displaystyle M}. Normalized geometric moments are invariant to rotation, scaling and translation and are defined by m^pq=αp+q+2∫−11∫−11[(x−xc)cos⁡(θ)+(y−yc)sin⁡(θ)]p×[−(x−xc)sin⁡(θ)+(y−yc)cos⁡(θ)]qf(x,y)dxdy,{\displaystyle {\hat {m}}_{pq}=\alpha ^{p+q+2}\int _{-1}^{1}\int _{-1}^{1}\left[(x-x^{c})\cos(\theta )+(y-y^{c})\sin(\theta )\right]^{p}\times \left[-(x-x^{c})\sin(\theta )+(y-y^{c})\cos(\theta )\right]^{q}f(x,y)\,dx\,dy,} where (xc,yc)=(m10/m00,m01/m00){\displaystyle (x^{c},y^{c})=(m_{10}/m_{00},m_{01}/m_{00})} is the centroid of the image f(x,y){\displaystyle f(x,y)} and α=[m00S/m00]1/2,θ=12tan−1⁡2m11m20−m02.{\displaystyle {\begin{array}{lll}\alpha &=&[m_{00}^{S}/m_{00}]^{1/2}\\\theta &=&{\frac {1}{2}}\tan ^{-1}{\frac {2m_{11}}{m_{20}-m_{02}}}\end{array}}} Here m00S{\displaystyle m_{00}^{S}} is a scaling factor depending on the image; it is usually set to 1 for binary images.
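The ingredients of the normalization (centroid and orientation angle) can be computed for a discrete image; this toy example uses a Gaussian blob as the image, which is an arbitrary choice for illustration (and uses arctan2 for the half-angle formula above):

```python
import numpy as np

n = 101
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x, indexing="ij")
f = np.exp(-8.0 * ((xx - 0.2) ** 2 + (yy + 0.1) ** 2))  # blob centred at (0.2, -0.1)
dA = (x[1] - x[0]) ** 2

def m(p, q):
    """Discrete geometric moment m_pq of the image f on [-1, 1]^2."""
    return np.sum(xx ** p * yy ** q * f) * dA

xc, yc = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)            # centroid
theta = 0.5 * np.arctan2(2.0 * m(1, 1), m(2, 0) - m(0, 2))  # orientation angle
```

The recovered centroid matches the blob's true centre, which is what makes the (x - x_c, y - y_c) shift in the normalized moments translation-invariant.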
https://en.wikipedia.org/wiki/Eigenmoments
In quantum physics, a quantum state is a mathematical entity that embodies the knowledge of a quantum system. Quantum mechanics specifies the construction, evolution, and measurement of a quantum state. The result is a prediction for the system represented by the state. Knowledge of the quantum state, and of the rules for the system's evolution in time, exhausts all that can be known about a quantum system. Quantum states may be defined differently for different kinds of systems or problems. Two broad categories are wave functions describing quantum systems using position or momentum variables, and the more abstract vector quantum states. Historical, educational, and application-focused problems typically feature wave functions; modern professional physics uses the abstract vector states. In both categories, quantum states divide into pure versus mixed states, or into coherent states and incoherent states. Categories with special properties include stationary states for time independence and quantum vacuum states in quantum field theory. As a tool for physics, quantum states grew out of states in classical mechanics. A classical dynamical state consists of a set of dynamical variables with well-defined real values at each instant of time.[1]: 3 For example, the state of a cannon ball would consist of its position and velocity. The state values evolve under equations of motion and thus remain strictly determined. If we know the position of a cannon and the exit velocity of its projectiles, then we can use equations containing the force of gravity to predict the trajectory of a cannon ball precisely. Similarly, quantum states consist of sets of dynamical variables that evolve under equations of motion. However, the values derived from quantum states are complex numbers, quantized, limited by uncertainty relations,[1]: 159 and only provide a probability distribution for the outcomes of a system. These constraints alter the nature of quantum dynamic variables. 
For example, the quantum state of an electron in a double-slit experiment would consist of complex values over the detection region and, when squared, only predict the probability distribution of electron counts across the detector. The process of describing a quantum system with quantum mechanics begins with identifying a set of variables defining the quantum state of the system.[1]: 204 The set will contain compatible and incompatible variables. Simultaneous measurement of a complete set of compatible variables prepares the system in a unique state. The state then evolves deterministically according to the equations of motion. Subsequent measurement of the state produces a sample from a probability distribution predicted by the quantum mechanical operator corresponding to the measurement. The fundamentally statistical or probabilistic nature of quantum measurements changes the role of quantum states in quantum mechanics compared to classical states in classical mechanics. In classical mechanics, the initial state of one or more bodies is measured; the state evolves according to the equations of motion; measurements of the final state are compared to predictions. In quantum mechanics, ensembles of identically prepared quantum states evolve according to the equations of motion, and many repeated measurements are compared to predicted probability distributions.[1]: 204 Measurements, macroscopic operations on quantum states, filter the state.[1]: 196 Whatever the input quantum state might be, repeated identical measurements give consistent values. For this reason, measurements 'prepare' quantum states for experiments, placing the system in a partially defined state. Subsequent measurements may either further prepare the system (these are compatible measurements) or they may alter the state, redefining it (these are called incompatible or complementary measurements). 
For example, we may measure the momentum of a state along the x{\displaystyle x} axis any number of times and get the same result, but if we measure the position after once measuring the momentum, subsequent measurements of momentum are changed. The quantum state appears unavoidably altered by incompatible measurements. This is known as the uncertainty principle. The quantum state after a measurement is in an eigenstate corresponding to that measurement and the value measured.[1]: 202 Other aspects of the state may be unknown. Repeating the measurement will not alter the state. In some cases, compatible measurements can further refine the state, causing it to be an eigenstate corresponding to all these measurements.[2] A full set of compatible measurements produces a pure state. Any state that is not pure is called a mixed state, as discussed in more depth below.[1]: 204[3]: 73 The eigenstate solutions to the Schrödinger equation can be formed into pure states. Experiments rarely produce pure states. Therefore, statistical mixtures of solutions must be compared with experiments.[1]: 204 The same physical quantum state can be expressed mathematically in different ways called representations.[1] The position wave function is one representation often seen first in introductions to quantum mechanics. The equivalent momentum wave function is another wave-function-based representation. Representations are analogous to coordinate systems[1]: 244 or similar mathematical devices like parametric equations. Selecting a representation will make some aspects of a problem easier at the cost of making other things difficult. In formal quantum mechanics (see § Formalism in quantum physics below) the theory develops in terms of an abstract 'vector space', avoiding any particular representation. 
This allows many elegant concepts of quantum mechanics to be expressed and to be applied even in cases where no classical analog exists.[1]: 244 Wave functions represent quantum states, particularly when they are functions of position or of momentum. Historically, definitions of quantum states used wavefunctions before the more formal methods were developed.[4]: 268 The wave function is a complex-valued function of any complete set of commuting or compatible degrees of freedom. For example, one set could be the x,y,z{\displaystyle x,y,z} spatial coordinates of an electron. Preparing a system by measuring the complete set of compatible observables produces a pure quantum state. More commonly, incomplete preparation produces a mixed quantum state. Wave function solutions of Schrödinger's equations of motion for operators corresponding to measurements can readily be expressed as pure states; they must be combined with statistical weights matching experimental preparation to compute the expected probability distribution.[1]: 205 Numerical or analytic solutions in quantum mechanics can be expressed as pure states. These solution states, called eigenstates, are labeled with quantized values, typically quantum numbers. For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant pure states are identified by the principal quantum number n, the angular momentum quantum number ℓ, the magnetic quantum number m, and the spin z-component sz. For another example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. A pure state here is represented by a two-dimensional complex vector (α,β){\displaystyle (\alpha ,\beta )}, with a length of one; that is, with |α|2+|β|2=1,{\displaystyle |\alpha |^{2}+|\beta |^{2}=1,} where |α|{\displaystyle |\alpha |} and |β|{\displaystyle |\beta |} are the absolute values of α{\displaystyle \alpha } and β{\displaystyle \beta }. 
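The two-component spin state just described can be written down concretely; the particular amplitudes below are made up for illustration, and the Born rule gives the up/down probabilities as the squared magnitudes of the normalized components:

```python
import numpy as np

# A spin-1/2 pure state (alpha, beta), normalized so |alpha|^2 + |beta|^2 = 1.
psi = np.array([1.0 + 1.0j, 2.0 - 1.0j])   # arbitrary unnormalized amplitudes
psi = psi / np.linalg.norm(psi)            # enforce unit length

# Born rule: probabilities of measuring spin up / spin down along z.
p_up, p_down = np.abs(psi) ** 2
```

For these amplitudes, |1+i|^2 = 2 and |2-i|^2 = 5, so the probabilities come out to 2/7 and 5/7 and necessarily sum to one.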
The postulates of quantum mechanics state that pure states, at a given time t, correspond to vectors in a separable complex Hilbert space, while each measurable physical quantity (such as the energy or momentum of a particle) is associated with a mathematical operator called the observable. The operator serves as a linear function that acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable. For example, it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s. The corresponding eigenvector (which physicists call an eigenstate) with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s. On the other hand, a pure state described as a superposition of multiple different eigenstates does in general have quantum uncertainty for the given observable. Using bra–ket notation, this linear combination of eigenstates can be represented as:[5]: 22, 171, 172 |Ψ(t)⟩=∑nCn(t)|Φn⟩.{\displaystyle |\Psi (t)\rangle =\sum _{n}C_{n}(t)|\Phi _{n}\rangle .} The coefficient that corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum state changes in time is governed by the time evolution operator. A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. A mixture of quantum states is again a quantum state. 
A mixed state for electron spins, in the density-matrix formulation, has the structure of a 2×2{\displaystyle 2\times 2} matrix that is Hermitian and positive semi-definite, and has trace 1.[6] A more complicated case is given (in bra–ket notation) by the singlet state, which exemplifies quantum entanglement: |ψ⟩=12(|↑↓⟩−|↓↑⟩),{\displaystyle \left|\psi \right\rangle ={\frac {1}{\sqrt {2}}}{\bigl (}\left|\uparrow \downarrow \right\rangle -\left|\downarrow \uparrow \right\rangle {\bigr )},} which involves superposition of joint spin states for two particles with spin 1/2. The singlet state satisfies the property that if the particles' spins are measured along the same direction then either the spin of the first particle is observed up and the spin of the second particle is observed down, or the first one is observed down and the second one is observed up, both possibilities occurring with equal probability. A pure quantum state can be represented by a ray in a projective Hilbert space over the complex numbers, while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces.[7][3] The Schrödinger–HJW theorem classifies the multitude of ways to write a given mixed state as a convex combination of pure states.[8] Before a particular measurement is performed on a quantum system, the theory gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the linear operators describing the measurement. Probability distributions for different measurements exhibit tradeoffs exemplified by the uncertainty principle: a state that implies a narrow spread of possible outcomes for one experiment necessarily implies a wide spread of possible outcomes for another. Statistical mixtures of states are a different type of linear combination. A statistical mixture of states is a statistical ensemble of independent systems. 
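The singlet's perfect anti-correlation can be checked numerically. In the standard computational basis {|uu⟩, |ud⟩, |du⟩, |dd⟩} (an ordering chosen here for the sketch), the expectation of measuring both spins along z is exactly -1, and the pure state's density matrix is Hermitian, has trace 1, and is a projector:

```python
import numpy as np

# Singlet state |psi> = (|ud> - |du>) / sqrt(2) in basis |uu>, |ud>, |du>, |dd>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

sz = np.diag([1.0, -1.0])                  # Pauli z for a single spin
corr = psi @ np.kron(sz, sz) @ psi         # <psi| sz (x) sz |psi>

rho = np.outer(psi, psi.conj())            # density matrix of the pure state
```

corr = -1 says the two z-measurements are always opposite, matching the verbal description above; rho @ rho = rho is the defining property of a pure-state density matrix.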
Statistical mixtures represent a degree of knowledge, whilst the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states Φn{\displaystyle \Phi _{n}}. A number Pn{\displaystyle P_{n}} represents the probability of a randomly selected system being in the state Φn{\displaystyle \Phi _{n}}. Unlike the linear combination case, each system is in a definite eigenstate.[9][10] The expectation value ⟨A⟩σ{\displaystyle {\langle A\rangle }_{\sigma }} of an observable A is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories. There is no state that is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) are known exactly; at least one of them will have a range of possible values.[a] This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state.[11][12][13]: 4 More precisely: after measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: if we measure A twice in the same run of the experiment, the measurements being directly consecutive in time,[b] then they will produce the same results. This has some strange consequences, however, as follows. Consider two incompatible observables, A and B, where A corresponds to a measurement earlier in time than B.[c] Suppose that the system is in an eigenstate of B at the experiment's beginning. If we measure only B, all runs of the experiment will yield the same result. 
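The expectation value in a statistical mixture can be computed as a probability-weighted mean, or equivalently as trace(ρA) with ρ the mixture's density matrix. The states and weights below are made up for the sketch:

```python
import numpy as np

up = np.array([1.0, 0.0])            # definite eigenstates of sz
down = np.array([0.0, 1.0])
P = [0.25, 0.75]                     # classical probabilities, summing to 1
states = [up, down]

sz = np.diag([1.0, -1.0])            # observable: Pauli z

# Density matrix of the mixture: rho = sum_n P_n |Phi_n><Phi_n|.
rho = sum(p * np.outer(s, s) for p, s in zip(P, states))

# <A> = trace(rho A) = sum_n P_n <Phi_n|A|Phi_n> = 0.25*(+1) + 0.75*(-1) = -0.5
expval = np.trace(rho @ sz)
```

Note trace(ρ²) < 1 here, the standard signature that ρ describes a mixed rather than pure state.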
If we measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus: quantum mechanical measurements influence one another, and the order in which they are performed is important. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see Quantum entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models. One can take the observables to be dependent on time, while the state σ was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables P(t), Q(t).) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state |Ψ(t)⟩=∑nCn(t)|Φn⟩{\textstyle |\Psi (t)\rangle =\sum _{n}C_{n}(t)|\Phi _{n}\rangle }.) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. 
Compare withDirac picture.[14]:65 Quantum physics is most commonly formulated in terms oflinear algebra, as follows. Any given system is identified with some finite- or infinite-dimensionalHilbert space. The pure states correspond to vectors ofnorm1. Thus the set of all pure states corresponds to theunit spherein the Hilbert space, because the unit sphere is defined as the set of all vectors with norm 1. Multiplying a pure state by ascalaris physically inconsequential (as long as the state is considered by itself). If a vector in a complex Hilbert spaceH{\displaystyle H}can be obtained from another vector by multiplying by some non-zero complex number, the two vectors inH{\displaystyle H}are said to correspond to the samerayin theprojective Hilbert spaceP(H){\displaystyle \mathbf {P} (H)}ofH{\displaystyle H}. Note that although the wordrayis used, properly speaking, a point in the projective Hilbert space corresponds to alinepassing through the origin of the Hilbert space, rather than ahalf-line, orrayin thegeometrical sense. Theangular momentumhas the same dimension (M·L2·T−1) as thePlanck constantand, at quantum scale, behaves as adiscretedegree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described withspinors. In non-relativistic quantum mechanics thegroup representationsof theLie groupSU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative numberSthat, in units of thereduced Planck constantħ, is either aninteger(0, 1, 2, ...) or ahalf-integer(1/2, 3/2, 5/2, ...). 
For amassiveparticle with spinS, itsspin quantum numbermalways assumes one of the2S+ 1possible values in the set{−S,−S+1,…,S−1,S}{\displaystyle \{-S,-S+1,\ldots ,S-1,S\}} As a consequence, the quantum state of a particle with spin is described by avector-valued wave function with values inC2S+1. Equivalently, it is represented by acomplex-valued functionof four variables: one discretequantum numbervariable (for the spin) is added to the usual three continuous variables (for the position in space). The quantum state of a system ofNparticles, each potentially with spin, is described by a complex-valued function with four variables per particle, corresponding to 3spatial coordinatesandspin, e.g.|ψ(r1,m1;…;rN,mN)⟩.{\displaystyle |\psi (\mathbf {r} _{1},\,m_{1};\;\dots ;\;\mathbf {r} _{N},\,m_{N})\rangle .} Here, the spin variablesmνassume values from the set{−Sν,−Sν+1,…,Sν−1,Sν}{\displaystyle \{-S_{\nu },\,-S_{\nu }+1,\,\ldots ,\,S_{\nu }-1,\,S_{\nu }\}}whereSν{\displaystyle S_{\nu }}is the spin ofνth particle.Sν=0{\displaystyle S_{\nu }=0}for a particle that does not exhibit spin. The treatment ofidentical particlesis very different forbosons(particles with integer spin) versusfermions(particles with half-integer spin). The aboveN-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not allNparticles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical variables, according to its statistics (bosonic or fermionic). Electrons are fermions withS= 1/2,photons(quanta of light) are bosons withS= 1(although in thevacuumthey aremasslessand can't be described with Schrödinger mechanics). When symmetrization or anti-symmetrization is unnecessary,N-particle spaces of states can be obtained simply bytensor productsof one-particle spaces, to which we will return later. 
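The (anti)symmetrization requirement can be sketched for two particles in one dimension. The two one-particle orbitals below are hypothetical Gaussians chosen only for illustration, not taken from the text:

```python
import numpy as np

# Sketch: (anti)symmetrizing a two-particle wave function built from two
# assumed one-particle orbitals phi_a and phi_b (illustrative Gaussians).
def phi_a(x):
    return np.exp(-x**2)

def phi_b(x):
    return x * np.exp(-x**2)

def psi_fermion(x1, x2):
    # antisymmetric (Slater-determinant) combination, up to normalization
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

def psi_boson(x1, x2):
    # symmetric combination, up to normalization
    return phi_a(x1) * phi_b(x2) + phi_b(x1) * phi_a(x2)

x1, x2 = 0.3, -1.2
assert np.isclose(psi_fermion(x1, x2), -psi_fermion(x2, x1))  # sign flip under exchange
assert np.isclose(psi_boson(x1, x2), psi_boson(x2, x1))       # invariant under exchange
assert np.isclose(psi_fermion(x1, x1), 0.0)                   # vanishes at coincident points
```

The last assertion is the Pauli exclusion principle in miniature: the antisymmetrized amplitude for two identical fermions in the same configuration is zero.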
A state |ψ⟩{\displaystyle |\psi \rangle } belonging to a separable complex Hilbert space H{\displaystyle H} can always be expressed uniquely as a linear combination of elements of an orthonormal basis of H{\displaystyle H}. Using bra–ket notation, this means any state |ψ⟩{\displaystyle |\psi \rangle } can be written as |ψ⟩=∑ici|ki⟩=∑i|ki⟩⟨ki|ψ⟩,{\displaystyle {\begin{aligned}|\psi \rangle &=\sum _{i}c_{i}|{k_{i}}\rangle ,\\&=\sum _{i}|{k_{i}}\rangle \langle k_{i}|\psi \rangle ,\end{aligned}}} with complex coefficients ci=⟨ki|ψ⟩{\displaystyle c_{i}=\langle {k_{i}}|\psi \rangle } and basis elements |ki⟩{\displaystyle |k_{i}\rangle }. In this case, the normalization condition translates to ⟨ψ|ψ⟩=∑i⟨ψ|ki⟩⟨ki|ψ⟩=∑i|ci|2=1.{\displaystyle \langle \psi |\psi \rangle =\sum _{i}\langle \psi |{k_{i}}\rangle \langle k_{i}|\psi \rangle =\sum _{i}\left|c_{i}\right|^{2}=1.} In physical terms, |ψ⟩{\displaystyle |\psi \rangle } has been expressed as a quantum superposition of the "basis states" |ki⟩{\displaystyle |{k_{i}}\rangle }, i.e., the eigenstates of an observable. In particular, if said observable is measured on the normalized state |ψ⟩{\displaystyle |\psi \rangle }, then |ci|2=|⟨ki|ψ⟩|2{\displaystyle |c_{i}|^{2}=|\langle {k_{i}}|\psi \rangle |^{2}} is the probability that the result of the measurement is ki{\displaystyle k_{i}}.[5]: 22 In general, the expression for the probability always consists of a relation between the quantum state and a portion of the spectrum of the dynamical variable (i.e. random variable) being observed.[15]: 98[16]: 53 For example, the situation above describes the discrete case, as the eigenvalues ki{\displaystyle k_{i}} belong to the point spectrum. Likewise, in the discrete case the wave function of an energy eigenstate is an eigenfunction of the Hamiltonian operator, with corresponding eigenvalue E{\displaystyle E}, the energy of the system. An example of the continuous case is given by the position operator.
The probability measure for a system in state ψ{\displaystyle \psi } is given by:[17] Pr(x∈B|ψ)=∫B⊂R|ψ(x)|2dx,{\displaystyle \mathrm {Pr} (x\in B|\psi )=\int _{B\subset \mathbb {R} }|\psi (x)|^{2}dx,} where |ψ(x)|2{\displaystyle |\psi (x)|^{2}} is the probability density function for finding a particle at a given position. These examples emphasize the distinction in characteristics between the state and the observable: whereas ψ{\displaystyle \psi } is a pure state belonging to H{\displaystyle H}, the (generalized) eigenvectors of the position operator do not belong to H{\displaystyle H}.[18] Though closely related, pure states are not the same as bound states belonging to the pure point spectrum of an observable with no quantum uncertainty. A particle is said to be in a bound state if it remains localized in a bounded region of space for all times. A pure state |ϕ⟩{\displaystyle |\phi \rangle } is called a bound state if and only if for every ε>0{\displaystyle \varepsilon >0} there is a compact set K⊂R3{\displaystyle K\subset \mathbb {R} ^{3}} such that ∫K|ϕ(r,t)|2d3r≥1−ε{\displaystyle \int _{K}|\phi (\mathbf {r} ,t)|^{2}\,\mathrm {d} ^{3}\mathbf {r} \geq 1-\varepsilon } for all t∈R{\displaystyle t\in \mathbb {R} }.[19] The integral represents the probability that a particle is found in the bounded region K{\displaystyle K} at any time t{\displaystyle t}. If this probability remains arbitrarily close to 1{\displaystyle 1}, then the particle is said to remain in K{\displaystyle K}. For example, non-normalizable solutions of the free Schrödinger equation can be expressed as functions that are normalizable, using wave packets. These wave packets belong to the pure point spectrum of a corresponding projection operator which, mathematically speaking, constitutes an observable.[16]: 48 However, they are not bound states. As mentioned above, quantum states may be superposed.
If|α⟩{\displaystyle |\alpha \rangle }and|β⟩{\displaystyle |\beta \rangle }are two kets corresponding to quantum states, the ketcα|α⟩+cβ|β⟩{\displaystyle c_{\alpha }|\alpha \rangle +c_{\beta }|\beta \rangle }is also a quantum state of the same system. Bothcα{\displaystyle c_{\alpha }}andcβ{\displaystyle c_{\beta }}can be complex numbers; their relative amplitude and relative phase will influence the resulting quantum state. Writing the superposed state usingcα=Aαeiθαcβ=Aβeiθβ{\displaystyle c_{\alpha }=A_{\alpha }e^{i\theta _{\alpha }}\ \ c_{\beta }=A_{\beta }e^{i\theta _{\beta }}}and defining the norm of the state as:|cα|2+|cβ|2=Aα2+Aβ2=1{\displaystyle |c_{\alpha }|^{2}+|c_{\beta }|^{2}=A_{\alpha }^{2}+A_{\beta }^{2}=1}and extracting the common factors gives:eiθα(Aα|α⟩+1−Aα2eiθβ−iθα|β⟩){\displaystyle e^{i\theta _{\alpha }}\left(A_{\alpha }|\alpha \rangle +{\sqrt {1-A_{\alpha }^{2}}}e^{i\theta _{\beta }-i\theta _{\alpha }}|\beta \rangle \right)}The overall phase factor in front has no physical effect.[20]: 108Only the relative phase affects the physical nature of the superposition. One example of superposition is thedouble-slit experiment, in which superposition leads toquantum interference. Another example of the importance of relative phase isRabi oscillations, where the relative phase of two states varies in time due to theSchrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. Apure quantum stateis a state which can be described by a single ket vector, as described above. 
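The role of the overall phase versus the relative phase can be sketched numerically. The basis kets and phase values below are illustrative assumptions:

```python
import numpy as np

# Sketch: a global phase leaves all measurement probabilities unchanged,
# while a relative phase changes the superposition (interference).
alpha = np.array([1.0, 0.0], dtype=complex)   # |alpha>
beta = np.array([0.0, 1.0], dtype=complex)    # |beta>

def probs(state):
    """Born-rule probabilities in the basis (|alpha>+|beta>)/sqrt2, (|alpha>-|beta>)/sqrt2."""
    plus = (alpha + beta) / np.sqrt(2)
    minus = (alpha - beta) / np.sqrt(2)
    return np.array([abs(np.vdot(plus, state))**2,
                     abs(np.vdot(minus, state))**2])

psi = (alpha + beta) / np.sqrt(2)                    # equal superposition, relative phase 0
psi_global = np.exp(1j * 0.9) * psi                  # same state up to an overall phase
psi_relative = (alpha + np.exp(1j * np.pi) * beta) / np.sqrt(2)   # relative phase pi

assert np.allclose(probs(psi), probs(psi_global))        # overall phase: no physical effect
assert not np.allclose(probs(psi), probs(psi_relative))  # relative phase: observable change
```

Measuring in a basis that mixes the two kets is what makes the relative phase visible; in the original |alpha⟩, |beta⟩ basis both superpositions give identical statistics.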
Amixed quantum stateis astatistical ensembleof pure states (seeQuantum statistical mechanics).[3]: 73 Mixed states arise in quantum mechanics in two different situations: first, when the preparation of the system is not fully known, and thus one must deal with astatistical ensembleof possible preparations; and second, when one wants to describe a physical system which isentangledwith another, as its state cannot be described by a pure state. In the first case, there could theoretically be another person who knows the full history of the system, and therefore describe the same system as a pure state; in this case, the density matrix is simply used to represent the limited knowledge of a quantum state. In the second case, however, the existence of quantum entanglement theoretically prevents the existence of complete knowledge about the subsystem, and it's impossible for any person to describe the subsystem of an entangled pair as a pure state. Mixed states inevitably arise from pure states when, for a composite quantum systemH1⊗H2{\displaystyle H_{1}\otimes H_{2}}with anentangledstate on it, the partH2{\displaystyle H_{2}}is inaccessible to the observer.[3]: 121–122The state of the partH1{\displaystyle H_{1}}is expressed then as thepartial traceoverH2{\displaystyle H_{2}}. A mixed statecannotbe described with a single ket vector.[21]: 691–692Instead, it is described by its associateddensity matrix(ordensity operator), usually denotedρ. Density matrices can describe both mixedandpure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert spaceH{\displaystyle H}can be always represented as the partial trace of a pure quantum state (called apurification) on a larger bipartite systemH⊗K{\displaystyle H\otimes K}for a sufficiently large Hilbert spaceK{\displaystyle K}. 
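A minimal numerical sketch of how a mixed state arises from entanglement, using the standard Bell state as an assumed example:

```python
import numpy as np

# Sketch: tracing out half of an entangled Bell pair leaves a mixed state.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())                           # pure state on H1 (x) H2

# Partial trace over the second qubit: view rho as rho[i, j, k, l] and sum over j = l.
rho_1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

assert np.allclose(rho_1, np.eye(2) / 2)          # maximally mixed reduced state

# Purity test: tr(rho^2) = 1 for the pure global state, < 1 for the reduced state.
assert np.isclose(np.trace(rho @ rho).real, 1.0)
assert np.trace(rho_1 @ rho_1).real < 1.0

# Ensemble average via the density matrix: <sigma_z> = tr(rho_1 sigma_z) = 0.
sigma_z = np.diag([1.0, -1.0])
assert np.isclose(np.trace(rho_1 @ sigma_z).real, 0.0)
```

The reduced state I/2 carries no information about which of |0⟩ or |1⟩ the first qubit "is in", which is exactly the inaccessibility of the entangled partner described above.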
The density matrix describing a mixed state is defined to be an operator of the formρ=∑sps|ψs⟩⟨ψs|{\displaystyle \rho =\sum _{s}p_{s}|\psi _{s}\rangle \langle \psi _{s}|}wherepsis the fraction of the ensemble in each pure state|ψs⟩.{\displaystyle |\psi _{s}\rangle .}The density matrix can be thought of as a way of using the one-particleformalismto describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that thetraceofρ2is equal to 1 if the state is pure, and less than 1 if the state is mixed.[d][22]Another, equivalent, criterion is that thevon Neumann entropyis 0 for a pure state, and strictly positive for a mixed state. The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observableAis given by⟨A⟩=∑sps⟨ψs|A|ψs⟩=∑s∑ipsai|⟨αi|ψs⟩|2=tr⁡(ρA){\displaystyle \langle A\rangle =\sum _{s}p_{s}\langle \psi _{s}|A|\psi _{s}\rangle =\sum _{s}\sum _{i}p_{s}a_{i}|\langle \alpha _{i}|\psi _{s}\rangle |^{2}=\operatorname {tr} (\rho A)}where|αi⟩{\displaystyle |\alpha _{i}\rangle }andai{\displaystyle a_{i}}are eigenkets and eigenvalues, respectively, for the operatorA, and "tr" denotes trace.[3]: 73It is important to note that two types of averaging are occurring, one (overi{\displaystyle i}) being the usual expected value of the observable when the quantum is in state|ψs⟩{\displaystyle |\psi _{s}\rangle }, and the other (overs{\displaystyle s}) being a statistical (saidincoherent) average with the probabilitiespsthat the quantum is in those states. States can be formulated in terms of observables, rather than as vectors in a vector space. 
These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details. The concept of quantum states, in particular the content of the section Formalism in quantum physics above, is covered in most standard textbooks on quantum mechanics. For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes for Physics 219 at Caltech.
https://en.wikipedia.org/wiki/Quantum_states
Inlinear algebra, aJordan normal form, also known as aJordan canonical form,[1][2]is anupper triangular matrixof a particular form called aJordan matrixrepresenting alinear operatoron afinite-dimensionalvector spacewith respect to somebasis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on thesuperdiagonal), and with identical diagonal entries to the left and below them. LetVbe a vector space over afieldK. Then a basis with respect to which the matrix has the required form existsif and only ifalleigenvaluesof the matrix lie inK, or equivalently if thecharacteristic polynomialof the operator splits into linear factors overK. This condition is always satisfied ifKisalgebraically closed(for instance, if it is the field ofcomplex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called thealgebraic multiplicityof the eigenvalue.[3][4][5] If the operator is originally given by asquare matrixM, then its Jordan normal form is also called the Jordan normal form ofM. Any square matrix has a Jordan normal form if the field of coefficients is extended to one containing all the eigenvalues of the matrix. In spite of its name, the normal form for a givenMis not entirely unique, as it is ablock diagonal matrixformed ofJordan blocks, the order of which is not fixed; it is conventional to group blocks for the same eigenvalue together, but no ordering is imposed among the eigenvalues, nor among the blocks for a given eigenvalue, although the latter could for instance be ordered by weakly decreasing size.[3][4][5] TheJordan–Chevalley decompositionis particularly simple with respect to a basis for which the operator takes its Jordan normal form. 
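As a concrete sketch of these definitions, a computer algebra system can produce a Jordan normal form directly. The 2×2 matrix below is a hypothetical example (not the matrix discussed later in the article): its single eigenvalue 3 has algebraic multiplicity 2 but geometric multiplicity 1, so it is not diagonalizable, yet it has a Jordan normal form.

```python
from sympy import Matrix, eye

# Hypothetical defective matrix: eigenvalue 3, one independent eigenvector.
A = Matrix([[4, 1],
            [-1, 2]])

P, J = A.jordan_form()        # SymPy returns P, J with A = P*J*P**-1

assert J == Matrix([[3, 1], [0, 3]])    # a single 2x2 Jordan block
assert P * J * P.inv() == A

# The columns of P form a Jordan chain: an eigenvector p1 and a generalized
# eigenvector p2 with (A - 3I) p2 = p1 and (A - 3I) p1 = 0.
p1, p2 = P.col(0), P.col(1)
assert (A - 3 * eye(2)) * p2 == p1
assert (A - 3 * eye(2)) * p1 == Matrix.zeros(2, 1)
```

The chain relations in the last two assertions are exactly the column-by-column content of A P = P J for a single Jordan block.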
The diagonal form fordiagonalizablematrices, for instancenormal matrices, is a special case of the Jordan normal form.[6][7][8] The Jordan normal form is named afterCamille Jordan, who first stated the Jordan decomposition theorem in 1870.[9] Some textbooks have the ones on thesubdiagonal; that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal.[10][11] Ann×nmatrixAisdiagonalizableif and only if the sum of the dimensions of the eigenspaces isn. Or, equivalently, if and only ifAhasnlinearly independenteigenvectors. Not all matrices are diagonalizable; matrices that are not diagonalizable are calleddefectivematrices. Consider the following matrix: Including multiplicity, the eigenvalues ofAareλ= 1, 2, 4, 4. Thedimensionof the eigenspace corresponding to the eigenvalue 4 is 1 (and not 2), soAis not diagonalizable. However, there is an invertible matrixPsuch thatJ=P−1AP, where The matrixJ{\displaystyle J}is almost diagonal. This is the Jordan normal form ofA. The sectionExamplebelow fills in the details of the computation. In general, a square complex matrixAissimilarto ablock diagonal matrix where each blockJiis a square matrix of the form So there exists an invertible matrixPsuch thatP−1AP=Jis such that the only non-zero entries ofJare on the diagonal and the superdiagonal.Jis called theJordan normal formofA. EachJiis called aJordan blockofA. In a given Jordan block, every entry on the superdiagonal is 1. Assuming this result, we can deduce the following properties: Consider the matrixA{\displaystyle A}from the example in the previous section. 
The Jordan normal form is obtained by somesimilarity transformation: LetP{\displaystyle P}have column vectorspi{\displaystyle p_{i}},i=1,…,4{\displaystyle i=1,\ldots ,4}, then We see that Fori=1,2,3{\displaystyle i=1,2,3}we havepi∈ker⁡(A−λiI){\displaystyle p_{i}\in \ker(A-\lambda _{i}I)}, that is,pi{\displaystyle p_{i}}is an eigenvector ofA{\displaystyle A}corresponding to the eigenvalueλi{\displaystyle \lambda _{i}}. Fori=4{\displaystyle i=4}, multiplying both sides by(A−4I){\displaystyle (A-4I)}gives But(A−4I)p3=0{\displaystyle (A-4I)p_{3}=0}, so Thus,p4∈ker⁡(A−4I)2.{\displaystyle p_{4}\in \ker(A-4I)^{2}.} Vectors such asp4{\displaystyle p_{4}}are calledgeneralized eigenvectorsofA. This example shows how to calculate the Jordan normal form of a given matrix. Consider the matrix which is mentioned in the beginning of the article. Thecharacteristic polynomialofAis This shows that the eigenvalues are 1, 2, 4 and 4, according to algebraic multiplicity. The eigenspace corresponding to the eigenvalue 1 can be found by solving the equationAv= 1v.It is spanned by the column vectorv= (−1, 1, 0, 0)T. Similarly, the eigenspace corresponding to the eigenvalue 2 is spanned byw= (1, −1, 0, 1)T. Finally, the eigenspace corresponding to the eigenvalue 4 is also one-dimensional (even though this is a double eigenvalue) and is spanned byx= (1, 0, −1, 1)T. So, thegeometric multiplicity(that is, the dimension of the eigenspace of the given eigenvalue) of each of the three eigenvalues is one. Therefore, the two eigenvalues equal to 4 correspond to a single Jordan block, and the Jordan normal form of the matrixAis thedirect sum There are threeJordan chains. Two have length one: {v} and {w}, corresponding to the eigenvalues 1 and 2, respectively. There is one chain of length two corresponding to the eigenvalue 4. To find this chain, calculate whereIis the4 × 4identity matrix. Pick a vector in the above span that is not in the kernel ofA− 4I;for example,y= (1,0,0,0)T. 
Now, (A − 4I)y = x and (A − 4I)x = 0, so {y, x} is a chain of length two corresponding to the eigenvalue 4. The transition matrix P such that P−1AP = J is formed by putting these vectors next to each other, and a computation shows that the equation P−1AP = J indeed holds. If we had interchanged the order in which the chain vectors appeared, that is, changing the order of v, w and {x, y} together, the Jordan blocks would be interchanged; however, the resulting forms are equivalent Jordan forms, differing only in the order of the blocks. Given an eigenvalue λ, every corresponding Jordan block gives rise to a Jordan chain of linearly independent vectors pi, i = 1, ..., b, where b is the size of the Jordan block. The generator, or lead vector, pb of the chain is a generalized eigenvector such that (A−λI)bpb=0{\displaystyle (A-\lambda I)^{b}p_{b}=0}. The vector p1=(A−λI)b−1pb{\displaystyle p_{1}=(A-\lambda I)^{b-1}p_{b}} is an ordinary eigenvector corresponding to λ. In general, pi is a preimage of pi−1 under A−λI{\displaystyle A-\lambda I}. So the lead vector generates the chain via multiplication by A−λI{\displaystyle A-\lambda I}.[13][2] Therefore, the statement that every square matrix A can be put in Jordan normal form is equivalent to the claim that the underlying vector space has a basis composed of Jordan chains. We give a proof by induction that any complex-valued square matrix A may be put in Jordan normal form. Since the underlying vector space can be shown[14] to be the direct sum of invariant subspaces associated with the eigenvalues, A can be assumed to have just one eigenvalue λ. The 1 × 1 case is trivial. Let A be an n×n matrix. The range of A−λI{\displaystyle A-\lambda I}, denoted by Ran⁡(A−λI){\displaystyle \operatorname {Ran} (A-\lambda I)}, is an invariant subspace of A.
Also, since λ is an eigenvalue of A, the dimension of Ran⁡(A−λI){\displaystyle \operatorname {Ran} (A-\lambda I)}, r, is strictly less than n, so, by the inductive hypothesis, Ran⁡(A−λI){\displaystyle \operatorname {Ran} (A-\lambda I)} has a basis {p1, ..., pr} composed of Jordan chains. Next consider the kernel, that is, the subspace ker⁡(A−λI){\displaystyle \ker(A-\lambda I)}. If Ran⁡(A−λI)∩ker⁡(A−λI)={0},{\displaystyle \operatorname {Ran} (A-\lambda I)\cap \ker(A-\lambda I)=\{0\},} the desired result follows immediately from the rank–nullity theorem. (This would be the case, for example, if A were Hermitian.) Otherwise, if Q=Ran⁡(A−λI)∩ker⁡(A−λI)≠{0},{\displaystyle Q=\operatorname {Ran} (A-\lambda I)\cap \ker(A-\lambda I)\neq \{0\},} let the dimension of Q be s ≤ r. Each vector in Q is an eigenvector, so Ran⁡(A−λI){\displaystyle \operatorname {Ran} (A-\lambda I)} must contain s Jordan chains corresponding to s linearly independent eigenvectors. Therefore the basis {p1, ..., pr} must contain s vectors, say {p1, ..., ps}, that are lead vectors of these Jordan chains. We can "extend the chains" by taking the preimages of these lead vectors. (This is the key step.) Let qi be such that (A−λI)qi=pi{\displaystyle (A-\lambda I)\,q_{i}=p_{i}} for i = 1, ..., s. Finally, we can pick any basis for the quotient space ker⁡(A−λI)/Q{\displaystyle \ker(A-\lambda I)/Q} and then lift it to vectors {z1, ..., zt} in ker⁡(A−λI){\displaystyle \ker(A-\lambda I)}. Each zi forms a Jordan chain of length 1. We just need to show that the union of {p1, ..., pr}, {z1, ..., zt}, and {q1, ..., qs} forms a basis for the vector space. By the rank–nullity theorem, dim⁡(ker⁡(A−λI))=n−r{\displaystyle \dim(\ker(A-\lambda I))=n-r}, so t=n−r−s{\displaystyle t=n-r-s}, and so the number of vectors in the potential basis is equal to n. To show linear independence, suppose some linear combination of the vectors is 0. Applying A−λI,{\displaystyle A-\lambda I,} we get some linear combination of the pi, with the qi becoming lead vectors among the pi. From linear independence of the pi, it follows that the coefficients of the vectors qi must be zero. Furthermore, no non-trivial linear combination of the zi can equal a linear combination of the pi, because then it would belong to Ran⁡(A−λI){\displaystyle \operatorname {Ran} (A-\lambda I)} and thus to Q, which is impossible by the construction of the zi.
Therefore the coefficients of the zi will also be 0. This leaves in the original linear combination just the pi terms, which are assumed to be linearly independent, and so their coefficients must be zero too. We have found a basis composed of Jordan chains, and this shows A can be put in Jordan normal form. It can be shown that the Jordan normal form of a given matrix A is unique up to the order of the Jordan blocks. Knowing the algebraic and geometric multiplicities of the eigenvalues is not sufficient to determine the Jordan normal form of A. Assuming the algebraic multiplicity m(λ) of an eigenvalue λ is known, the structure of the Jordan form can be ascertained by analyzing the ranks of the powers (A−λI)k{\displaystyle (A-\lambda I)^{k}} for 1 ≤ k ≤ m(λ). To see this, suppose an n×n matrix A has only one eigenvalue λ, so that m(λ) = n. The smallest integer k1 such that (A−λI)k1=0{\displaystyle (A-\lambda I)^{k_{1}}=0} is the size of the largest Jordan block in the Jordan form of A. (This number k1 is also called the index of λ. See discussion in a following section.) The rank of (A−λI)k1−1{\displaystyle (A-\lambda I)^{k_{1}-1}} is the number of Jordan blocks of size k1. Similarly, the rank of (A−λI)k1−2{\displaystyle (A-\lambda I)^{k_{1}-2}} is twice the number of Jordan blocks of size k1 plus the number of Jordan blocks of size k1 − 1. The general case is similar. This can be used to show the uniqueness of the Jordan form. Let J1 and J2 be two Jordan normal forms of A. Then J1 and J2 are similar and have the same spectrum, including algebraic multiplicities of the eigenvalues. The procedure outlined in the previous paragraph can be used to determine the structure of these matrices. Since the rank of a matrix is preserved by similarity transformation, there is a bijection between the Jordan blocks of J1 and J2. This proves the uniqueness part of the statement. If A is a real matrix, its Jordan form can still be non-real.
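As an aside, the rank computations just described can be checked numerically. The 3×3 matrix here is an assumed example, built by conjugating a known block structure (one 2×2 and one 1×1 block for a single eigenvalue) by a generic change of basis:

```python
import numpy as np

# Sketch: recovering Jordan block sizes, for a single eigenvalue lam, from
# ranks of powers of N = A - lam*I.
lam = 5.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 0.0],
              [0.0, 0.0, lam]])          # one 2x2 block and one 1x1 block

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))          # generic, hence invertible, basis change
A = P @ J @ np.linalg.inv(P)             # similar matrix hiding the block structure

N = A - lam * np.eye(3)
# rank(N^0), ..., rank(N^3); an absolute tolerance guards against
# floating-point noise in the nilpotent powers.
ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(N, k), tol=1e-8)
         for k in range(4)]

# The number of blocks of size >= k equals rank(N^(k-1)) - rank(N^k).
blocks_geq = [ranks[k - 1] - ranks[k] for k in (1, 2, 3)]
assert blocks_geq == [2, 1, 0]           # two blocks in total; the largest has size 2
```

The difference-of-ranks count is just the rank argument in the text applied block by block: each power of N kills one more superdiagonal of every block.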
Instead of representing it with complex eigenvalues and ones on the superdiagonal, as discussed above, there exists a real invertible matrix P such that P−1AP = J is a real block diagonal matrix with each block being a real Jordan block.[15] A real Jordan block is either identical to a complex Jordan block (if the corresponding eigenvalue λi{\displaystyle \lambda _{i}} is real), or is a block matrix itself, consisting of 2×2 blocks (for non-real eigenvalue λi=ai+ibi{\displaystyle \lambda _{i}=a_{i}+ib_{i}} with given algebraic multiplicity) of the form Ci=(ai−bibiai),{\displaystyle C_{i}={\begin{pmatrix}a_{i}&-b_{i}\\b_{i}&a_{i}\end{pmatrix}},} which describe multiplication by λi{\displaystyle \lambda _{i}} in the complex plane. The superdiagonal blocks are 2×2 identity matrices, and hence in this representation the matrix dimensions are larger than in the complex Jordan form. The full real Jordan block is given by Ji=(CiI2CiI2⋱⋱Ci).{\displaystyle J_{i}={\begin{pmatrix}C_{i}&I_{2}&&\\&C_{i}&I_{2}&\\&&\ddots &\ddots \\&&&C_{i}\end{pmatrix}}.} This real Jordan form is a consequence of the complex Jordan form. For a real matrix the nonreal eigenvectors and generalized eigenvectors can always be chosen to form complex conjugate pairs. Taking the real and imaginary part (linear combination of the vector and its conjugate), the matrix has this form with respect to the new basis. Jordan reduction can be extended to any square matrix M whose entries lie in a field K. The result states that any M can be written as a sum D + N where D is semisimple, N is nilpotent, and DN = ND. This is called the Jordan–Chevalley decomposition. Whenever K contains the eigenvalues of M, in particular when K is algebraically closed, the normal form can be expressed explicitly as the direct sum of Jordan blocks. Similar to the case when K is the complex numbers, knowing the dimensions of the kernels of (M−λI)k for 1 ≤ k ≤ m, where m is the algebraic multiplicity of the eigenvalue λ, allows one to determine the Jordan form of M. We may view the underlying vector space V as a K[x]-module by regarding the action of x on V as application of M and extending by K-linearity.
Then the polynomials (x−λ)k{\displaystyle (x-\lambda )^{k}} are the elementary divisors of M, and the Jordan normal form is concerned with representing M in terms of blocks associated to the elementary divisors. The proof of the Jordan normal form is usually carried out as an application to the ring K[x] of the structure theorem for finitely generated modules over a principal ideal domain, of which it is a corollary. One can see that the Jordan normal form is essentially a classification result for square matrices, and as such several important results from linear algebra can be viewed as its consequences. Using the Jordan normal form, direct calculation gives a spectral mapping theorem for the polynomial functional calculus: let A be an n×n matrix with eigenvalues λ1, ..., λn; then for any polynomial p, p(A) has eigenvalues p(λ1), ..., p(λn). The characteristic polynomial of A is pA(λ)=det(λI−A){\displaystyle p_{A}(\lambda )=\det(\lambda I-A)}. Similar matrices have the same characteristic polynomial. Therefore, pA(λ)=pJ(λ)=∏i(λ−λi)mi{\textstyle p_{A}(\lambda )=p_{J}(\lambda )=\prod _{i}(\lambda -\lambda _{i})^{m_{i}}}, where λi{\displaystyle \lambda _{i}} is the ith root of pJ{\textstyle p_{J}} and mi{\displaystyle m_{i}} is its multiplicity, because this is clearly the characteristic polynomial of the Jordan form of A. The Cayley–Hamilton theorem asserts that every matrix A satisfies its characteristic equation: if p is the characteristic polynomial of A, then pA(A)=0{\displaystyle p_{A}(A)=0}. This can be shown via direct calculation in the Jordan form, since if λi{\displaystyle \lambda _{i}} is an eigenvalue of multiplicity mi{\displaystyle m_{i}}, then every Jordan block Ji{\displaystyle J_{i}} for λi{\displaystyle \lambda _{i}} has size at most mi{\displaystyle m_{i}} and therefore satisfies (Ji−λiI)mi=0{\displaystyle (J_{i}-\lambda _{i}I)^{m_{i}}=0}. As the diagonal blocks do not affect each other, the ith diagonal block of (A−λiI)mi{\displaystyle (A-\lambda _{i}I)^{m_{i}}} is (Ji−λiI)mi{\displaystyle (J_{i}-\lambda _{i}I)^{m_{i}}}; hence pA(A)=∏i(A−λiI)mi=0{\textstyle p_{A}(A)=\prod _{i}(A-\lambda _{i}I)^{m_{i}}=0}.
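The Cayley–Hamilton theorem is easy to check numerically. The 3×3 matrix below is an assumed example; the characteristic polynomial is evaluated on the matrix itself by Horner's rule:

```python
import numpy as np

# Sketch: verifying p_A(A) = 0 for a hypothetical 3x3 matrix.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])

coeffs = np.poly(A)     # coefficients of det(lambda*I - A), leading coefficient first

# Evaluate p_A(A) = A^3 + c1*A^2 + c2*A + c3*I by Horner's rule on matrices.
result = np.zeros_like(A)
for c in coeffs:
    result = result @ A + c * np.eye(3)

assert np.allclose(result, 0)    # the matrix annihilates its characteristic polynomial
```

Horner's rule keeps the evaluation to a handful of matrix multiplications, mirroring how the polynomial would be evaluated on a scalar.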
The Jordan form can be assumed to exist over a field extending the base field of the matrix, for instance over thesplitting fieldofp; this field extension does not change the matrixp(A)in any way. Theminimal polynomialP of a square matrixAis the uniquemonic polynomialof least degree,m, such thatP(A) = 0. Alternatively, the set of polynomials that annihilate a givenAform an idealIinC[x], theprincipal ideal domainof polynomials with complex coefficients. The monic element that generatesIis preciselyP. Letλ1, ...,λqbe the distinct eigenvalues ofA, andsibe the size of the largest Jordan block corresponding toλi. It is clear from the Jordan normal form that the minimal polynomial ofAhas degreeΣsi. While the Jordan normal form determines the minimal polynomial, the converse is not true. This leads to the notion ofelementary divisors. The elementary divisors of a square matrixAare the characteristic polynomials of its Jordan blocks. The factors of the minimal polynomialmare the elementary divisors of the largest degree corresponding to distinct eigenvalues. The degree of an elementary divisor is the size of the corresponding Jordan block, therefore the dimension of the corresponding invariant subspace. If all elementary divisors are linear,Ais diagonalizable. The Jordan form of an×nmatrixAis block diagonal, and therefore gives a decomposition of thendimensional Euclidean space into invariant subspaces ofA. Every Jordan blockJicorresponds to an invariant subspaceXi. Symbolically, we put where eachXiis the span of the corresponding Jordan chain, andkis the number of Jordan chains. One can also obtain a slightly different decomposition via the Jordan form. Given an eigenvalueλi, the size of its largest corresponding Jordan blocksiis called theindexofλiand denoted byv(λi). (Therefore, the degree of the minimal polynomial is the sum of all indices.) Define a subspaceYiby This gives the decomposition wherelis the number of distinct eigenvalues ofA. 
Intuitively, we glob together the Jordan block invariant subspaces corresponding to the same eigenvalue. In the extreme case where A is a multiple of the identity matrix we have k = n and l = 1. The projection onto Yi and along all the other Yj (j ≠ i) is called the spectral projection of A at λi and is usually denoted by P(λi; A). Spectral projections are mutually orthogonal in the sense that P(λi; A) P(λj; A) = 0 if i ≠ j. Also they commute with A and their sum is the identity matrix. Replacing every λi in the Jordan matrix J by one and zeroing all other entries gives P(λi; J); moreover, if U J U−1 is the similarity transformation such that A = U J U−1, then P(λi; A) = U P(λi; J) U−1. Spectral projections are not confined to finite dimensions. See below for their application to compact operators, and in holomorphic functional calculus for a more general discussion. Comparing the two decompositions, notice that, in general, l ≤ k. When A is normal, the subspaces Xi in the first decomposition are one-dimensional and mutually orthogonal. This is the spectral theorem for normal operators. The second decomposition generalizes more easily for general compact operators on Banach spaces. It might be of interest here to note some properties of the index, ν(λ). More generally, for a complex number λ, its index can be defined as the least non-negative integer ν(λ) such that ker⁡(A−λI)ν(λ)=ker⁡(A−λI)ν(λ)+1.{\displaystyle \ker(A-\lambda I)^{\nu (\lambda )}=\ker(A-\lambda I)^{\nu (\lambda )+1}.} So ν(λ) > 0 if and only if λ is an eigenvalue of A. In the finite-dimensional case, ν(λ) ≤ the algebraic multiplicity of λ. The Jordan form is used to find a normal form of matrices up to conjugacy such that normal matrices make up an algebraic variety of a low fixed degree in the ambient matrix space. Sets of representatives of matrix conjugacy classes for Jordan normal form or rational canonical forms in general do not constitute linear or affine subspaces in the ambient matrix spaces. Vladimir Arnold posed[16] a problem: find a canonical form of matrices over a field for which the set of representatives of matrix conjugacy classes is a union of affine linear subspaces (flats).
In other words, map the set of matrix conjugacy classes injectively back into the initial set of matrices so that the image of this embedding, the set of all normal matrices, has the lowest possible degree; it is a union of shifted linear subspaces. It was solved for algebraically closed fields by Peteris Daugulis.[17]The construction of a uniquely definedplane normal formof a matrix starts by considering its Jordan normal form. Iteration of the Jordan chain motivates various extensions to more abstract settings. For finite matrices, one gets matrix functions; this can be extended to compact operators and the holomorphic functional calculus, as described further below. The Jordan normal form is the most convenient for computation of the matrix functions (though it may not be the best choice for computer computations). Letf(z) be an analytic function of a complex argument. Applying the function on an×nJordan blockJwith eigenvalueλresults in an upper triangular matrix: so that the elements of thek-th superdiagonal of the resulting matrix aref(k)(λ)k!{\displaystyle {\tfrac {f^{(k)}(\lambda )}{k!}}}. For a matrix of general Jordan normal form the above expression should be applied to each Jordan block. The following example shows the application to the power functionf(z) =zn: where the binomial coefficients are defined as(nk)=∏i=1kn+1−ii{\textstyle {\binom {n}{k}}=\prod _{i=1}^{k}{\frac {n+1-i}{i}}}. For positive integernit reduces to the standard definition of the coefficients. For negativenthe identity(−nk)=(−1)k(n+k−1k){\textstyle {\binom {-n}{k}}=(-1)^{k}{\binom {n+k-1}{k}}}may be of use. A result analogous to the Jordan normal form holds forcompact operatorson aBanach space. One restricts to compact operators because every pointxin the spectrum of a compact operatorTis an eigenvalue; the only exception is whenxis the limit point of the spectrum. This is not true for bounded operators in general.
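The superdiagonal formula for f(J), with f^(k)(λ)/k! on the k-th superdiagonal, can be verified numerically for the power function. A minimal sketch with a hypothetical 3×3 Jordan block and f(z) = z^5:

```python
import numpy as np
from math import comb

# Hypothetical 3x3 Jordan block with eigenvalue 2; apply f(z) = z^5.
lam, n, size = 2.0, 5, 3
J = lam * np.eye(size) + np.eye(size, k=1)

# Build f(J) from the superdiagonal formula: the k-th superdiagonal
# holds f^(k)(lam)/k!, which for f(z) = z^n is C(n, k) * lam^(n - k).
F = np.zeros((size, size))
for k in range(size):
    F += comb(n, k) * lam ** (n - k) * np.eye(size, k=k)

print(np.allclose(F, np.linalg.matrix_power(J, n)))  # True
```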
To give some idea of this generalization, we first reformulate the Jordan decomposition in the language offunctional analysis. LetXbe a Banach space,L(X) be the bounded operators onX, andσ(T) denote thespectrumofT∈L(X). Theholomorphic functional calculusis defined as follows: Fix a bounded operatorT. Consider the family Hol(T) of complex functions that areholomorphicon some open setGcontainingσ(T). Let Γ = {γi} be a finite collection ofJordan curvessuch thatσ(T) lies in theinsideof Γ. We definef(T) by The open setGcould vary withfand need not be connected. The integral is defined as the limit of the Riemann sums, as in the scalar case. Although the integral makes sense for continuousf, we restrict to holomorphic functions to apply the machinery from classical function theory (for example, the Cauchy integral formula). The assumption thatσ(T) lie in the inside of Γ ensuresf(T) is well defined; it does not depend on the choice of Γ. The functional calculus is the mapping Φ from Hol(T) toL(X) given by We will require the following properties of this functional calculus: In the finite-dimensional case,σ(T) = {λi} is a finite discrete set in the complex plane. Leteibe the function that is 1 in some open neighborhood ofλiand 0 elsewhere. By property 3 of the functional calculus, the operator is a projection. Moreover, letνibe the index ofλiand The spectral mapping theorem tells us has spectrum {0}. By property 1,f(T) can be directly computed in the Jordan form, and by inspection, we see that the operatorf(T)ei(T) is the zero matrix. By property 3,f(T)ei(T) =ei(T)f(T). Soei(T) is precisely the projection onto the subspace The relation implies where the indexiruns through the distinct eigenvalues ofT. This is the invariant subspace decomposition given in a previous section. Eachei(T) is the projection onto the subspace spanned by the Jordan chains corresponding toλiand along the subspaces spanned by the Jordan chains corresponding toλjforj≠i. In other words,ei(T) =P(λi;T).
This explicit identification of the operatorsei(T) in turn gives an explicit form of holomorphic functional calculus for matrices: Notice that the expression off(T) is a finite sum because, on each neighborhood ofλi, we have chosen theTaylor seriesexpansion offcentered atλi. LetTbe a bounded operator andλbe an isolated point ofσ(T). (As stated above, whenTis compact, every point in its spectrum is an isolated point, except possibly the limit point 0.) The pointλis called apoleof operatorTwith orderνif theresolventfunctionRTdefined by has apoleof orderνatλ. We will show that, in the finite-dimensional case, the order of an eigenvalue coincides with its index. The result also holds for compact operators. Consider the annular regionAcentered at the eigenvalueλwith sufficiently small radiusεsuch that the intersection of the open discBε(λ) andσ(T) is {λ}. The resolvent functionRTis holomorphic onA. Extending a result from classical function theory,RThas aLaurent seriesrepresentation onA: where By the previous discussion on the functional calculus, But we have shown that the smallest positive integermsuch that is precisely the index ofλ,ν(λ). In other words, the functionRThas a pole of orderν(λ) atλ. If the matrixAhas multiple eigenvalues, or is close to a matrix with multiple eigenvalues, then its Jordan normal form is very sensitive to perturbations. Consider for instance the matrix Ifε= 0, then the Jordan normal form is simply However, forε≠ 0, the Jordan normal form is Thisill conditioningmakes it very hard to develop a robust numerical algorithm for the Jordan normal form, as the result depends critically on whether two eigenvalues are deemed to be equal. For this reason, the Jordan normal form is usually avoided innumerical analysis; the stableSchur decomposition[18]orpseudospectra[19]are better alternatives.
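The sensitivity can be observed directly. The 2×2 matrix below is a hypothetical stand-in for the example (the concrete matrix is not reproduced in this text): a perturbation of size ε in one entry splits the double eigenvalue 1 into 1 ± √ε, so a perturbation of 1e-10 moves the eigenvalues by 1e-5:

```python
import numpy as np

# Hypothetical perturbed 2x2 Jordan block: eigenvalues are 1 +/- sqrt(eps).
for eps in (0.0, 1e-10, 1e-6):
    A = np.array([[1.0, 1.0],
                  [eps, 1.0]])
    print(eps, np.linalg.eigvals(A))
```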
https://en.wikipedia.org/wiki/Jordan_normal_form
Inmathematics, anonlinear eigenproblem, sometimesnonlinear eigenvalue problem, is a generalization of the (ordinary)eigenvalue problemto equations that dependnonlinearlyon the eigenvalue. Specifically, it refers to equations of the form wherex≠0{\displaystyle x\neq 0}is avector, andM{\displaystyle M}is amatrix-valuedfunctionof the numberλ{\displaystyle \lambda }. The numberλ{\displaystyle \lambda }is known as the (nonlinear)eigenvalue, the vectorx{\displaystyle x}as the (nonlinear)eigenvector, and(λ,x){\displaystyle (\lambda ,x)}as theeigenpair. The matrixM(λ){\displaystyle M(\lambda )}is singular at an eigenvalueλ{\displaystyle \lambda }. In the discipline ofnumerical linear algebrathe following definition is typically used.[1][2][3][4] LetΩ⊆C{\displaystyle \Omega \subseteq \mathbb {C} }, and letM:Ω→Cn×n{\displaystyle M:\Omega \rightarrow \mathbb {C} ^{n\times n}}be a function that maps scalars to matrices. A scalarλ∈C{\displaystyle \lambda \in \mathbb {C} }is called aneigenvalue, and a nonzero vectorx∈Cn{\displaystyle x\in \mathbb {C} ^{n}}is called aright eigenvectorifM(λ)x=0{\displaystyle M(\lambda )x=0}. Moreover, a nonzero vectory∈Cn{\displaystyle y\in \mathbb {C} ^{n}}is called aleft eigenvectorifyHM(λ)=0H{\displaystyle y^{H}M(\lambda )=0^{H}}, where the superscriptH{\displaystyle ^{H}}denotes theHermitian transpose. The definition of the eigenvalue is equivalent todet(M(λ))=0{\displaystyle \det(M(\lambda ))=0}, wheredet(){\displaystyle \det()}denotes thedeterminant.[1] The functionM{\displaystyle M}is usually required to be aholomorphic functionofλ{\displaystyle \lambda }(in somedomainΩ{\displaystyle \Omega }). In general,M(λ){\displaystyle M(\lambda )}could be alinear map, but most commonly it is a finite-dimensional, usually square, matrix. Definition:The problem is said to beregularif there exists az∈Ω{\displaystyle z\in \Omega }such thatdet(M(z))≠0{\displaystyle \det(M(z))\neq 0}.
Otherwise it is said to besingular.[1][4] Definition:An eigenvalueλ{\displaystyle \lambda }is said to havealgebraicmultiplicityk{\displaystyle k}ifk{\displaystyle k}is the smallest integer such that thek{\displaystyle k}thderivativeofdet(M(z)){\displaystyle \det(M(z))}with respect toz{\displaystyle z}, evaluated atλ{\displaystyle \lambda }, is nonzero. In formulas,dkdet(M(z))dzk|z=λ≠0{\displaystyle \left.{\frac {d^{k}\det(M(z))}{dz^{k}}}\right|_{z=\lambda }\neq 0}butdℓdet(M(z))dzℓ|z=λ=0{\displaystyle \left.{\frac {d^{\ell }\det(M(z))}{dz^{\ell }}}\right|_{z=\lambda }=0}forℓ=0,1,2,…,k−1{\displaystyle \ell =0,1,2,\dots ,k-1}.[1][4] Definition:Thegeometric multiplicityof an eigenvalueλ{\displaystyle \lambda }is the dimension of thenullspaceofM(λ){\displaystyle M(\lambda )}.[1][4] The following examples are special cases of the nonlinear eigenproblem. Definition:Let(λ0,x0){\displaystyle (\lambda _{0},x_{0})}be an eigenpair. A tuple of vectors(x0,x1,…,xr−1)∈Cn×Cn×⋯×Cn{\displaystyle (x_{0},x_{1},\dots ,x_{r-1})\in \mathbb {C} ^{n}\times \mathbb {C} ^{n}\times \dots \times \mathbb {C} ^{n}}is called aJordan chainif∑k=0ℓM(k)(λ0)xℓ−k=0,{\displaystyle \sum _{k=0}^{\ell }M^{(k)}(\lambda _{0})x_{\ell -k}=0,}forℓ=0,1,…,r−1{\displaystyle \ell =0,1,\dots ,r-1}, whereM(k)(λ0){\displaystyle M^{(k)}(\lambda _{0})}denotes thek{\displaystyle k}th derivative ofM{\displaystyle M}with respect toλ{\displaystyle \lambda }evaluated atλ=λ0{\displaystyle \lambda =\lambda _{0}}.
The vectorsx0,x1,…,xr−1{\displaystyle x_{0},x_{1},\dots ,x_{r-1}}are calledgeneralized eigenvectors,r{\displaystyle r}is called thelengthof the Jordan chain, and the maximal length of a Jordan chain starting withx0{\displaystyle x_{0}}is called therankofx0{\displaystyle x_{0}}.[1][4] Theorem:[1]A tuple of vectors(x0,x1,…,xr−1)∈Cn×Cn×⋯×Cn{\displaystyle (x_{0},x_{1},\dots ,x_{r-1})\in \mathbb {C} ^{n}\times \mathbb {C} ^{n}\times \dots \times \mathbb {C} ^{n}}is a Jordan chain if and only if the functionM(λ)χℓ(λ){\displaystyle M(\lambda )\chi _{\ell }(\lambda )}has arootatλ=λ0{\displaystyle \lambda =\lambda _{0}}and the root is ofmultiplicityat leastℓ{\displaystyle \ell }forℓ=0,1,…,r−1{\displaystyle \ell =0,1,\dots ,r-1}, where the vector valued functionχℓ(λ){\displaystyle \chi _{\ell }(\lambda )}is defined asχℓ(λ)=∑k=0ℓxk(λ−λ0)k.{\displaystyle \chi _{\ell }(\lambda )=\sum _{k=0}^{\ell }x_{k}(\lambda -\lambda _{0})^{k}.} Eigenvector nonlinearity is a related, but different, form of nonlinearity that is sometimes studied. In this case the functionM{\displaystyle M}maps vectors to matrices, or sometimeshermitian matricesto hermitian matrices.[13][14]
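The defining condition det(M(λ)) = 0 can be used directly in small cases. A minimal sketch with a hypothetical 2×2 M(λ) containing an exponential term (so the dependence on λ is genuinely nonlinear), finding an eigenvalue by bisection on the determinant:

```python
import numpy as np

# Hypothetical nonlinear eigenproblem: M depends on lambda through an
# exponential, so this is not a linear (or polynomial) eigenproblem.
def M(lam):
    return np.array([[np.exp(lam) - 2.0, 1.0],
                     [0.0,               lam - 3.0]])

def det(lam):
    return np.linalg.det(M(lam))

# An eigenvalue is a root of det(M(lambda)); det changes sign on [0, 1],
# so bisect that bracket.
a, b = 0.0, 1.0
fa = det(a)
while b - a > 1e-12:
    m = 0.5 * (a + b)
    if fa * det(m) <= 0:
        b = m
    else:
        a, fa = m, det(m)
lam0 = 0.5 * (a + b)
print(lam0)                           # close to ln 2 ~ 0.693147

# A right eigenvector spans the nullspace of M(lam0); here x = (1, 0)^T.
x = np.array([1.0, 0.0])
print(np.allclose(M(lam0) @ x, 0.0))  # True
```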
https://en.wikipedia.org/wiki/Nonlinear_eigenproblem
In mathematics, specifically inspectral theory, aneigenvalueof aclosed linear operatoris callednormalif the space admits a decomposition into a direct sum of a finite-dimensionalgeneralized eigenspaceand aninvariant subspacewhereA−λI{\displaystyle A-\lambda I}has a bounded inverse. The set of normal eigenvalues coincides with thediscrete spectrum. LetB{\displaystyle {\mathfrak {B}}}be aBanach space. Theroot linealLλ(A){\displaystyle {\mathfrak {L}}_{\lambda }(A)}of a linear operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}with domainD(A){\displaystyle {\mathfrak {D}}(A)}corresponding to the eigenvalueλ∈σp(A){\displaystyle \lambda \in \sigma _{p}(A)}is defined as whereIB{\displaystyle I_{\mathfrak {B}}}is the identity operator inB{\displaystyle {\mathfrak {B}}}. This set is alinear manifoldbut not necessarily avector space, since it is not necessarily closed inB{\displaystyle {\mathfrak {B}}}. If this set is closed (for example, when it is finite-dimensional), it is called thegeneralized eigenspaceofA{\displaystyle A}corresponding to the eigenvalueλ{\displaystyle \lambda }. 
Aneigenvalueλ∈σp(A){\displaystyle \lambda \in \sigma _{p}(A)}of aclosed linear operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}in theBanach spaceB{\displaystyle {\mathfrak {B}}}withdomainD(A)⊂B{\displaystyle {\mathfrak {D}}(A)\subset {\mathfrak {B}}}is callednormal(in the original terminology,λ{\displaystyle \lambda }corresponds to a normally splitting finite-dimensional root subspace), if the following two conditions are satisfied: That is, the restrictionA2{\displaystyle A_{2}}ofA{\displaystyle A}ontoNλ{\displaystyle {\mathfrak {N}}_{\lambda }}is an operator with domainD(A2)=Nλ∩D(A){\displaystyle {\mathfrak {D}}(A_{2})={\mathfrak {N}}_{\lambda }\cap {\mathfrak {D}}(A)}and with the rangeR(A2−λI)⊂Nλ{\displaystyle {\mathfrak {R}}(A_{2}-\lambda I)\subset {\mathfrak {N}}_{\lambda }}which has a bounded inverse.[1][2][3] LetA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}be a closed lineardensely defined operatorin the Banach spaceB{\displaystyle {\mathfrak {B}}}. The following statements are equivalent[4](Theorem III.88): Ifλ{\displaystyle \lambda }is a normal eigenvalue, then the root linealLλ(A){\displaystyle {\mathfrak {L}}_{\lambda }(A)}coincides with the range of the Riesz projector,R(Pλ){\displaystyle {\mathfrak {R}}(P_{\lambda })}.[3] The above equivalence shows that the set of normal eigenvalues coincides with thediscrete spectrum, defined as the set of isolated points of the spectrum with finite rank of the corresponding Riesz projector.[5] The spectrum of a closed operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}in the Banach spaceB{\displaystyle {\mathfrak {B}}}can be decomposed into the union of two disjoint sets, the set of normal eigenvalues and the fifth type of theessential spectrum:
https://en.wikipedia.org/wiki/Normal_eigenvalue
Inmathematics, thequadratic eigenvalue problem[1](QEP) is to findscalareigenvaluesλ{\displaystyle \lambda }, lefteigenvectorsy{\displaystyle y}and right eigenvectorsx{\displaystyle x}such that whereQ(λ)=λ2M+λC+K{\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K}, with matrix coefficientsM,C,K∈Cn×n{\displaystyle M,\,C,K\in \mathbb {C} ^{n\times n}}and we require thatM≠0{\displaystyle M\,\neq 0}(so that we have a nonzero leading coefficient). There are2n{\displaystyle 2n}eigenvalues that may beinfiniteor finite, and possibly zero. This is a special case of anonlinear eigenproblem.Q(λ){\displaystyle Q(\lambda )}is also known as a quadraticpolynomial matrix. A QEP is said to beregularifdet(Q(λ))≢0{\displaystyle {\text{det}}(Q(\lambda ))\not \equiv 0}identically. The coefficient of theλ2n{\displaystyle \lambda ^{2n}}term indet(Q(λ)){\displaystyle {\text{det}}(Q(\lambda ))}isdet(M){\displaystyle {\text{det}}(M)}, implying that the QEP is regular ifM{\displaystyle M}is nonsingular. Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial,λ2Q(λ−1)=λ2K+λC+M{\displaystyle \lambda ^{2}Q(\lambda ^{-1})=\lambda ^{2}K+\lambda C+M}. As there are2n{\displaystyle 2n}eigenvectors in an{\displaystyle n}dimensional space, the eigenvectors cannot be orthogonal. It is possible to have the same eigenvector attached to different eigenvalues. Quadratic eigenvalue problems arise naturally in the solution of systems of second orderlinear differential equationswithout forcing: Whereq(t)∈Rn{\displaystyle q(t)\in \mathbb {R} ^{n}}, andM,C,K∈Rn×n{\displaystyle M,C,K\in \mathbb {R} ^{n\times n}}.
If all quadratic eigenvalues ofQ(λ)=λ2M+λC+K{\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K}are distinct, then the solution can be written in terms of the quadratic eigenvalues and right quadratic eigenvectors as WhereΛ=Diag([λ1,…,λ2n])∈R2n×2n{\displaystyle \Lambda ={\text{Diag}}([\lambda _{1},\ldots ,\lambda _{2n}])\in \mathbb {R} ^{2n\times 2n}}are the quadratic eigenvalues,X=[x1,…,x2n]∈Rn×2n{\displaystyle X=[x_{1},\ldots ,x_{2n}]\in \mathbb {R} ^{n\times 2n}}are the2n{\displaystyle 2n}right quadratic eigenvectors, andα=[α1,⋯,α2n]⊤∈R2n{\displaystyle \alpha =[\alpha _{1},\cdots ,\alpha _{2n}]^{\top }\in \mathbb {R} ^{2n}}is a parameter vector determined from the initial conditions onq{\displaystyle q}andq′{\displaystyle q'}.Stability theoryfor linear systems can now be applied, as the behavior of a solution depends explicitly on the (quadratic) eigenvalues. A QEP can result in part of the dynamic analysis of structuresdiscretizedby thefinite element method. In this case the quadratic,Q(λ){\displaystyle Q(\lambda )}has the formQ(λ)=λ2M+λC+K{\displaystyle Q(\lambda )=\lambda ^{2}M+\lambda C+K}, whereM{\displaystyle M}is themass matrix,C{\displaystyle C}is thedamping matrixandK{\displaystyle K}is thestiffness matrix. Other applications include vibro-acoustics andfluid dynamics. Direct methods for solving the standard orgeneralized eigenvalue problemsAx=λx{\displaystyle Ax=\lambda x}andAx=λBx{\displaystyle Ax=\lambda Bx}are based on transforming the problem toSchurorGeneralized Schurform. However, there is no analogous form for quadratic matrix polynomials. One approach is to transform the quadraticmatrix polynomialto a linearmatrix pencil(A−λB{\displaystyle A-\lambda B}), and solve a generalized eigenvalue problem. Once eigenvalues and eigenvectors of the linear problem have been determined, eigenvectors and eigenvalues of the quadratic can be determined. 
The most common linearization is the firstcompanionlinearization with corresponding eigenvector For convenience, one often takesN{\displaystyle N}to be then×n{\displaystyle n\times n}identity matrix. We solveL(λ)z=0{\displaystyle L(\lambda )z=0}forλ{\displaystyle \lambda }andz{\displaystyle z}, for example by computing the Generalized Schur form. We can then take the firstn{\displaystyle n}components ofz{\displaystyle z}as the eigenvectorx{\displaystyle x}of the original quadraticQ(λ){\displaystyle Q(\lambda )}. Another common linearization is given by In the case when eitherA{\displaystyle A}orB{\displaystyle B}is aHamiltonian matrixand the other is askew-Hamiltonian matrix, the following linearizations can be used.
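The companion-linearization route can be sketched numerically. A minimal example with hypothetical 2×2 coefficients and N taken as the identity; since M = I here as well, the pencil matrix B = diag(N, M) is the identity and the generalized problem reduces to an ordinary eigenproblem for A:

```python
import numpy as np

# Hypothetical QEP Q(lam) = lam^2 M + lam C + K with 2x2 coefficients.
M = np.eye(2)
C = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.array([[2.0, 0.0], [0.0, 3.0]])
n = 2

# First companion linearization L(lam) = A - lam B with N = I and,
# because M = I, B = I as well.
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C]])

lams, Z = np.linalg.eig(A)        # 2n = 4 quadratic eigenvalues
for lam, z in zip(lams, Z.T):
    x = z[:n]                     # first n components: quadratic eigenvector
    Q = lam**2 * M + lam * C + K
    assert np.allclose(Q @ x, 0.0, atol=1e-8)
print(len(lams))  # 4
```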
https://en.wikipedia.org/wiki/Quadratic_eigenvalue_problem
Inmathematics, thespectrumof amatrixis thesetof itseigenvalues.[1][2][3]More generally, ifT:V→V{\displaystyle T\colon V\to V}is alinear operatoron anyfinite-dimensionalvector space, its spectrum is the set of scalarsλ{\displaystyle \lambda }such thatT−λI{\displaystyle T-\lambda I}is notinvertible. Thedeterminantof the matrix equals the product of its eigenvalues. Similarly, thetraceof the matrix equals the sum of its eigenvalues.[4][5][6]From this point of view, we can define thepseudo-determinantfor asingular matrixto be the product of its nonzero eigenvalues (the density ofmultivariate normal distributionwill need this quantity). In many applications, such asPageRank, one is interested in the dominant eigenvalue, i.e. that which is largest inabsolute value. In other applications, the smallest eigenvalue is important, but in general, the whole spectrum provides valuable information about a matrix. LetVbe a finite-dimensionalvector spaceover somefieldKand supposeT:V→Vis a linear map. ThespectrumofT, denoted σT, is themultisetofrootsof thecharacteristic polynomialofT. Thus the elements of the spectrum are precisely the eigenvalues ofT, and the multiplicity of an eigenvalueλin the spectrum equals the dimension of thegeneralized eigenspaceofTforλ(also called thealgebraic multiplicityofλ). Now, fix abasisBofVoverKand supposeM∈ MatK(V) is a matrix. Define the linear mapT:V→Vpointwise byTx=Mx, where on the right-hand sidexis interpreted as a column vector andMacts onxbymatrix multiplication. We now say thatx∈Vis aneigenvectorofMifxis an eigenvector ofT. Similarly, λ ∈Kis an eigenvalue ofMif it is an eigenvalue ofT, and with the same multiplicity, and the spectrum ofM, written σM, is the multiset of all such eigenvalues. Theeigendecomposition(or spectral decomposition) of adiagonalizable matrixis adecompositionof a diagonalizable matrix into a specific canonical form whereby the matrix is represented in terms of its eigenvalues and eigenvectors. 
Thespectral radiusof asquare matrixis the largest absolute value of its eigenvalues. Inspectral theory, the spectral radius of abounded linear operatoris thesupremumof the absolute values of the elements in the spectrum of that operator.
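The identities above, trace equals the sum of the eigenvalues, determinant equals their product, spectral radius equals the largest absolute value, can be checked on a small example; the triangular matrix below is a hypothetical illustration whose spectrum can be read off the diagonal:

```python
import numpy as np

# Hypothetical triangular 3x3 matrix: its spectrum is {2, 3, -4}.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, -4.0]])

spectrum = np.linalg.eigvals(A)

print(np.isclose(spectrum.sum(), np.trace(A)))         # True: trace = sum
print(np.isclose(spectrum.prod(), np.linalg.det(A)))   # True: det = product
print(np.abs(spectrum).max())                          # spectral radius: 4.0
```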
https://en.wikipedia.org/wiki/Spectrum_of_a_matrix
Incomputer science, aball tree,balltree[1]ormetric tree, is aspace partitioningdata structurefor organizing points in a multi-dimensional space. A ball tree partitions data points into a nested set ofballs. The resulting data structure has characteristics that make it useful for a number of applications, most notablynearest neighbor search. A ball tree is abinary treein which every node defines a D-dimensional ball containing a subset of the points to be searched. Each internal node of the tree partitions the data points into twodisjoint setswhich are associated with different balls. While the balls themselves may intersect, each point is assigned to one or the other ball in the partition according to its distance from the ball's center. Each leaf node in the tree defines a ball and enumerates all data points inside that ball. Each node in the tree defines the smallest ball that contains all data points in its subtree. This gives rise to the useful property that, for a given test pointtoutside the ball, the distance to any point in a ballBin the tree is greater than or equal to the distance fromtto the surface of the ball. Formally:[2] WhereDB(t){\displaystyle D^{B}(t)}is the minimum possible distance from any point in the ballBto some pointt. Ball-trees are related to theM-tree, but only support binary splits, whereas in the M-tree each level splitsm{\displaystyle m}to2m{\displaystyle 2m}fold, leading to a shallower tree structure that needs fewer distance computations, which usually yields faster queries. Furthermore, M-trees can better be storedon disk, which is organized inpages. The M-tree also keeps the distances from the parent node precomputed to speed up queries. Vantage-point treesare also similar, but they perform a binary split into one ball and the remaining data, instead of using two balls.
A number of ball tree construction algorithms are available.[1]The goal of such an algorithm is to produce a tree that will efficiently support queries of the desired type (e.g. nearest-neighbor) in the average case. The specific criteria of an ideal tree will depend on the type of question being answered and the distribution of the underlying data. However, a generally applicable measure of an efficient tree is one that minimizes the total volume of its internal nodes. Given the varied distributions of real-world data sets, this is a difficult task, but there are several heuristics that partition the data well in practice. In general, there is a tradeoff between the cost of constructing a tree and the efficiency achieved by this metric.[2] This section briefly describes the simplest of these algorithms. A more in-depth discussion of five algorithms was given by Stephen Omohundro.[1] The simplest such procedure is termed the "k-d Construction Algorithm", by analogy with the process used to constructk-d trees. This is anoffline algorithm, that is, an algorithm that operates on the entire data set at once. The tree is built top-down by recursively splitting the data points into two sets. Splits are chosen along the single dimension with the greatest spread of points, with the sets partitioned by the median value of all points along that dimension. Finding the split for each internal node requires linear time in the number of samples contained in that node, yielding an algorithm withtime complexityO(nlogn){\displaystyle O(n\,\log \,n)}, wherenis the number of data points. An important application of ball trees is expeditingnearest neighbor searchqueries, in which the objective is to find the k points in the tree that are closest to a given test point by some distance metric (e.g.Euclidean distance). A simple search algorithm, sometimes called KNS1, exploits the distance property of the ball tree. 
In particular, if the algorithm is searching the data structure with a test pointt, and has already seen some pointpthat is closest totamong the points encountered so far, then any subtree whose ball is further fromtthanpcan be ignored for the rest of the search. The ball tree nearest-neighbor algorithm examines nodes in depth-first order, starting at the root. During the search, the algorithm maintains a max-firstpriority queue(often implemented with aheap), denotedQhere, of the k nearest points encountered so far. At each nodeB, it may perform one of three operations, before finally returning an updated version of the priority queue: Performing the recursive search in the order described in point 3 above increases the likelihood that the further child will be pruned entirely during the search. In comparison with several other data structures, ball trees have been shown to perform fairly well on the nearest-neighbor search problem, particularly as their number of dimensions grows.[3][4]However, the best nearest-neighbor data structure for a given application will depend on the dimensionality, number of data points, and underlying structure of the data.
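The construction and search described above can be sketched together. This is a minimal illustration, not a reference implementation (the class and function names are invented here): the build splits on the dimension of greatest spread at the median, and the search prunes a subtree whenever the ball's distance lower bound cannot beat the best distance found so far:

```python
import numpy as np

class BallNode:
    """Minimal ball tree node: leaf stores points, internal node splits."""
    def __init__(self, points):
        self.center = points.mean(axis=0)
        self.radius = np.linalg.norm(points - self.center, axis=1).max()
        if len(points) <= 2:                       # leaf: enumerate points
            self.points, self.left, self.right = points, None, None
        else:
            dim = np.ptp(points, axis=0).argmax()  # greatest spread
            order = points[:, dim].argsort()       # split at the median
            mid = len(points) // 2
            self.points = None
            self.left = BallNode(points[order[:mid]])
            self.right = BallNode(points[order[mid:]])

def nearest(node, t, best=None):
    # Prune: no point inside this ball can beat the current best.
    lower = max(np.linalg.norm(t - node.center) - node.radius, 0.0)
    if best is not None and lower >= best[0]:
        return best
    if node.points is not None:                    # leaf: scan points
        for p in node.points:
            d = np.linalg.norm(t - p)
            if best is None or d < best[0]:
                best = (d, p)
        return best
    # Visit the nearer child first to tighten the bound early.
    kids = sorted((node.left, node.right),
                  key=lambda c: np.linalg.norm(t - c.center))
    for child in kids:
        best = nearest(child, t, best)
    return best

rng = np.random.default_rng(0)
pts = rng.random((100, 3))
tree = BallNode(pts)
d, p = nearest(tree, np.zeros(3))
# Same answer as brute force over all 100 points:
print(np.isclose(d, np.linalg.norm(pts, axis=1).min()))  # True
```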
https://en.wikipedia.org/wiki/Ball_tree
Theclosest pair of points problemorclosest pair problemis a problem ofcomputational geometry: givenn{\displaystyle n}points inmetric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane[1]was among the first geometric problems that were treated at the origins of the systematic study of thecomputational complexityof geometric algorithms. Randomized algorithms that solve the problem inlinear timeare known, inEuclidean spaceswhose dimension is treated as a constant for the purposes ofasymptotic analysis.[2][3][4]This is significantly faster than theO(n2){\displaystyle O(n^{2})}time (expressed here inbig O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest. It is also possible to solve the problem without randomization, inrandom-access machinemodels of computationwith unlimited memory that allow the use of thefloor function, in near-linearO(nlog⁡log⁡n){\displaystyle O(n\log \log n)}time.[5]In even more restricted models of computation, such as thealgebraic decision tree, the problem can be solved in the somewhat slowerO(nlog⁡n){\displaystyle O(n\log n)}time bound,[6]and this is optimal for this model, by a reduction from theelement uniqueness problem. Bothsweep line algorithmsanddivide-and-conquer algorithmswith this slower time bound are commonly taught as examples of these algorithm design techniques.[7][8] A linearexpected timerandomized algorithm ofRabin (1976), modified slightly byRichard Liptonto make its analysis easier, proceeds as follows, on an input setS{\displaystyle S}consisting ofn{\displaystyle n}points in ak{\displaystyle k}-dimensional Euclidean space: The algorithm will always correctly determine the closest pair, because it maps any pair closer than distanced{\displaystyle d}to the same grid point or to adjacent grid points. 
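The grid step just described can be sketched as follows, assuming an estimate d no smaller than the true closest distance so that the closest pair is guaranteed to land in the same or adjacent cells; the function name and sample points are illustrative:

```python
from collections import defaultdict
import itertools
import math

def closest_pair_with_grid(points, d):
    """Find the closest pair given an upper-bound estimate d on its distance.

    Each point is hashed to a square cell of side d; only points in the
    same or adjacent cells need to be compared.
    """
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(math.floor(x / d), math.floor(y / d))].append(i)
    best = (float("inf"), None)
    for (cx, cy), bucket in grid.items():
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            for i in bucket:
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j:                      # count each pair once
                        dist = math.dist(points[i], points[j])
                        if dist < best[0]:
                            best = (dist, (i, j))
    return best

pts = [(0.0, 0.0), (5.0, 5.0), (5.1, 5.2), (9.0, 1.0)]
print(closest_pair_with_grid(pts, 1.0))  # pair (1, 2), distance sqrt(0.05)
```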
The uniform sampling of pairs in the first step of the algorithm (compared to a different method of Rabin for sampling a similar number of pairs) simplifies the proof that the expected number of distances computed by the algorithm is linear.[4] Instead, a different algorithm ofKhuller & Matias (1995)goes through two phases: a random iterated filtering process that approximates the closest distance to within anapproximation ratioof2k{\displaystyle 2{\sqrt {k}}}, together with a finishing step that turns this approximate distance into the exact closest distance. The filtering process repeats the following steps, untilS{\displaystyle S}becomes empty: The approximate distance found by this filtering process is the final value ofd{\displaystyle d}, computed in the step beforeS{\displaystyle S}becomes empty. Each step removes all points whose closest neighbor is at distanced{\displaystyle d}or greater; in expectation this is at least half of the points, from which it follows that the total expected time for filtering is linear. Once an approximate value ofd{\displaystyle d}is known, it can be used for the final steps of Rabin's algorithm; in these steps each grid point has a constant number of inputs rounded to it, so again the time is linear.[3] Thedynamic versionfor the closest-pair problem is stated as follows: If thebounding boxfor all points is known in advance and the constant-time floor function is available, then an expectedO(n){\displaystyle O(n)}-space data structure was suggested that supports expected-timeO(log⁡n){\displaystyle O(\log n)}insertions and deletions and constant query time. When modified for the algebraic decision tree model, insertions and deletions would requireO(log2⁡n){\displaystyle O(\log ^{2}n)}expected time.[9]The complexity of the dynamic closest pair algorithm cited above is exponential in the dimensiond{\displaystyle d}, and therefore such an algorithm becomes less suitable for high-dimensional problems.
An algorithm for the dynamic closest-pair problem ind{\displaystyle d}dimensional space was developed by Sergey Bespamyatnikh in 1998.[10]Points can be inserted and deleted inO(log⁡n){\displaystyle O(\log n)}time per point (in the worst case).
https://en.wikipedia.org/wiki/Closest_pair_of_points_problem
Content-based image retrieval, also known asquery by image content(QBIC) andcontent-based visual information retrieval(CBVIR), is the application ofcomputer visiontechniques to theimage retrievalproblem, that is, the problem of searching fordigital imagesin largedatabases(see this survey[1]for a scientific overview of the CBIR field). Content-based image retrieval is opposed to traditionalconcept-based approaches(seeConcept-based image indexing). "Content-based" means that the search analyzes the contents of the image rather than themetadatasuch as keywords, tags, or descriptions associated with the image. The term "content" in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because searches that rely purely on metadata are dependent onannotationquality and completeness. Animage meta searchrequires humans to have manually annotated images by entering keywords or metadata in a large database, which can be time-consuming and may not capture the keywords desired to describe the image. The evaluation of the effectiveness of keyword image search is subjective and has not been well-defined; in the same regard, CBIR systems have similar challenges in defining success.[2]Keywords also limit the scope of queries to the set of predetermined criteria, and, once set up, are less reliable than using the content itself.[3] The term "content-based image retrieval" seems to have originated in 1992 when it was used by JapaneseElectrotechnical Laboratoryengineer Toshikazu Kato to describe experiments into automatic retrieval of images from a database, based on the colors and shapes present.[2][4]Since then, the term has been used to describe the process of retrieving desired images from a large collection on the basis of syntactical image features.
The techniques, tools, and algorithms that are used originate from fields such as statistics, pattern recognition, signal processing, and computer vision.[1] The earliest commercial CBIR system was developed by IBM and was calledQBIC(QueryByImageContent).[5][6]Recent network- and graph-based approaches have presented a simple and attractive alternative to existing methods.[7] While the storing of multiple images as part of a single entity preceded the termBLOB(BinaryLargeOBject),[8]the ability to fully search by content, rather than by description, had to await IBM's QBIC.[3] VisualRankis a system forfindingand ranking images by analysing and comparing their content, rather than searching image names, Web links or other text.Googlescientists made their VisualRank work public in a paper describing applyingPageRankto Google image search at the International World Wide Web Conference inBeijingin 2008. The interest in CBIR has grown because of the limitations inherent in metadata-based systems, as well as the large range of possible uses for efficient image retrieval. Textual information about images can be easily searched using existing technology, but this requires humans to manually describe each image in the database. This can be impractical for very large databases or for images that are generated automatically, e.g. those fromsurveillance cameras. It is also possible to miss images that use different synonyms in their descriptions. Systems based on categorizing images in semantic classes like "cat" as a subclass of "animal" can avoid the miscategorization problem, but will require more effort by a user to find images that might be "cats", but are only classified as an "animal". Many standards have been developed to categorize images, but all still face scaling and miscategorization issues.[2] Initial CBIR systems were developed to search databases based on image color, texture, and shape properties. 
After these systems were developed, the need for user-friendly interfaces became apparent. Therefore, efforts in the CBIR field started to include human-centered design that tried to meet the needs of the user performing the search. This typically means inclusion of: query methods that may allow descriptive semantics, queries that may involve user feedback, systems that may include machine learning, and systems that may understand user satisfaction levels.[1] Many CBIR systems have been developed, but as of 2006[update], the problem of retrieving images on the basis of their pixel content remains largely unsolved.[1][needs update] Different query techniques and implementations of CBIR make use of different types of user queries. QBE(QueryByExample) is a query technique[11]that involves providing the CBIR system with an example image that it will then base its search upon. The underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example.[12] Options for providing example images to the system include: This query technique removes the difficulties that can arise when trying to describe images with words. Semanticretrieval starts with a user making a request like "find pictures of Abraham Lincoln". This type of open-ended task is very difficult for computers to perform - Lincoln may not always be facing the camera or in the samepose. Many CBIR systems therefore generally make use of lower-level features like texture, color, and shape. These features are either used in combination with interfaces that allow easier input of the criteria or with databases that have already been trained to match features (such as faces, fingerprints, or shape matching). However, in general, image retrieval requires human feedback in order to identify higher-level concepts.[6] Combining CBIR search techniques available with the wide range of potential users and their intent can be a difficult task. 
An aspect of making CBIR successful relies entirely on the ability to understand the user intent.[13]CBIR systems can make use ofrelevance feedback, where the user progressively refines the search results by marking images in the results as "relevant", "not relevant", or "neutral" to the search query, then repeating the search with the new information. Examples of this type of interface have been developed.[14] Machine learningand application of iterative techniques are becoming more common in CBIR.[15] Other query methods include browsing for example images, navigating customized/hierarchical categories, querying by image region (rather than the entire image), querying by multiple example images, querying by visual sketch, querying by direct specification of image features, andmultimodalqueries (e.g. combining touch, voice, etc.)[16] The most common method for comparing two images in content-based image retrieval (typically an example image and an image from the database) is using an image distance measure. An image distance measure compares thesimilarityof two images in various dimensions such as color, texture, shape, and others. For example, a distance of 0 signifies an exact match with the query, with respect to the dimensions that were considered. As one may intuitively gather, a value greater than 0 indicates various degrees of similarities between the images. 
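The relevance-feedback loop described above can be sketched with a Rocchio-style query update (an illustrative choice; the article does not prescribe a specific algorithm, and the function and parameter names here are hypothetical). The query vector is moved toward the mean of images the user marked "relevant" and away from those marked "not relevant":

```python
import numpy as np

def rocchio_update(query_vec, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    # Rocchio-style relevance feedback: pull the query toward the mean
    # feature vector of the relevant images and push it away from the
    # mean of the non-relevant ones. Weights alpha/beta/gamma are the
    # conventional defaults, not values mandated by any CBIR system.
    rel = np.mean(relevant, axis=0) if len(relevant) else 0.0
    non = np.mean(non_relevant, axis=0) if len(non_relevant) else 0.0
    return alpha * np.asarray(query_vec, float) + beta * rel - gamma * non
```

Repeating the search with the updated vector implements one round of the "mark, then search again" loop.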
Search results then can be sorted based on their distance to the queried image.[12]Many measures of image distance (Similarity Models) have been developed.[17] Computing distance measures based on color similarity is achieved by computing acolor histogramfor each image that identifies the proportion of pixels within an image holding specific values.[2]Examining images based on the colors they contain is one of the most widely used techniques because it can be completed without regard to image size or orientation.[6]However, research has also attempted to segment color proportion by region and by spatial relationship among several color regions.[16] Texturemeasures look for visual patterns in images and how they are spatially defined. Textures are represented bytexelswhich are then placed into a number of sets, depending on how many textures are detected in the image. These sets not only define the texture, but also where in the image the texture is located.[12] Texture is a difficult concept to represent. The identification of specific textures in an image is achieved primarily by modeling texture as a two-dimensional gray level variation. The relative brightness of pairs of pixels is computed such that degree of contrast, regularity, coarseness and directionality may be estimated.[6][18]The problem is in identifying patterns of co-pixel variation and associating them with particular classes of textures such assilky, orrough. Other methods of classifying textures include: Shape does not refer to the shape of an image but to the shape of a particular region that is being sought out. Shapes will often be determined first applyingsegmentationoredge detectionto an image. 
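The color-histogram comparison described above can be sketched as follows (a minimal version with illustrative names; it builds a joint quantized RGB histogram and compares two histograms by histogram intersection). Normalizing the histogram is what makes the comparison independent of image size, as noted in the text:

```python
import numpy as np

def color_histogram(image, bins=8):
    # image: H x W x 3 array of uint8 RGB values. Quantize each channel
    # into `bins` levels, count joint occurrences, and normalize so the
    # histogram sums to 1 (making it invariant to image size).
    quant = (image.astype(np.int64) * bins) // 256
    idx = quant[..., 0] * bins * bins + quant[..., 1] * bins + quant[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    # Similarity in [0, 1]; 1 means identical color distributions.
    return np.minimum(h1, h2).sum()
```

Two images of different sizes but identical color content score 1.0, while images with disjoint colors score 0.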
Other methods use shape filters to identify given shapes of an image.[19]Shape descriptors may also need to be invariant to translation, rotation, and scale.[6] Some shape descriptors include:[6] Like other tasks incomputer visionsuch as recognition and detection, recent neural network based retrieval algorithms are susceptible toadversarial attacks, both as candidate and the query attacks.[20]It is shown that retrieved ranking could be dramatically altered with only small perturbations imperceptible to human beings. In addition, model-agnostic transferable adversarial examples are also possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations.[20][21] Conversely, the resistance to such attacks can be improved via adversarial defenses such as the Madry defense.[22] Measures of image retrieval can be defined in terms ofprecision and recall. However, there are other methods being considered.[23] An image is retrieved in CBIR system by adopting several techniques simultaneously such as Integrating Pixel Cluster Indexing, histogram intersection and discrete wavelet transform methods.[24] Potential uses for CBIR include:[2] Commercial Systems that have been developed include:[2] Experimental Systems include:[2]
https://en.wikipedia.org/wiki/Content-based_image_retrieval
Dimensionality reduction, ordimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to itsintrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are oftensparseas a consequence of thecurse of dimensionality, and analyzing the data is usuallycomputationally intractable. Dimensionality reduction is common in fields that deal with large numbers of observations and/or large numbers of variables, such assignal processing,speech recognition,neuroinformatics, andbioinformatics.[1] Methods are commonly divided into linear and nonlinear approaches.[1]Linear approaches can be further divided intofeature selectionandfeature extraction.[2]Dimensionality reduction can be used fornoise reduction,data visualization,cluster analysis, or as an intermediate step to facilitate other analyses. The process offeature selectionaims to find a suitable subset of the input variables (features, orattributes) for the task at hand. The three strategies are: thefilterstrategy (e.g.,information gain), thewrapperstrategy (e.g., accuracy-guided search), and theembeddedstrategy (features are added or removed while building the model based on prediction errors). Data analysissuch asregressionorclassificationcan be done in the reduced space more accurately than in the original space.[3] Feature projection (also called feature extraction) transforms the data from thehigh-dimensional spaceto a space of fewer dimensions. 
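The filter strategy for feature selection mentioned above scores each feature independently of any model; a minimal information-gain filter for discrete features might look like this (a sketch, with illustrative names):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (in bits) of a label vector.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    # Filter-strategy score: the reduction in label entropy obtained by
    # splitting the data on the (discrete) feature's values. Features
    # can then be ranked by this score and the top subset kept.
    total = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        total -= mask.mean() * entropy(labels[mask])
    return total
```

A feature that perfectly predicts the labels scores the full label entropy; an irrelevant feature scores zero.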
The data transformation may be linear, as inprincipal component analysis(PCA), but manynonlinear dimensionality reductiontechniques also exist.[4][5]For multidimensional data,tensor representationcan be used in dimensionality reduction throughmultilinear subspace learning.[6] The main linear technique for dimensionality reduction, principal component analysis, performs a linear mapping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional representation is maximized. In practice, thecovariance(and sometimes thecorrelation)matrixof the data is constructed and theeigenvectorson this matrix are computed. The eigenvectors that correspond to the largest eigenvalues (the principal components) can now be used to reconstruct a large fraction of the variance of the original data. Moreover, the first few eigenvectors can often be interpreted in terms of the large-scale physical behavior of the system, because they often contribute the vast majority of the system's energy, especially in low-dimensional systems. Still, this must be proved on a case-by-case basis as not all systems exhibit this behavior. 
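The eigenvector construction described above can be sketched directly (a minimal version; production implementations usually go through an SVD of the centered data for numerical stability):

```python
import numpy as np

def pca(X, k):
    # Center the data, form the covariance matrix, and keep the
    # eigenvectors belonging to the k largest eigenvalues as the
    # projection basis (the principal components).
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    components = eigvecs[:, ::-1][:, :k]     # top-k by eigenvalue
    return Xc @ components, components
```

For data lying on a line in the plane, the single retained component is the direction of that line, and the projection preserves all of the variance.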
The original space (with dimension of the number of points) has been reduced (with data loss, but hopefully retaining the most important variance) to the space spanned by a few eigenvectors.[citation needed] NMF decomposes a non-negative matrix to the product of two non-negative ones, which has been a promising tool in fields where only non-negative signals exist,[7][8]such as astronomy.[9][10]NMF is well known since the multiplicative update rule by Lee & Seung,[7]which has been continuously developed: the inclusion of uncertainties,[9]the consideration of missing data and parallel computation,[11]sequential construction[11]which leads to the stability and linearity of NMF,[10]as well as otherupdatesincluding handling missing data indigital image processing.[12] With a stable component basis during construction, and a linear modeling process,sequential NMF[11]is able to preserve the flux in direct imaging of circumstellar structures in astronomy,[10]as one of themethods of detecting exoplanets, especially for the direct imaging ofcircumstellar discs. In comparison with PCA, NMF does not remove the mean of the matrices, which leads to physical non-negative fluxes; therefore NMF is able to preserve more information than PCA as demonstrated by Ren et al.[10] Principal component analysis can be employed in a nonlinear way by means of thekernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is calledkernel PCA. Other prominent nonlinear techniques includemanifold learningtechniques such asIsomap,locally linear embedding(LLE),[13]Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis.[14]These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA. 
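The Lee & Seung multiplicative update rule for NMF mentioned above can be sketched in a few lines (a bare-bones version; real NMF codes add convergence checks and the extensions discussed in the text, such as uncertainty handling and sequential construction):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    # Lee & Seung multiplicative updates: factor a non-negative matrix
    # V (m x n) as W @ H with W (m x r) and H (r x n) entrywise >= 0.
    # Each update multiplies by a ratio of non-negative terms, so
    # non-negativity is preserved automatically.
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On an exactly low-rank non-negative matrix, the reconstruction W @ H typically approaches V closely.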
More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel usingsemidefinite programming. The most prominent example of such a technique ismaximum variance unfolding(MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space) while maximizing the distances between points that are not nearest neighbors. An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include: classicalmultidimensional scaling, which is identical to PCA;Isomap, which uses geodesic distances in the data space;diffusion maps, which use diffusion distances in the data space;t-distributed stochastic neighbor embedding(t-SNE), which minimizes the divergence between distributions over pairs of points; and curvilinear component analysis. A different approach to nonlinear dimensionality reduction is through the use ofautoencoders, a special kind offeedforward neural networkswith a bottleneck hidden layer.[15]The training of deep encoders is typically performed using a greedy layer-wise pre-training (e.g., using a stack ofrestricted Boltzmann machines) that is followed by a finetuning stage based onbackpropagation. Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. GDA deals with nonlinear discriminant analysis using kernel function operator. 
The underlying theory is close to thesupport-vector machines(SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space.[16][17]Similar to LDA, the objective of GDA is to find a projection for the features into a lower dimensional space by maximizing the ratio of between-class scatter to within-class scatter. Autoencoders can be used to learn nonlinear dimension reduction functions and codings together with an inverse function from the coding to the original representation. T-distributed Stochastic Neighbor Embedding (t-SNE) is a nonlinear dimensionality reduction technique useful for the visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well.[18] Uniform manifold approximation and projection(UMAP) is a nonlinear dimensionality reduction technique. Visually, it is similar to t-SNE, but it assumes that the data is uniformly distributed on alocally connectedRiemannian manifoldand that theRiemannian metricis locally constant or approximately locally constant. For high-dimensional datasets, dimension reduction is usually performed prior to applying ak-nearest neighbors(k-NN) algorithm in order to mitigate thecurse of dimensionality.[19] Feature extractionand dimension reduction can be combined in one step, usingprincipal component analysis(PCA),linear discriminant analysis(LDA),canonical correlation analysis(CCA), ornon-negative matrix factorization(NMF) techniques to pre-process the data, followed by clustering viak-NN onfeature vectorsin a reduced-dimension space. 
Inmachine learning, this process is also called low-dimensionalembedding.[20] For high-dimensional datasets (e.g., when performing similarity search on live video streams, DNA data, or high-dimensionaltime series), running a fastapproximatek-NN search usinglocality-sensitive hashing,random projection,[21]"sketches",[22]or other high-dimensional similarity search techniques from theVLDB conferencetoolbox may be the only feasible option. A dimensionality reduction technique that is sometimes used inneuroscienceismaximally informative dimensions,[23]which finds a lower-dimensional representation of a dataset such that as muchinformationas possible about the original data is preserved.
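The random-projection technique mentioned above for fast approximate similarity search can be sketched as follows (a minimal Gaussian projection; `k` must be chosen large enough for the distance distortion to be acceptable):

```python
import numpy as np

def random_projection(X, k, seed=0):
    # Project n x d data onto k random Gaussian directions. By the
    # Johnson-Lindenstrauss lemma, pairwise Euclidean distances are
    # approximately preserved with high probability when k is
    # sufficiently large, independent of the original dimension d.
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R
```

The projected points can then be fed to a k-NN search in the lower-dimensional space.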
https://en.wikipedia.org/wiki/Dimension_reduction
Incomputational geometry, thefixed-radius near neighbor problemis a variant of thenearest neighbor searchproblem. In the fixed-radius near neighbor problem, one is given as input a set of points ind-dimensionalEuclidean spaceand a fixed distance Δ. One must design a data structure that, given a query pointq, efficiently reports the points of the data structure that are within distance Δ ofq. The problem has long been studied;Bentley (1975)cites a 1966 paper by Levinthal that uses this technique as part of a system for visualizing molecular structures, and it has many other applications.[1] One method for solving the problem is to round the points to aninteger lattice, scaled so that the distance between grid points is the desired distance Δ. Ahash tablecan be used to find, for each input point, the other inputs that are mapped to nearby grid points, which can then be tested for whether their unrounded positions are actually within distance Δ. The number of pairs of points tested by this procedure, and the time for the procedure, is linear in the combined input and output size when the dimension is a fixed constant. However, theconstant of proportionalityin the linear time boundgrows exponentiallyas a function of the dimension.[2]Using this method, it is possible to constructindifference graphsandunit disk graphsfrom geometric data in linear time. Modern parallel methods for GPU are able to efficiently compute all pairs fixed-radius NNS. For finite domains, the method of Green[3]shows the problem can be solved by sorting on a uniform grid, finding all neighbors of all particles in O(kn) time, where k is proportional to the average number of neighbors. Hoetzlein[4]improves this further on modern hardware with counting sorting and atomic operations. The fixed-radius near neighbors problem arises in continuous Lagrangian simulations (such as smoothed particle hydrodynamics), computational geometry, and point cloud problems (surface reconstructions).
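The grid-rounding method described above can be sketched as follows (a minimal version using a hash table of integer cells; function names are illustrative). A point within distance Δ of the query must fall in the query's grid cell or one of its 3^d neighboring cells, so only those buckets are scanned and then verified against the unrounded positions:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def fixed_radius_neighbors(points, query, delta):
    # Bucket every point by its integer grid cell of side length delta.
    points = np.asarray(points, dtype=float)
    d = points.shape[1]
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / delta).astype(int))].append(i)
    # Scan the 3^d cells around the query's cell and verify exact distance.
    q = np.asarray(query, dtype=float)
    q_cell = np.floor(q / delta).astype(int)
    out = []
    for offset in product((-1, 0, 1), repeat=d):
        for i in grid.get(tuple(q_cell + np.array(offset)), []):
            if np.linalg.norm(points[i] - q) <= delta:
                out.append(i)
    return sorted(out)
```

For fixed dimension the work is linear in input plus output size, though the 3^d factor grows exponentially with d, as the text notes.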
https://en.wikipedia.org/wiki/Fixed-radius_near_neighbors
In machine learning, instance-based learning (sometimes called memory-based learning[1]) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy."[2] It is called instance-based because it constructs hypotheses directly from the training instances themselves.[3] This means that the hypothesis complexity can grow with the data:[3] in the worst case, a hypothesis is a list of n training items and the computational complexity of classifying a single new instance is O(n). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Instance-based learners may simply store a new instance or throw an old instance away. Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines and RBF networks.[2]: ch. 8 These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision. To battle the memory complexity of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed.[4]
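The store-then-compare behaviour described above can be sketched with a minimal 1-NN memory-based learner (class and method names are illustrative):

```python
import numpy as np

class MemoryLearner:
    # "Training" just stores the instances; all work is deferred to
    # query time, which is why such learners are called lazy.
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self

    def predict_one(self, x):
        # O(n) scan over the stored instances: return the label of the
        # single closest one (the 1-nearest-neighbor rule).
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        return self.y[np.argmin(d)]
```

Note that the per-query cost grows linearly with the number of stored instances, which is exactly the O(n) classification complexity mentioned above.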
https://en.wikipedia.org/wiki/Instance-based_learning
Instatistics, thek-nearest neighbors algorithm(k-NN) is anon-parametricsupervised learningmethod. It was first developed byEvelyn FixandJoseph Hodgesin 1951,[1]and later expanded byThomas Cover.[2]Most often, it is used forclassification, as ak-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among itsknearest neighbors (kis a positiveinteger, typically small). Ifk= 1, then the object is simply assigned to the class of that single nearest neighbor. Thek-NN algorithm can also be generalized forregression. Ink-NN regression, also known asnearest neighbor smoothing, the output is the property value for the object. This value is the average of the values ofknearest neighbors. Ifk= 1, then the output is simply assigned to the value of that single nearest neighbor, also known asnearest neighbor interpolation. For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, wheredis the distance to the neighbor.[3] The input consists of thekclosest training examples in adata set. The neighbors are taken from a set of objects for which the class (fork-NN classification) or the object property value (fork-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. A peculiarity (sometimes even a disadvantage) of thek-NN algorithm is its sensitivity to the local structure of the data. Ink-NN classification the function is only approximated locally and all computation is deferred until function evaluation. 
Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wisenormalizingof the training data can greatly improve its accuracy.[4] Suppose we have pairs(X1,Y1),(X2,Y2),…,(Xn,Yn){\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\dots ,(X_{n},Y_{n})}taking values inRd×{1,2}{\displaystyle \mathbb {R} ^{d}\times \{1,2\}}, whereYis the class label ofX, so thatX|Y=r∼Pr{\displaystyle X|Y=r\sim P_{r}}forr=1,2{\displaystyle r=1,2}(and probability distributionsPr{\displaystyle P_{r}}). Given some norm‖⋅‖{\displaystyle \|\cdot \|}onRd{\displaystyle \mathbb {R} ^{d}}and a pointx∈Rd{\displaystyle x\in \mathbb {R} ^{d}}, let(X(1),Y(1)),…,(X(n),Y(n)){\displaystyle (X_{(1)},Y_{(1)}),\dots ,(X_{(n)},Y_{(n)})}be a reordering of the training data such that‖X(1)−x‖≤⋯≤‖X(n)−x‖{\displaystyle \|X_{(1)}-x\|\leq \dots \leq \|X_{(n)}-x\|}. The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing thefeature vectorsand class labels of the training samples. In the classification phase,kis a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among thektraining samples nearest to that query point. A commonly used distance metric forcontinuous variablesisEuclidean distance. For discrete variables, such as for text classification, another metric can be used, such as theoverlap metric(orHamming distance). In the context of gene expression microarray data, for example,k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric.[5]Often, the classification accuracy ofk-NN can be improved significantly if the distance metric is learned with specialized algorithms such asLarge Margin Nearest NeighbororNeighbourhood components analysis. 
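The classification phase described above, together with the feature-wise normalization recommended earlier, can be sketched as follows (a minimal version; the helper name is illustrative):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    # Standardize each feature (zero mean, unit variance) so that scale
    # differences between features do not dominate the Euclidean
    # distance, then take a majority vote among the k nearest
    # training points.
    X = np.asarray(X_train, float)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    Xs = (X - mu) / sigma
    xs = (np.asarray(x, float) - mu) / sigma
    dists = np.linalg.norm(Xs - xs, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0]
```

A query near one cluster of training points is assigned that cluster's label by the vote of its k nearest neighbors.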
A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among theknearest neighbors due to their large number.[7]One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of itsknearest neighbors. The class (or value, in regression problems) of each of theknearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in aself-organizing map(SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data.K-NN can then be applied to the SOM. The best choice ofkdepends upon the data; generally, larger values ofkreduces effect of the noise on the classification,[8]but make boundaries between classes less distinct. A goodkcan be selected by variousheuristictechniques (seehyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. whenk= 1) is called the nearest neighbor algorithm. The accuracy of thek-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put intoselectingorscalingfeatures to improve classification. A particularly popular[citation needed]approach is the use ofevolutionary algorithmsto optimize feature scaling.[9]Another popular approach is to scale features by themutual informationof the training data with the training classes.[citation needed] In binary (two class) classification problems, it is helpful to choosekto be an odd number as this avoids tied votes. 
One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.[10] The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier, which assigns a point x to the class of its closest neighbour in the feature space, that is Cn1nn(x)=Y(1){\displaystyle C_{n}^{1nn}(x)=Y_{(1)}}. As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight 1/k{\displaystyle 1/k} and all others weight 0. This can be generalised to weighted nearest neighbour classifiers, in which the ith nearest neighbour is assigned a weight wni{\displaystyle w_{ni}}, with ∑i=1nwni=1{\textstyle \sum _{i=1}^{n}w_{ni}=1}. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds.[11] Let Cnwnn{\displaystyle C_{n}^{wnn}} denote the weighted nearest neighbour classifier with weights {wni}i=1n{\displaystyle \{w_{ni}\}_{i=1}^{n}}. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion[12]RR(Cnwnn)−RR(CBayes)=(B1sn2+B2tn2){1+o(1)},{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{wnn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left(B_{1}s_{n}^{2}+B_{2}t_{n}^{2}\right)\{1+o(1)\},} for constants B1{\displaystyle B_{1}} and B2{\displaystyle B_{2}}, where sn2=∑i=1nwni2{\displaystyle s_{n}^{2}=\sum _{i=1}^{n}w_{ni}^{2}} and tn=n−2/d∑i=1nwni{i1+2/d−(i−1)1+2/d}{\displaystyle t_{n}=n^{-2/d}\sum _{i=1}^{n}w_{ni}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}}. 
The optimal weighting scheme{wni∗}i=1n{\displaystyle \{w_{ni}^{*}\}_{i=1}^{n}}, that balances the two terms in the display above, is given as follows: setk∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor },wni∗=1k∗[1+d2−d2k∗2/d{i1+2/d−(i−1)1+2/d}]{\displaystyle w_{ni}^{*}={\frac {1}{k^{*}}}\left[1+{\frac {d}{2}}-{\frac {d}{2{k^{*}}^{2/d}}}\{i^{1+2/d}-(i-1)^{1+2/d}\}\right]}fori=1,2,…,k∗{\displaystyle i=1,2,\dots ,k^{*}}andwni∗=0{\displaystyle w_{ni}^{*}=0}fori=k∗+1,…,n{\displaystyle i=k^{*}+1,\dots ,n}. With optimal weights the dominant term in the asymptotic expansion of the excess risk isO(n−4d+4){\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})}. Similar results are true when using abagged nearest neighbour classifier. k-NN is a special case of avariable-bandwidth, kernel density "balloon" estimatorwith a uniformkernel.[13][14] The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximatenearest neighbor searchalgorithm makesk-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. k-NN has some strongconsistencyresults. 
As the amount of data approaches infinity, the two-classk-NN algorithm is guaranteed to yield an error rate no worse than twice theBayes error rate(the minimum achievable error rate given the distribution of the data).[2]Various improvements to thek-NN speed are possible by using proximity graphs.[15] For multi-classk-NN classification,CoverandHart(1967) prove an upper bound error rate ofR∗≤RkNN≤R∗(2−MR∗M−1){\displaystyle R^{*}\ \leq \ R_{k\mathrm {NN} }\ \leq \ R^{*}\left(2-{\frac {MR^{*}}{M-1}}\right)}whereR∗{\displaystyle R^{*}}is the Bayes error rate (which is the minimal error rate possible),RkNN{\displaystyle R_{kNN}}is the asymptotick-NN error rate, andMis the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution.[16]ForM=2{\displaystyle M=2}and as the Bayesian error rateR∗{\displaystyle R^{*}}approaches zero, this limit reduces to "not more than twice the Bayesian error rate". There are many results on the error rate of theknearest neighbour classifiers.[17]Thek-nearest neighbour classifier is strongly (that is for any joint distribution on(X,Y){\displaystyle (X,Y)})consistentprovidedk:=kn{\displaystyle k:=k_{n}}diverges andkn/n{\displaystyle k_{n}/n}converges to zero asn→∞{\displaystyle n\to \infty }. LetCnknn{\displaystyle C_{n}^{knn}}denote theknearest neighbour classifier based on a training set of sizen. Under certain regularity conditions, theexcess riskyields the following asymptotic expansion[12]RR(Cnknn)−RR(CBayes)={B11k+B2(kn)4/d}{1+o(1)},{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left\{B_{1}{\frac {1}{k}}+B_{2}\left({\frac {k}{n}}\right)^{4/d}\right\}\{1+o(1)\},}for some constantsB1{\displaystyle B_{1}}andB2{\displaystyle B_{2}}. 
The choicek∗=⌊Bn4d+4⌋{\displaystyle k^{*}=\left\lfloor Bn^{\frac {4}{d+4}}\right\rfloor }offers a trade off between the two terms in the above display, for which thek∗{\displaystyle k^{*}}-nearest neighbour error converges to the Bayes error at the optimal (minimax) rateO(n−4d+4){\displaystyle {\mathcal {O}}\left(n^{-{\frac {4}{d+4}}}\right)}. The K-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms areneighbourhood components analysisandlarge margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a newmetricorpseudo-metric. When the input data to an algorithm is too large to be processed and it is suspected to be redundant (e.g. the same measurement in both feet and meters) then the input data will be transformed into a reduced representation set of features (also named features vector). Transforming the input data into the set of features is calledfeature extraction. If the features extracted are carefully chosen it is expected that the features set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full size input. Feature extraction is performed on raw data prior to applyingk-NN algorithm on the transformed data infeature space. 
An example of a typicalcomputer visioncomputation pipeline forface recognitionusingk-NN including feature extraction and dimension reduction pre-processing steps (usually implemented withOpenCV): For high-dimensional data (e.g., with number of dimensions more than 10)dimension reductionis usually performed prior to applying thek-NN algorithm in order to avoid the effects of thecurse of dimensionality.[18] Thecurse of dimensionalityin thek-NN context basically means thatEuclidean distanceis unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same). Feature extractionand dimension reduction can be combined in one step usingprincipal component analysis(PCA),linear discriminant analysis(LDA), orcanonical correlation analysis(CCA) techniques as a pre-processing step, followed by clustering byk-NN onfeature vectorsin reduced-dimension space. This process is also called low-dimensionalembedding.[19] For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensionaltime series) running a fastapproximatek-NN search usinglocality sensitive hashing, "random projections",[20]"sketches"[21]or other high-dimensional similarity search techniques from theVLDBtoolbox might be the only feasible option. Nearest neighbor rules in effect implicitly compute thedecision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.[22] Data reductionis one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. 
Those data are called the prototypes and can be found as follows: A training example surrounded by examples of other classes is called a class outlier. Causes of class outliers include: Class outliers with k-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers, k > r > 0, a training example is called a (k,r)NN class-outlier if its k nearest neighbors include more than r examples of other classes. Condensed nearest neighbor (CNN, the Hart algorithm) is an algorithm designed to reduce the data set for k-NN classification.[23] It selects the set of prototypes U from the training data, such that 1NN with U can classify the examples almost as accurately as 1NN does with the whole data set. Given a training set X, CNN works iteratively: Use U instead of X for classification. The examples that are not prototypes are called "absorbed" points. It is efficient to scan the training examples in order of decreasing border ratio.[24] The border ratio of a training example x is defined as a(x) = ‖x′−y‖ / ‖x−y‖, where ‖x−y‖ is the distance to the closest example y having a different label than x, and ‖x′−y‖ is the distance from y to its closest example x′ with the same label as x. The border ratio is in the interval [0,1] because ‖x′−y‖ never exceeds ‖x−y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypes U. A point of a different label than x is called external to x. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point is x and its label is red. External points are blue and green. The closest external point to x is y. The closest red point to y is x′. The border ratio a(x) = ‖x′−y‖ / ‖x−y‖ is the attribute of the initial point x. Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig.
3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among the 5 nearest neighbors). Fig. 4 shows the reduced data set. The crosses are the class-outliers selected by the (3,2)NN rule (all three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet.[24] In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables.[citation needed] One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. This algorithm works as follows: The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density, and the more likely the query point is an outlier.[25] Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, works quite well in comparison to more recent and more complex approaches, according to a large-scale experimental analysis.[26] A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as the likelihood-ratio test can also be applied.
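Two of the procedures described above, Hart's condensed nearest neighbor rule and inverse-distance-weighted k-NN regression, can be sketched as follows. This is a minimal illustration in NumPy; the seeding of U with the first example and the small epsilon guard are implementation choices, not part of the published algorithms.

```python
import numpy as np

def condensed_nearest_neighbor(X, y):
    """Hart's CNN: grow a prototype set U until 1NN with U
    classifies every training example correctly."""
    U = [0]                              # seed with an arbitrary example
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            dists = np.linalg.norm(X[U] - X[i], axis=1)
            nearest = U[int(np.argmin(dists))]
            if y[nearest] != y[i]:       # misclassified: absorb into U
                U.append(i)
                changed = True
    return sorted(set(U))

def knn_regress(X, y, x, k=3, eps=1e-12):
    """k-NN regression: average of the k nearest targets, weighted
    by inverse distance (eps guards against a zero distance)."""
    dists = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + eps)
    return float(np.sum(w * y[idx]) / np.sum(w))
```

On well-separated classes, CNN typically keeps only a small fraction of the training points as prototypes, matching the 15-20% figure quoted for the example above.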
https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability.[1] (The number of buckets is much smaller than the universe of possible input items.)[1] Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items. Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).[2][3] Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.[4][5] A finite family F{\displaystyle {\mathcal {F}}} of functions h:M→S{\displaystyle h\colon M\to S} is defined to be an LSH family[1][6][7] for a metric space M{\displaystyle M} with distance d{\displaystyle d}, a threshold r>0{\displaystyle r>0}, an approximation factor c>1{\displaystyle c>1}, and probabilities p1>p2{\displaystyle p_{1}>p_{2}}, if it satisfies the following condition. For any two points a,b∈M{\displaystyle a,b\in M} and a hash function h{\displaystyle h} chosen uniformly at random from F{\displaystyle {\mathcal {F}}}: if d(a,b)≤r{\displaystyle d(a,b)\leq r}, then h(a)=h(b){\displaystyle h(a)=h(b)} with probability at least p1{\displaystyle p_{1}}; if d(a,b)≥cr{\displaystyle d(a,b)\geq cr}, then h(a)=h(b){\displaystyle h(a)=h(b)} with probability at most p2{\displaystyle p_{2}}. Such a family F{\displaystyle {\mathcal {F}}} is called (r,cr,p1,p2){\displaystyle (r,cr,p_{1},p_{2})}-sensitive. Alternatively[8] it is possible to define an LSH family on a universe of items U endowed with a similarity function ϕ:U×U→[0,1]{\displaystyle \phi \colon U\times U\to [0,1]}.
In this setting, a LSH scheme is a family ofhash functionsHcoupled with aprobability distributionDoverHsuch that a functionh∈H{\displaystyle h\in H}chosen according toDsatisfiesPr[h(a)=h(b)]=ϕ(a,b){\displaystyle Pr[h(a)=h(b)]=\phi (a,b)}for eacha,b∈U{\displaystyle a,b\in U}. Given a(d1,d2,p1,p2){\displaystyle (d_{1},d_{2},p_{1},p_{2})}-sensitive familyF{\displaystyle {\mathcal {F}}}, we can construct new familiesG{\displaystyle {\mathcal {G}}}by either the AND-construction or OR-construction ofF{\displaystyle {\mathcal {F}}}.[1] To create an AND-construction, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis constructed fromkrandom functionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}. We then say that for a hash functiong∈G{\displaystyle g\in {\mathcal {G}}},g(x)=g(y){\displaystyle g(x)=g(y)}if and only if allhi(x)=hi(y){\displaystyle h_{i}(x)=h_{i}(y)}fori=1,2,…,k{\displaystyle i=1,2,\ldots ,k}. Since the members ofF{\displaystyle {\mathcal {F}}}are independently chosen for anyg∈G{\displaystyle g\in {\mathcal {G}}},G{\displaystyle {\mathcal {G}}}is a(d1,d2,p1k,p2k){\displaystyle (d_{1},d_{2},p_{1}^{k},p_{2}^{k})}-sensitive family. To create an OR-construction, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis constructed fromkrandom functionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}. We then say that for a hash functiong∈G{\displaystyle g\in {\mathcal {G}}},g(x)=g(y){\displaystyle g(x)=g(y)}if and only ifhi(x)=hi(y){\displaystyle h_{i}(x)=h_{i}(y)}for one or more values ofi. Since the members ofF{\displaystyle {\mathcal {F}}}are independently chosen for anyg∈G{\displaystyle g\in {\mathcal {G}}},G{\displaystyle {\mathcal {G}}}is a(d1,d2,1−(1−p1)k,1−(1−p2)k){\displaystyle (d_{1},d_{2},1-(1-p_{1})^{k},1-(1-p_{2})^{k})}-sensitive family. 
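Numerically, the two constructions move the collision probabilities in opposite directions: AND drives both probabilities down (sharpening the gap between p1 and p2), while OR drives both up. A tiny sketch, with p and k purely illustrative:

```python
def and_construction(p, k):
    """Collision probability after AND-combining k independent base hashes."""
    return p ** k

def or_construction(p, k):
    """Collision probability after OR-combining k independent base hashes."""
    return 1.0 - (1.0 - p) ** k
```

For example, with p1 = 0.9 and p2 = 0.5, an AND-construction with k = 4 yields collision probabilities of about 0.656 and 0.0625, a far larger ratio than the original 0.9 : 0.5.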
LSH has been applied to several problem domains, including: One of the easiest ways to construct an LSH family is by bit sampling.[7]This approach works for theHamming distanceoverd-dimensional vectors{0,1}d{\displaystyle \{0,1\}^{d}}. Here, the familyF{\displaystyle {\mathcal {F}}}of hash functions is simply the family of all the projections of points on one of thed{\displaystyle d}coordinates, i.e.,F={h:{0,1}d→{0,1}∣h(x)=xifor somei∈{1,…,d}}{\displaystyle {\mathcal {F}}=\{h\colon \{0,1\}^{d}\to \{0,1\}\mid h(x)=x_{i}{\text{ for some }}i\in \{1,\ldots ,d\}\}}, wherexi{\displaystyle x_{i}}is thei{\displaystyle i}th coordinate ofx{\displaystyle x}. A random functionh{\displaystyle h}fromF{\displaystyle {\mathcal {F}}}simply selects a random bit from the input point. This family has the following parameters:P1=1−R/d{\displaystyle P_{1}=1-R/d},P2=1−cR/d{\displaystyle P_{2}=1-cR/d}. That is, any two vectorsx,y{\displaystyle x,y}with Hamming distance at mostR{\displaystyle R}collide under a randomh{\displaystyle h}with probability at leastP1{\displaystyle P_{1}}. Anyx,y{\displaystyle x,y}with Hamming distance at leastcR{\displaystyle cR}collide with probability at mostP2{\displaystyle P_{2}}. SupposeUis composed of subsets of some ground set of enumerable itemsSand the similarity function of interest is theJaccard indexJ. Ifπis a permutation on the indices ofS, forA⊆S{\displaystyle A\subseteq S}leth(A)=mina∈A{π(a)}{\displaystyle h(A)=\min _{a\in A}\{\pi (a)\}}. Each possible choice ofπdefines a single hash functionhmapping input sets to elements ofS. Define the function familyHto be the set of all such functions and letDbe theuniform distribution. Given two setsA,B⊆S{\displaystyle A,B\subseteq S}the event thath(A)=h(B){\displaystyle h(A)=h(B)}corresponds exactly to the event that the minimizer ofπoverA∪B{\displaystyle A\cup B}lies insideA∩B{\displaystyle A\cap B}. 
As h{\displaystyle h} was chosen uniformly at random, Pr[h(A)=h(B)]=J(A,B){\displaystyle Pr[h(A)=h(B)]=J(A,B)\,} and (H,D){\displaystyle (H,D)\,} define an LSH scheme for the Jaccard index. Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations is at least of size lcm⁡{1,2,…,n}≥en−o(n){\displaystyle \operatorname {lcm} \{\,1,2,\ldots ,n\,\}\geq e^{n-o(n)}},[20] and that this bound is tight.[21] Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k.[22] Approximate min-wise independence differs from the property by at most a fixed ε.[23] Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts.[24] The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other.
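The MinHash scheme described above can be sketched directly: each hash function is a random permutation of the ground set, and the fraction of hash collisions over many such functions estimates the Jaccard index. A minimal illustration (seeded permutations stand in for truly random ones, as discussed above):

```python
import random

def make_minhash(universe_size, seed):
    """One MinHash function: a random permutation pi of the ground set;
    h(A) is the minimum of pi(a) over a in A."""
    perm = list(range(universe_size))
    random.Random(seed).shuffle(perm)
    return lambda A: min(perm[a] for a in A)

def estimate_jaccard(A, B, universe_size, n_hashes=400):
    """Estimate J(A, B) as the fraction of MinHash collisions."""
    hits = 0
    for s in range(n_hashes):
        h = make_minhash(universe_size, seed=s)
        hits += h(A) == h(B)
    return hits / n_hashes
```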
The paper suggests that Nilsimsa satisfies three requirements: Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.[25] TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications.[18] The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar. An implementation of TLSH is available as open-source software.[26] The random projection method of LSH due to Moses Charikar,[8] called SimHash (also sometimes called arccos[27]), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.[8] The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors. Given an input vector v and a hyperplane defined by r, we let h(v)=sgn⁡(v⋅r){\displaystyle h(v)=\operatorname {sgn}(v\cdot r)}. That is, h(v)=±1{\displaystyle h(v)=\pm 1} depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function h(v){\displaystyle h(v)}. For two vectors u, v with angle θ(u,v){\displaystyle \theta (u,v)} between them, it can be shown that Pr[h(u)=h(v)]=1−θ(u,v)π{\displaystyle Pr[h(u)=h(v)]=1-{\frac {\theta (u,v)}{\pi }}}. Since the ratio between θ(u,v)π{\displaystyle {\frac {\theta (u,v)}{\pi }}} and 1−cos⁡(θ(u,v)){\displaystyle 1-\cos(\theta (u,v))} is at least 0.439 when θ(u,v)∈[0,π]{\displaystyle \theta (u,v)\in [0,\pi ]},[8][28] the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the cosine distance between them.
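The random-hyperplane hash and its collision probability 1 − θ(u,v)/π can be sketched as follows (a minimal illustration; ±1 is used for the sign bit as in the text):

```python
import numpy as np

def simhash_bit(v, r):
    """Sign bit: which side of the hyperplane with normal r does v lie on."""
    return 1 if np.dot(v, r) >= 0 else -1

def collision_probability(u, v):
    """Pr over a random hyperplane that u and v hash alike: 1 - theta/pi."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    return 1.0 - theta / np.pi
```

Orthogonal vectors (θ = π/2) collide with probability exactly 1/2; identical vectors always collide.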
The hash function[29]ha,b(υ):Rd→N{\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }}):{\mathcal {R}}^{d}\to {\mathcal {N}}}maps ad-dimensional vectorυ{\displaystyle {\boldsymbol {\upsilon }}}onto the set of integers. Each hash function in the family is indexed by a choice of randoma{\displaystyle \mathbf {a} }andb{\displaystyle b}wherea{\displaystyle \mathbf {a} }is ad-dimensional vector with entries chosen independently from astable distributionandb{\displaystyle b}is a real number chosen uniformly from the range [0,r]. For a fixeda,b{\displaystyle \mathbf {a} ,b}the hash functionha,b{\displaystyle h_{\mathbf {a} ,b}}is given byha,b(υ)=⌊a⋅υ+br⌋{\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }})=\left\lfloor {\frac {\mathbf {a} \cdot {\boldsymbol {\upsilon }}+b}{r}}\right\rfloor }. Other construction methods for hash functions have been proposed to better fit the data.[30]In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee. Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have highersemantic similarity.[31]The hashcodes are found via training of anartificial neural networkorgraphical model.[citation needed] One of the main applications of LSH is to provide a method for efficient approximatenearest neighbor searchalgorithms. Consider an LSH familyF{\displaystyle {\mathcal {F}}}. The algorithm has two main parameters: the width parameterkand the number of hash tablesL. In the first step, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis obtained by concatenatingkfunctionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}, i.e.,g(p)=[h1(p),…,hk(p)]{\displaystyle g(p)=[h_{1}(p),\ldots ,h_{k}(p)]}. In other words, a random hash functiongis obtained by concatenatingkrandomly chosen hash functions fromF{\displaystyle {\mathcal {F}}}. 
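The stable-distribution hash above amounts to a quantized random projection. A minimal sketch (the concrete a, b, r values in the usage below are illustrative; in the scheme, a is drawn from a stable distribution, e.g. Gaussian for Euclidean distance, and b uniformly from [0, r]):

```python
import numpy as np

def stable_hash(v, a, b, r):
    """E2LSH-style hash: floor((a . v + b) / r)."""
    return int(np.floor((np.dot(a, v) + b) / r))
```

Vectors whose projections onto a differ by much less than r tend to fall into the same integer slot, which is what makes the family locality-sensitive.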
The algorithm then constructsLhash tables, each corresponding to a different randomly chosen hash functiong. In the preprocessing step we hash allnd-dimensional points from the data setSinto each of theLhash tables. Given that the resulting hash tables have onlynnon-zero entries, one can reduce the amount of memory used per each hash table toO(n){\displaystyle O(n)}using standardhash functions. Given a query pointq, the algorithm iterates over theLhash functionsg. For eachgconsidered, it retrieves the data points that are hashed into the same bucket asq. The process is stopped as soon as a point within distancecRfromqis found. Given the parameterskandL, the algorithm has the following performance guarantees: For a fixed approximation ratioc=1+ϵ{\displaystyle c=1+\epsilon }and probabilitiesP1{\displaystyle P_{1}}andP2{\displaystyle P_{2}}, one can setk=⌈log⁡nlog⁡1/P2⌉{\displaystyle k=\left\lceil {\tfrac {\log n}{\log 1/P_{2}}}\right\rceil }andL=⌈P1−k⌉=O(nρP1−1){\displaystyle L=\lceil P_{1}^{-k}\rceil =O(n^{\rho }P_{1}^{-1})}, whereρ=log⁡P1log⁡P2{\displaystyle \rho ={\tfrac {\log P_{1}}{\log P_{2}}}}. Then one obtains the following performance guarantees: Whentis large, it is possible to reduce the hashing time fromO(nρ){\displaystyle O(n^{\rho })}. This was shown by[32]and[33]which gave It is also sometimes the case that the factor1/P1{\displaystyle 1/P_{1}}can be very large. This happens for example withJaccard similaritydata, where even the most similar neighbor often has a quite low Jaccard similarity with the query. In[34]it was shown how to reduce the query time toO(nρ/P11−ρ){\displaystyle O(n^{\rho }/P_{1}^{1-\rho })}(not including hashing costs) and similarly the space usage.
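The preprocessing-and-query scheme above (L tables, each keyed by a concatenation of k base hashes) can be sketched as follows. Random-hyperplane bits are used as the base family here, and all parameters and data are illustrative; a real implementation would also stop early once a point within distance cR is found.

```python
import numpy as np
from collections import defaultdict

class LSHIndex:
    """L hash tables, each keyed by a concatenation of k base hashes
    (random-hyperplane sign bits serve as the base family here)."""
    def __init__(self, dim, k=4, L=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((k, dim)) for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, t, v):
        return tuple((self.planes[t] @ v >= 0).astype(int))

    def add(self, label, v):
        for t in range(len(self.tables)):
            self.tables[t][self._key(t, v)].append((label, v))

    def candidates(self, q):
        """Labels of all indexed points sharing at least one bucket with q."""
        out = set()
        for t in range(len(self.tables)):
            for label, _ in self.tables[t].get(self._key(t, q), []):
                out.add(label)
        return out
```

Only candidate points are then compared against the query exactly, which is where the sublinear query-time guarantees quoted above come from.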
https://en.wikipedia.org/wiki/Locality_sensitive_hashing
Maximum inner-product search (MIPS) is a search problem, with a corresponding class of search algorithms which attempt to maximise the inner product between a query and the data items to be retrieved. MIPS algorithms are used in a wide variety of big data applications, including recommendation algorithms and machine learning.[1] Formally, for a database of vectors xi{\displaystyle x_{i}} defined over a set of labels S{\displaystyle S} in an inner product space with an inner product ⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle } defined on it, MIPS search can be defined as the problem of determining argmaxi∈S⁡⟨xi,q⟩{\displaystyle \operatorname {argmax} _{i\in S}\langle x_{i},q\rangle } for a given query q{\displaystyle q}. Although there is an obvious linear-time implementation, it is generally too slow to be used on practical problems. However, efficient algorithms exist to speed up MIPS search.[1][2] Under the assumption that all vectors in the set have constant norm, MIPS can be viewed as equivalent to a nearest neighbor search (NNS) problem, in which maximizing the inner product is equivalent to minimizing the corresponding distance metric in the NNS problem.[3] Like other forms of NNS, MIPS algorithms may be approximate or exact.[4] MIPS search is used as part of DeepMind's RETRO algorithm.[5]
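The linear-time baseline, and the equivalence to nearest-neighbor search under constant norms, can be checked in a few lines (the data here is illustrative):

```python
import numpy as np

def mips_bruteforce(X, q):
    """Exact MIPS by linear scan: argmax over rows i of <x_i, q>."""
    return int(np.argmax(X @ q))
```

When every row of X has the same norm, ‖x − q‖² = ‖x‖² + ‖q‖² − 2⟨x, q⟩ is minimized exactly where ⟨x, q⟩ is maximized, so MIPS and Euclidean nearest-neighbor search return the same index.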
https://en.wikipedia.org/wiki/Maximum_inner-product_search
In statistics, econometrics and related fields, multidimensional analysis (MDA) is a data analysis process that groups data into two categories: data dimensions and measurements. For example, a data set consisting of the number of wins for a single football team at each of several years is a single-dimensional (in this case, longitudinal) data set. A data set consisting of the number of wins for several football teams in a single year is also a single-dimensional (in this case, cross-sectional) data set. A data set consisting of the number of wins for several football teams over several years is a two-dimensional data set. In many disciplines, two-dimensional data sets are also called panel data.[1] While, strictly speaking, two- and higher-dimensional data sets are "multi-dimensional", the term "multidimensional" tends to be applied only to data sets with three or more dimensions.[2] For example, some forecast data sets provide forecasts for multiple target periods, conducted by multiple forecasters, and made at multiple horizons. The three dimensions provide more information than can be gleaned from two-dimensional panel data sets. Computer software for MDA includes Online analytical processing (OLAP) for data in relational databases, pivot tables for data in spreadsheets, and Array DBMSs for general multi-dimensional data (such as raster data) in science, engineering, and business.
https://en.wikipedia.org/wiki/Multidimensional_analysis
Nearest-neighbor interpolation (also known as proximal interpolation or, in some contexts, point sampling) is a simple method of multivariate interpolation in one or more dimensions. Interpolation is the problem of approximating the value of a function at a non-given point in some space, given the value of that function at points around (neighboring) that point. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.[1] The algorithm is very simple to implement and is commonly used (usually along with mipmapping) in real-time 3D rendering[2] to select color values for a textured surface. For a given set of points in space, a Voronoi diagram is a decomposition of space into cells, one for each given point, so that anywhere in space, the closest given point is inside the cell. This is equivalent to nearest neighbor interpolation, by assigning the function value at the given point to all the points inside the cell.[3] The figures on the right side show by color the shape of the cells.
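In one dimension the method reduces to a single argmin over the sample points. A minimal sketch (sample data illustrative):

```python
import numpy as np

def nn_interpolate(xs, ys, x):
    """Piecewise-constant interpolant: value of the nearest sample point."""
    xs = np.asarray(xs, dtype=float)
    return ys[int(np.argmin(np.abs(xs - x)))]
```

Note the output jumps at the midpoints between samples, which is exactly the piecewise-constant (Voronoi-cell) behavior described above.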
https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
Inbioinformatics,neighbor joiningis a bottom-up (agglomerative)clusteringmethod for the creation ofphylogenetic trees, created by Naruya Saitou andMasatoshi Neiin 1987.[1]Usually based onDNAorproteinsequencedata, the algorithm requires knowledge of the distance between each pair oftaxa(e.g., species or sequences) to create the phylogenetic tree.[2] Neighbor joining takes adistance matrix, which specifies the distance between each pair oftaxa, as input. The algorithm starts with a completely unresolved tree, whose topology corresponds to that of astar network, and iterates over the following steps, until the tree is completely resolved, and all branch lengths are known: Based on a distance matrix relating then{\displaystyle n}taxa, calculate then{\displaystyle n}xn{\displaystyle n}matrixQ{\displaystyle Q}as follows: whered(i,j){\displaystyle d(i,j)}is the distance between taxai{\displaystyle i}andj{\displaystyle j}. For each of the taxa in the pair being joined, use the following formula to calculate the distance to the new node: and: Taxaf{\displaystyle f}andg{\displaystyle g}are the paired taxa andu{\displaystyle u}is the newly created node. The branches joiningf{\displaystyle f}andu{\displaystyle u}andg{\displaystyle g}andu{\displaystyle u}, and their lengths,δ(f,u){\displaystyle \delta (f,u)}andδ(g,u){\displaystyle \delta (g,u)}are part of the tree which is gradually being created; they neither affect nor are affected by later neighbor-joining steps. For each taxon not considered in the previous step, we calculate the distance to the new node as follows: whereu{\displaystyle u}is the new node,k{\displaystyle k}is the node which we want to calculate the distance to andf{\displaystyle f}andg{\displaystyle g}are the members of the pair just joined. Neighbor joining on a set ofn{\displaystyle n}taxa requiresn−3{\displaystyle n-3}iterations. At each step one has to build and search aQ{\displaystyle Q}matrix. 
Initially theQ{\displaystyle Q}matrix is sizen×n{\displaystyle n\times n}, then the next step it is(n−1)×(n−1){\displaystyle (n-1)\times (n-1)}, etc. Implementing this in a straightforward way leads to an algorithm with a time complexity ofO(n3){\displaystyle O(n^{3})};[3]implementations exist which use heuristics to do much better than this on average.[4] Let us assume that we have five taxa(a,b,c,d,e){\displaystyle (a,b,c,d,e)}and the following distance matrixD{\displaystyle D}: We calculate theQ1{\displaystyle Q_{1}}values by equation (1). For example: We obtain the following values for theQ1{\displaystyle Q_{1}}matrix (the diagonal elements of the matrix are not used and are omitted here): In the example above,Q1(a,b)=−50{\displaystyle Q_{1}(a,b)=-50}. This is the smallest value ofQ1{\displaystyle Q_{1}}, so we join elementsa{\displaystyle a}andb{\displaystyle b}. Letu{\displaystyle u}denote the new node. By equation (2), above, the branches joininga{\displaystyle a}andb{\displaystyle b}tou{\displaystyle u}then have lengths: We then proceed to update the initial distance matrixD{\displaystyle D}into a new distance matrixD1{\displaystyle D_{1}}(see below), reduced in size by one row and one column because of the joining ofa{\displaystyle a}withb{\displaystyle b}into their neighboru{\displaystyle u}. Using equation (3) above, we compute the distance fromu{\displaystyle u}to each of the other nodes besidesa{\displaystyle a}andb{\displaystyle b}. In this case, we obtain: The resulting distance matrixD1{\displaystyle D_{1}}is: Bold values inD1{\displaystyle D_{1}}correspond to the newly calculated distances, whereas italicized values are not affected by the matrix update as they correspond to distances between elements not involved in the first joining of taxa. 
The correspondingQ2{\displaystyle Q_{2}}matrix is: We may choose either to joinu{\displaystyle u}andc{\displaystyle c}, or to joind{\displaystyle d}ande{\displaystyle e}; both pairs have the minimalQ2{\displaystyle Q_{2}}value of−28{\displaystyle -28}, and either choice leads to the same result. For concreteness, let us joinu{\displaystyle u}andc{\displaystyle c}and call the new nodev{\displaystyle v}. The lengths of the branches joiningu{\displaystyle u}andc{\displaystyle c}tov{\displaystyle v}can be calculated: The joining of the elements and the branch length calculation help drawing the neighbor joining treeas shown in the figure. The updated distance matrixD2{\displaystyle D_{2}}for the remaining 3 nodes,v{\displaystyle v},d{\displaystyle d}, ande{\displaystyle e}, is now computed: The tree topology is fully resolved at this point. However, for clarity, we can calculate theQ3{\displaystyle Q_{3}}matrix. For example: For concreteness, let us joinv{\displaystyle v}andd{\displaystyle d}and call the last nodew{\displaystyle w}. The lengths of the three remaining branches can be calculated: The neighbor joining tree is now complete,as shown in the figure. This example represents an idealized case: note that if we move from any taxon to any other along the branches of the tree, and sum the lengths of the branches traversed, the result is equal to the distance between those taxa in the input distance matrix. For example, going fromd{\displaystyle d}tob{\displaystyle b}we have2+2+3+3=10{\displaystyle 2+2+3+3=10}. A distance matrix whose distances agree in this way with some tree is said to be 'additive', a property which is rare in practice. Nonetheless it is important to note that, given an additive distance matrix as input, neighbor joining is guaranteed to find the tree whose distances between taxa agree with it. Neighbor joining may be viewed as agreedy heuristicfor thebalanced minimum evolution[5](BME) criterion. 
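The Q-matrix computation of equation (1) can be sketched as follows. The five-taxon distance matrix is shown as a figure in the original article, so the concrete values below are assumed; they are the standard ones for this example and reproduce the stated minimum Q1(a,b) = −50.

```python
import numpy as np

def q_matrix(D):
    """Equation (1): Q(i,j) = (n-2) d(i,j) - sum_k d(i,k) - sum_k d(j,k)."""
    n = D.shape[0]
    row = D.sum(axis=1)
    Q = (n - 2) * D - row[:, None] - row[None, :]
    np.fill_diagonal(Q, 0)           # diagonal entries are unused
    return Q

# Distance matrix for taxa (a, b, c, d, e); values assumed from the
# article's figure.
D = np.array([
    [0,  5,  9,  9, 8],
    [5,  0, 10, 10, 9],
    [9, 10,  0,  8, 7],
    [9, 10,  8,  0, 3],
    [8,  9,  7,  3, 0],
], dtype=float)
```

At each iteration, the pair with the smallest off-diagonal Q value is joined; here that is (a, b) with Q1(a,b) = −50, as in the worked example.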
For each topology, BME defines the tree length (sum of branch lengths) to be a particular weighted sum of the distances in the distance matrix, with the weights depending on the topology. The BME optimal topology is the one which minimizes this tree length. NJ at each step greedily joins that pair of taxa which will give the greatest decrease in the estimated tree length. This procedure does not guarantee to find the optimum for the BME criterion, although it often does and is usually quite close.[5] The main virtue of NJ is that it is fast[6]: 466as compared toleast squares,maximum parsimonyandmaximum likelihoodmethods.[6]This makes it practical for analyzing large data sets (hundreds or thousands of taxa) and forbootstrapping, for which purposes other means of analysis (e.g.maximum parsimony,maximum likelihood) may becomputationallyprohibitive. Neighbor joining has the property that if the input distance matrix is correct, then the output tree will be correct. Furthermore, the correctness of the output tree topology is guaranteed as long as the distance matrix is 'nearly additive', specifically if each entry in the distance matrix differs from the true distance by less than half of the shortest branch length in the tree.[7]In practice the distance matrix rarely satisfies this condition, but neighbor joining often constructs the correct tree topology anyway.[8]The correctness of neighbor joining for nearly additive distance matrices implies that it isstatistically consistentunder many models of evolution; given data of sufficient length, neighbor joining will reconstruct the true tree with high probability. Compared withUPGMAandWPGMA, neighbor joining has the advantage that it does not assume all lineages evolve at the same rate (molecular clock hypothesis). 
Nevertheless, neighbor joining has been largely superseded by phylogenetic methods that do not rely on distance measures and offer superior accuracy under most conditions.[citation needed] Neighbor joining has the undesirable feature that it often assigns negative lengths to some of the branches. There are many programs available implementing neighbor joining. Among implementations of canonical NJ (i.e. using the classical NJ optimisation criteria, and therefore giving the same results), RapidNJ (started 2003, major update in 2011, still updated in 2023)[9] and NINJA (started 2009, last update 2013)[10] are considered state-of-the-art. They have typical run times proportional to approximately the square of the number of taxa. Variants that deviate from the canonical algorithm include:
https://en.wikipedia.org/wiki/Neighbor_joining
Incomputer science, therange searchingproblem consists of processing a setSof objects, in order to determine which objects fromSintersect with a query object, called therange. For example, ifSis a set of points corresponding to the coordinates of several cities, find the subset of cities within a given range oflatitudesandlongitudes. The range searching problem and thedata structuresthat solve it are a fundamental topic ofcomputational geometry. Applications of the problem arise in areas such asgeographical information systems(GIS),computer-aided design(CAD) anddatabases. There are several variations of the problem, and different data structures may be necessary for different variations.[1]In order to obtain an efficient solution, several aspects of the problem need to be specified: In orthogonal range searching, the setSconsists ofn{\displaystyle n}points ind{\displaystyle d}dimensions, and the query consists of intervals in each of those dimensions. Thus, the query consists of a multi-dimensionalaxis-aligned rectangle. 
With an output size ofk{\displaystyle k},Jon Bentleyused ak-d treeto achieve (inBig O notation)O(n){\displaystyle O(n)}space andO(n1−1d+k){\displaystyle O{\big (}n^{1-{\frac {1}{d}}}+k{\big )}}query time.[2]Bentley also proposed usingrange trees, which improved query time toO(logd⁡n+k){\displaystyle O(\log ^{d}n+k)}but increased space toO(nlogd−1⁡n){\displaystyle O(n\log ^{d-1}n)}.[3]Dan Willardused downpointers, a special case offractional cascadingto reduce the query time further toO(logd−1⁡n+k){\displaystyle O(\log ^{d-1}n+k)}.[4] While the above results were achieved in thepointer machinemodel, further improvements have been made in theword RAMmodel of computationin low dimensions (2D, 3D, 4D).Bernard Chazelleused compress range trees to achieveO(log⁡n){\displaystyle O(\log n)}query time andO(n){\displaystyle O(n)}space for range counting.[5]Joseph JaJa and others later improved this query time toO(log⁡nlog⁡log⁡n){\displaystyle O\left({\dfrac {\log n}{\log \log n}}\right)}for range counting, which matches a lower bound and is thusasymptotically optimal.[6] As of 2015, the best results (in low dimensions (2D, 3D, 4D)) for range reporting found byTimothy M. Chan, Kasper Larsen, andMihai Pătrașcu, also using compressed range trees in the word RAM model of computation, are one of the following:[7] In the orthogonal case, if one of the bounds isinfinity, the query is called three-sided. If two of the bounds are infinity, the query is two-sided, and if none of the bounds are infinity, then the query is four-sided. While in static range searching the setSis known in advance,dynamicrange searching, insertions and deletions of points are allowed. In the incremental version of the problem, only insertions are allowed, whereas the decremental version only allows deletions. 
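Bentley's k-d tree approach can be sketched for the 2-D orthogonal case. This is a minimal median-split version without the rebalancing machinery behind the bounds quoted above; the query reports every point inside the axis-aligned rectangle [lo, hi], pruning a subtree only when the splitting coordinate rules it out.

```python
from collections import namedtuple

Node = namedtuple("Node", "point left right axis")

def build_kdtree(points, depth=0):
    """Build a 2-D k-d tree by median split, alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid],
                build_kdtree(points[:mid], depth + 1),
                build_kdtree(points[mid + 1:], depth + 1),
                axis)

def range_query(node, lo, hi, out):
    """Report all points p with lo[d] <= p[d] <= hi[d] on both axes."""
    if node is None:
        return
    p, axis = node.point, node.axis
    if all(lo[d] <= p[d] <= hi[d] for d in range(2)):
        out.append(p)
    if lo[axis] <= p[axis]:       # rectangle may overlap the left subtree
        range_query(node.left, lo, hi, out)
    if p[axis] <= hi[axis]:       # rectangle may overlap the right subtree
        range_query(node.right, lo, hi, out)
```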
For the orthogonal case,Kurt Mehlhornand Stefan Näher created a data structure for dynamic range searching which usesdynamic fractional cascadingto achieveO(nlog⁡n){\displaystyle O(n\log n)}space andO(log⁡nlog⁡log⁡n+k){\displaystyle O(\log n\log \log n+k)}query time.[8]Both incremental and decremental versions of the problem can be solved withO(log⁡n+k){\displaystyle O(\log n+k)}query time, but it is unknown whether general dynamic range searching can be done with that query time. The problem of colored range counting considers the case where points havecategoricalattributes. If the categories are considered as colors of points in geometric space, then a query is for how many colors appear in a particular range. Prosenjit Gupta and others described a data structure in 1995 which solved 2D orthogonal colored range counting inO(n2log2⁡n){\displaystyle O(n^{2}\log ^{2}n)}space andO(log2⁡n){\displaystyle O(\log ^{2}n)}query time.[9]This was later on generalized to higher dimensions.[10] In addition to being considered incomputational geometry, range searching, and orthogonal range searching in particular, has applications forrange queriesindatabases. Colored range searching is also used for and motivated by searching through categorical data. For example, determining the rows in a database of bank accounts which represent people whose age is between 25 and 40 and who have between $10000 and $20000 might be an orthogonal range reporting problem where age and money are two dimensions.
https://en.wikipedia.org/wiki/Range_search
Inmathematics, aVoronoi diagramis apartitionof aplaneinto regions close to each of a given set of objects. It can be classified also as atessellation. In the simplest case, these objects are just finitely many points in the plane (called seeds, sites, or generators). For each seed there is a correspondingregion, called aVoronoi cell, consisting of all points of the plane closer to that seed than to any other. The Voronoi diagram of a set of points isdualto that set'sDelaunay triangulation. The Voronoi diagram is named after mathematicianGeorgy Voronoy, and is also called aVoronoi tessellation, aVoronoi decomposition, aVoronoi partition, or aDirichlet tessellation(afterPeter Gustav Lejeune Dirichlet). Voronoi cells are also known asThiessen polygons, afterAlfred H. Thiessen.[1][2][3]Voronoi diagrams have practical and theoretical applications in many fields, mainly inscienceandtechnology, but also invisual art.[4][5] In the simplest case, shown in the first picture, we are given a finite set of points{p1,…pn}{\displaystyle \{p_{1},\dots p_{n}\}}in theEuclidean plane. In this case, each pointpk{\displaystyle p_{k}}has a corresponding cellRk{\displaystyle R_{k}}consisting of the points in the Euclidean plane for whichpk{\displaystyle p_{k}}is the nearest site: the distance topk{\displaystyle p_{k}}is less than or equal to the minimum distance to any other sitepj{\displaystyle p_{j}}. For one other sitepj{\displaystyle p_{j}}, the points that are closer topk{\displaystyle p_{k}}than topj{\displaystyle p_{j}}, or equally distant, form aclosed half-space, whose boundary is theperpendicular bisectorof line segmentpjpk{\displaystyle p_{j}p_{k}}. CellRk{\displaystyle R_{k}}is the intersection of all of thesen−1{\displaystyle n-1}half-spaces, and hence it is aconvex polygon.[6]When two cells in the Voronoi diagram share a boundary, it is aline segment,ray, or line, consisting of all the points in the plane that are equidistant to their two nearest sites. 
Theverticesof the diagram, where three or more of these boundaries meet, are the points that have three or more equally distant nearest sites. LetX{\textstyle X}be ametric spacewith distance functiond{\textstyle d}. LetK{\textstyle K}be a set of indices and let(Pk)k∈K{\textstyle (P_{k})_{k\in K}}be atuple(indexed collection) of nonemptysubsets(the sites) in the spaceX{\textstyle X}. The Voronoi cell, or Voronoi region,Rk{\textstyle R_{k}}, associated with the sitePk{\textstyle P_{k}}is the set of all points inX{\textstyle X}whose distance toPk{\textstyle P_{k}}is not greater than their distance to the other sitesPj{\textstyle P_{j}}, wherej{\textstyle j}is any index different fromk{\textstyle k}. In other words, ifd(x,A)=inf{d(x,a)∣a∈A}{\textstyle d(x,\,A)=\inf\{d(x,\,a)\mid a\in A\}}denotes the distance between the pointx{\textstyle x}and the subsetA{\textstyle A}, then Rk={x∈X∣d(x,Pk)≤d(x,Pj)for allj≠k}{\displaystyle R_{k}=\{x\in X\mid d(x,P_{k})\leq d(x,P_{j})\;{\text{for all}}\;j\neq k\}} The Voronoi diagram is simply thetupleof cells(Rk)k∈K{\textstyle (R_{k})_{k\in K}}. In principle, some of the sites can intersect and even coincide (an application is described below for sites representing shops), but usually they are assumed to be disjoint. In addition, infinitely many sites are allowed in the definition (this setting has applications ingeometry of numbersandcrystallography), but again, in many cases only finitely many sites are considered. In the particular case where the space is afinite-dimensionalEuclidean space, each site is a point, there are finitely many points and all of them are different, then the Voronoi cells areconvex polytopesand they can be represented in a combinatorial way using their vertices, sides, two-dimensional faces, etc. Sometimes the induced combinatorial structure is referred to as the Voronoi diagram. In general however, the Voronoi cells may not be convex or even connected. 
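The formal definition above translates almost verbatim into code: with d(x, A) = inf{d(x, a) : a ∈ A}, a point x lies in cell R_k exactly when d(x, P_k) ≤ d(x, P_j) for all j ≠ k. The two-point "sites" below are made up for illustration:

```python
# Direct transcription of the set-based Voronoi cell definition.
# Sites may be nonempty finite subsets, not just single points.

def d(x, y):
    """Euclidean distance between two points in the plane."""
    return ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) ** 0.5

def d_set(x, A):
    """Distance from point x to a finite site A (the inf over its points)."""
    return min(d(x, a) for a in A)

def in_cell(x, k, sites):
    """True iff x lies in the Voronoi cell R_k of site sites[k]."""
    dk = d_set(x, sites[k])
    return all(dk <= d_set(x, sites[j]) for j in range(len(sites)) if j != k)

sites = [
    [(0, 0), (0, 1)],   # P_1: a two-point site
    [(5, 0)],           # P_2: a single point
    [(0, 5), (1, 5)],   # P_3
]
assert in_cell((1, 0), 0, sites)       # (1, 0) is closest to P_1
assert in_cell((4, 1), 1, sites)       # (4, 1) is closest to P_2
assert not in_cell((4, 1), 0, sites)
```

Note that points equidistant from two sites satisfy `in_cell` for both, matching the definition's use of "not greater than".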
In the usual Euclidean space, we can rewrite the formal definition in usual terms. Each Voronoi polygon R_k is associated with a generator point P_k. Let X be the set of all points in the Euclidean space. Let P_1 be a point that generates its Voronoi region R_1, P_2 that generates R_2, P_3 that generates R_3, and so on. Then, as expressed by Tran et al.,[7] "all locations in the Voronoi polygon are closer to the generator point of that polygon than any other generator point in the Voronoi diagram in Euclidean plane". As a simple illustration, consider a group of shops in a city. Suppose we want to estimate the number of customers of a given shop. With all else being equal (price, products, quality of service, etc.), it is reasonable to assume that customers choose their preferred shop simply by distance considerations: they will go to the shop located nearest to them. In this case the Voronoi cell R_k of a given shop P_k can be used for giving a rough estimate on the number of potential customers going to this shop (which is modeled by a point in our city). For most cities, the distance between points can be measured using the familiar Euclidean distance, d((a_1, a_2), (b_1, b_2)) = √((a_1 − b_1)² + (a_2 − b_2)²), or the Manhattan distance, d((a_1, a_2), (b_1, b_2)) = |a_1 − b_1| + |a_2 − b_2|. The corresponding Voronoi diagrams look different for different distance metrics. Informal use of Voronoi diagrams can be traced back to Descartes in 1644.[10] Peter Gustav Lejeune Dirichlet used two-dimensional and three-dimensional Voronoi diagrams in his study of quadratic forms in 1850. British physician John Snow used a Voronoi-like diagram in 1854 to illustrate how the majority of people who died in the Broad Street cholera outbreak lived closer to the infected Broad Street pump than to any other water pump.
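The shop example can be sketched by assigning each point of a sample grid to its nearest shop under both metrics; the shop coordinates below are made up for illustration:

```python
# Assign grid points to their nearest site under Euclidean and Manhattan
# distance. The two assignments (hence the Voronoi cells) can differ.

def nearest_site(p, sites, dist):
    """Index of the site closest to p under the given distance function."""
    return min(range(len(sites)), key=lambda i: dist(p, sites[i]))

def euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

sites = [(0, 0), (4, 1), (1, 5)]          # three "shops"
grid = [(x, y) for x in range(6) for y in range(6)]

cells_euclid = {p: nearest_site(p, sites, euclidean) for p in grid}
cells_manhat = {p: nearest_site(p, sites, manhattan) for p in grid}

# Points whose cell changes when the metric changes:
differing = [p for p in grid if cells_euclid[p] != cells_manhat[p]]
```

Near each site both assignments agree; disagreements appear close to cell boundaries, where the metric's geometry matters.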
Voronoi diagrams are named afterGeorgy Feodosievych Voronoywho defined and studied the generaln-dimensional case in 1908.[11]Voronoi diagrams that are used ingeophysicsandmeteorologyto analyse spatially distributed data are called Thiessen polygons after American meteorologistAlfred H. Thiessen, who used them to estimate rainfall from scattered measurements in 1911. Other equivalent names for this concept (or particular important cases of it): Voronoi polyhedra, Voronoi polygons, domain(s) of influence, Voronoi decomposition, Voronoi tessellation(s), Dirichlet tessellation(s). Voronoi tessellations of regularlatticesof points in two or three dimensions give rise to many familiar tessellations. Certain body-centered tetragonal lattices give a tessellation of space withrhombo-hexagonal dodecahedra. For the set of points (x,y) withxin a discrete setXandyin a discrete setY, we get rectangular tiles with the points not necessarily at their centers. Although a normal Voronoi cell is defined as the set of points closest to a single point inS, annth-order Voronoi cell is defined as the set of points having a particular set ofnpoints inSas itsnnearest neighbors. Higher-order Voronoi diagrams also subdivide space. Higher-order Voronoi diagrams can be generated recursively. To generate thenth-order Voronoi diagram from setS, start with the (n− 1)th-order diagram and replace each cell generated byX= {x1,x2, ...,xn−1} with a Voronoi diagram generated on the setS−X. For a set ofnpoints, the (n− 1)th-order Voronoi diagram is called a farthest-point Voronoi diagram. For a given set of pointsS= {p1,p2, ...,pn}, the farthest-point Voronoi diagram divides the plane into cells in which the same point ofPis the farthest point. A point ofPhas a cell in the farthest-point Voronoi diagram if and only if it is a vertex of theconvex hullofP. 
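The nth-order Voronoi cells described above can be sampled by brute force: group points by the (unordered) set of their n nearest sites. Here n = 2, with illustrative site coordinates:

```python
# Brute-force second-order Voronoi cells: each cell is the set of sample
# points sharing the same unordered pair of two nearest sites.

def two_nearest(p, sites):
    """The unordered pair of indices of the two sites nearest to p."""
    ranked = sorted(range(len(sites)),
                    key=lambda i: (p[0] - sites[i][0]) ** 2
                                + (p[1] - sites[i][1]) ** 2)
    return frozenset(ranked[:2])

sites = [(0, 0), (4, 0), (2, 4)]
grid = [(x * 0.25, y * 0.25) for x in range(17) for y in range(17)]

cells = {}
for p in grid:
    cells.setdefault(two_nearest(p, sites), []).append(p)
# each key is a pair of sites; its value is that pair's second-order
# Voronoi cell, restricted to the sample grid
```

With three sites there are at most three pairs, so at most three second-order cells.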
Let H = {h1, h2, ..., hk} be the convex hull of P; then the farthest-point Voronoi diagram is a subdivision of the plane into k cells, one for each point in H, with the property that a point q lies in the cell corresponding to a site hi if and only if d(q, hi) > d(q, pj) for each pj ∈ P with hi ≠ pj, where d(p, q) is the Euclidean distance between two points p and q.[12][13] The boundaries of the cells in the farthest-point Voronoi diagram have the structure of a topological tree, with infinite rays as its leaves. Every finite tree is isomorphic to the tree formed in this way from a farthest-point Voronoi diagram.[14] As implied by the definition, Voronoi cells can be defined for metrics other than Euclidean, such as the Mahalanobis distance or Manhattan distance. However, in these cases the boundaries of the Voronoi cells may be more complicated than in the Euclidean case, since the equidistant locus for two points may fail to be a subspace of codimension 1, even in the two-dimensional case. A weighted Voronoi diagram is one in which the function of a pair of points defining a Voronoi cell is a distance function modified by multiplicative or additive weights assigned to the generator points. In contrast to the case of Voronoi cells defined using a distance which is a metric, in this case some of the Voronoi cells may be empty. A power diagram is a type of Voronoi diagram defined from a set of circles using the power distance; it can also be thought of as a weighted Voronoi diagram in which a weight defined from the radius of each circle is added to the squared Euclidean distance from the circle's center.[15] The Voronoi diagram of n points in d-dimensional space can have O(n^{⌈d/2⌉}) vertices, requiring the same bound for the amount of memory needed to store an explicit description of it. Therefore, Voronoi diagrams are often not feasible for moderate or high dimensions.
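The claim that only convex-hull vertices get farthest-point cells can be observed by brute force: over a grid of query points, the farthest site is always a hull vertex. The sites below are illustrative, with one deliberately interior point:

```python
# Brute-force farthest-point Voronoi sketch: only hull vertices ever win.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def farthest_site(q, sites):
    """The site farthest from query point q (squared distance suffices)."""
    return max(sites, key=lambda s: (q[0] - s[0]) ** 2 + (q[1] - s[1]) ** 2)

sites = [(0, 0), (6, 0), (3, 6), (3, 2)]      # (3, 2) is interior
hull = set(convex_hull(sites))
queries = [(x * 0.5, y * 0.5) for x in range(-8, 21) for y in range(-8, 21)]
winners = {farthest_site(q, sites) for q in queries}
# winners is exactly the hull: the interior site (3, 2) never has a cell
```

This matches the statement above: a point of P has a farthest-point cell if and only if it is a vertex of the convex hull of P.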
A more space-efficient alternative is to use approximate Voronoi diagrams.[16] Voronoi diagrams are also related to other geometric structures such as the medial axis (which has found applications in image segmentation, optical character recognition, and other computational applications), the straight skeleton, and zone diagrams. In meteorology and engineering hydrology, Voronoi diagrams are used to find the weights for precipitation data of stations over an area (watershed). The points generating the polygons are the various stations that record precipitation data. Perpendicular bisectors are drawn to the line joining any two stations, which results in the formation of polygons around the stations. The area A_i of the polygon touching a station point is known as the influence area of the station. The average precipitation is calculated by the formula P̄ = (Σ A_i P_i) / (Σ A_i). Several efficient algorithms are known for constructing Voronoi diagrams, either directly (as the diagram itself) or indirectly by starting with a Delaunay triangulation and then obtaining its dual. Direct algorithms include Fortune's algorithm, an O(n log n) algorithm for generating a Voronoi diagram from a set of points in a plane. The Bowyer–Watson algorithm, an O(n log n) to O(n^2) algorithm for generating a Delaunay triangulation in any number of dimensions, can be used in an indirect algorithm for the Voronoi diagram. The Jump Flooding Algorithm can generate approximate Voronoi diagrams in constant time and is suited for use on commodity graphics hardware.[42][43] Lloyd's algorithm and its generalization via the Linde–Buzo–Gray algorithm (a.k.a. k-means clustering) use the construction of Voronoi diagrams as a subroutine. These methods alternate between steps in which one constructs the Voronoi diagram for a set of seed points, and steps in which the seed points are moved to new locations that are more central within their cells.
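The Thiessen-polygon rainfall estimate above is just an area-weighted mean, P̄ = Σ(A_i P_i) / Σ(A_i); station areas and precipitation values below are made up for illustration:

```python
# Area-weighted (Thiessen) average precipitation over a watershed.

def thiessen_average(areas, precip):
    """Weighted mean: sum(A_i * P_i) / sum(A_i)."""
    assert len(areas) == len(precip)
    total_area = sum(areas)
    return sum(a * p for a, p in zip(areas, precip)) / total_area

areas = [12.0, 8.0, 20.0]     # influence area of each station (km^2)
precip = [30.0, 45.0, 25.0]   # rainfall recorded at each station (mm)
mean = thiessen_average(areas, precip)
# (12*30 + 8*45 + 20*25) / 40 = 1220 / 40 = 30.5 mm
```

Stations with larger influence areas dominate the estimate, which is the point of the construction.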
These methods can be used in spaces of arbitrary dimension to iteratively converge towards a specialized form of the Voronoi diagram, called aCentroidal Voronoi tessellation, where the sites have been moved to points that are also the geometric centers of their cells. Voronoi meshes can also be generated in 3D.
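The Lloyd relaxation loop described above can be sketched on a discretized plane: assign sample points to their nearest seed, then move each seed to the centroid of its cell. The seed positions and grid are arbitrary illustrative choices:

```python
# One style of Lloyd iteration over a discrete set of sample points.

def lloyd_step(seeds, samples):
    """Assign samples to nearest seed, then move seeds to cell centroids."""
    cells = {i: [] for i in range(len(seeds))}
    for p in samples:
        i = min(range(len(seeds)),
                key=lambda j: (p[0] - seeds[j][0]) ** 2
                            + (p[1] - seeds[j][1]) ** 2)
        cells[i].append(p)
    new_seeds = []
    for i, pts in cells.items():
        if pts:
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            new_seeds.append((cx, cy))
        else:
            new_seeds.append(seeds[i])   # empty cell: leave seed in place
    return new_seeds

samples = [(x / 10, y / 10) for x in range(11) for y in range(11)]
seeds = [(0.1, 0.1), (0.9, 0.8)]
for _ in range(20):
    seeds = lloyd_step(seeds, samples)
# after relaxation each seed sits near the centroid of its own cell,
# approximating a centroidal Voronoi tessellation of the sampled square
```

Iterating until the seeds stop moving yields (a discrete approximation of) the centroidal Voronoi tessellation mentioned above.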
https://en.wikipedia.org/wiki/Voronoi_diagram
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a "brief oscillation". A taxonomy of wavelets has been established, based on the number and direction of their pulses. Wavelets are imbued with specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications. As a mathematical tool, wavelets can be used to extract information from many kinds of data, including audio signals and images. Sets of wavelets are needed to analyze data fully. "Complementary" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states.
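The middle-C example above can be sketched numerically: correlate a short Gaussian-windowed oscillation at middle C against a signal whose second half contains middle C. All parameters (sample rate, durations, frequencies) are illustrative choices, not values from the text:

```python
# "Middle C detector": the correlation magnitude of a short wavelet at
# ~261.63 Hz peaks where that note is present in the signal.
import math

RATE = 4000                 # samples per second
F_C = 261.63                # middle C, Hz

def tone(freq, start, stop):
    return [math.sin(2 * math.pi * freq * n / RATE)
            for n in range(int(start * RATE), int(stop * RATE))]

signal = tone(330.0, 0.0, 0.5) + tone(F_C, 0.5, 1.0)   # E4 then C4

# Complex "wavelet": windowed cosine + i*sine, so the response is
# insensitive to the phase of the note; duration 50 ms.
W = int(0.05 * RATE)
win = [math.exp(-0.5 * ((n - W / 2) / (W / 6)) ** 2) for n in range(W)]
wav_re = [w * math.cos(2 * math.pi * F_C * n / RATE) for n, w in enumerate(win)]
wav_im = [w * math.sin(2 * math.pi * F_C * n / RATE) for n, w in enumerate(win)]

def response(center):
    """Correlation magnitude of the wavelet centred at a sample index."""
    s = signal[center - W // 2: center - W // 2 + W]
    re = sum(a * b for a, b in zip(s, wav_re))
    im = sum(a * b for a, b in zip(s, wav_im))
    return math.hypot(re, im)

quiet = response(int(0.25 * RATE))   # inside the E4 segment
loud = response(int(0.75 * RATE))    # inside the C4 segment
# loud is much larger than quiet: the wavelet "finds" middle C
```

Sliding `response` over every sample index would produce the full correlation trace the text describes.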
Inclassical physics, the diffraction phenomenon is described by theHuygens–Fresnel principlethat treats each point in a propagatingwavefrontas a collection of individual spherical wavelets.[1]The characteristic bending pattern is most pronounced when a wave from acoherentsource (such as a laser) encounters a slit/aperture that is comparable in size to itswavelength. This is due to the addition, orinterference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. Multiple,closely spaced openings(e.g., adiffraction grating), can result in a complex pattern of varying intensity. The wordwavelethas been used for decades in digital signal processing and exploration geophysics.[2]The equivalentFrenchwordondelettemeaning "small wave" was used byJean MorletandAlex Grossmannin the early 1980s. Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms oftime-frequency representationforcontinuous-time(analog) signals and so are related toharmonic analysis.[3]Discrete wavelet transform (continuous in time) of adiscrete-time(sampled) signal by usingdiscrete-timefilterbanksof dyadic (octave band) configuration is a wavelet approximation to that signal. The coefficients of such a filter bank are called the shift and scaling coefficients in wavelets nomenclature. These filterbanks may contain eitherfinite impulse response(FIR) orinfinite impulse response(IIR) filters. The wavelets forming acontinuous wavelet transform(CWT) are subject to theuncertainty principleof Fourier analysis respective sampling theory:[4]given a signal with some event in it, one cannot assign simultaneously an exact time and frequency response scale to that event. The product of the uncertainties of time and frequency response scale has a lower bound. 
Thus, in thescaleogramof a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle.[5][6][7][8] Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based. Incontinuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of theLpfunction spaceL2(R) ). For instance the signal may be represented on every frequency band of the form [f, 2f] for all positive frequenciesf> 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components. The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ inL2(R), themother wavelet. For the example of the scale one frequency band [1, 2] this function isψ(t)=2sinc⁡(2t)−sinc⁡(t)=sin⁡(2πt)−sin⁡(πt)πt{\displaystyle \psi (t)=2\,\operatorname {sinc} (2t)-\,\operatorname {sinc} (t)={\frac {\sin(2\pi t)-\sin(\pi t)}{\pi t}}}with the (normalized)sinc function. That, Meyer's, and two other examples of mother wavelets are: The subspace of scaleaor frequency band [1/a, 2/a] is generated by the functions (sometimes calledchild wavelets)ψa,b(t)=1aψ(t−ba),{\displaystyle \psi _{a,b}(t)={\frac {1}{\sqrt {a}}}\psi \left({\frac {t-b}{a}}\right),}whereais positive and defines the scale andbis any real number and defines the shift. The pair (a,b) defines a point in the right halfplaneR+×R. 
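A quick numerical check of the child-wavelet construction above: ψ_{a,b}(t) = (1/√a) ψ((t − b)/a) has the same L² norm as the mother wavelet for every scale a and shift b. As mother wavelet we use the Mexican hat ψ(t) = (1 − t²)e^{−t²/2}, a standard example chosen here only because the Shannon wavelet in the text decays too slowly for simple numeric integration:

```python
# The 1/sqrt(a) factor in the child wavelet exactly compensates dilation,
# so ||psi_{a,b}|| = ||psi|| for all (a, b).
import math

def psi(t):
    """Mexican hat mother wavelet (unnormalized)."""
    return (1 - t * t) * math.exp(-t * t / 2)

def child(a, b):
    """psi_{a,b}(t) = psi((t - b)/a) / sqrt(a)."""
    return lambda t: psi((t - b) / a) / math.sqrt(a)

def l2_norm(f, lo=-50.0, hi=50.0, n=100000):
    """Midpoint-rule approximation of (integral of f(t)^2 dt)^(1/2)."""
    h = (hi - lo) / n
    return math.sqrt(sum(f(lo + (i + 0.5) * h) ** 2 for i in range(n)) * h)

base = l2_norm(psi)
for a, b in [(0.5, 0.0), (2.0, 1.0), (4.0, -3.0)]:
    assert abs(l2_norm(child(a, b)) - base) < 1e-3
# every (a, b) gives the same norm as the mother wavelet
```

The same invariance is what makes the scaleogram comparable across scales.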
The projection of a functionxonto the subspace of scaleathen has the formxa(t)=∫RWTψ{x}(a,b)⋅ψa,b(t)db{\displaystyle x_{a}(t)=\int _{\mathbb {R} }WT_{\psi }\{x\}(a,b)\cdot \psi _{a,b}(t)\,db}withwavelet coefficientsWTψ{x}(a,b)=⟨x,ψa,b⟩=∫Rx(t)ψa,b(t)dt.{\displaystyle WT_{\psi }\{x\}(a,b)=\langle x,\psi _{a,b}\rangle =\int _{\mathbb {R} }x(t){\psi _{a,b}(t)}\,dt.} For the analysis of the signalx, one can assemble the wavelet coefficients into ascaleogramof the signal. See a list of someContinuous wavelets. It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper halfplane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is theaffinesystem for some real parametersa> 1,b> 0. The corresponding discrete subset of the halfplane consists of all the points (am,nb am) withm,ninZ. The correspondingchild waveletsare now given asψm,n(t)=1amψ(t−nbamam).{\displaystyle \psi _{m,n}(t)={\frac {1}{\sqrt {a^{m}}}}\psi \left({\frac {t-nba^{m}}{a^{m}}}\right).} A sufficient condition for the reconstruction of any signalxof finite energy by the formulax(t)=∑m∈Z∑n∈Z⟨x,ψm,n⟩⋅ψm,n(t){\displaystyle x(t)=\sum _{m\in \mathbb {Z} }\sum _{n\in \mathbb {Z} }\langle x,\,\psi _{m,n}\rangle \cdot \psi _{m,n}(t)}is that the functions{ψm,n:m,n∈Z}{\displaystyle \{\psi _{m,n}:m,n\in \mathbb {Z} \}}form anorthonormal basisofL2(R). In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. In special situations this numerical complexity can be avoided if the scaled and shifted wavelets form amultiresolution analysis. This means that there has to exist anauxiliary function, thefather waveletφ inL2(R), and thatais an integer. A typical choice isa= 2 andb= 1. 
The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet. Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journe wavelet admits no multiresolution analysis.[9] From the mother and father wavelets one constructs the subspaces V_m = span(φ_{m,n} : n ∈ Z), where φ_{m,n}(t) = 2^{−m/2} φ(2^{−m}t − n), and W_m = span(ψ_{m,n} : n ∈ Z), where ψ_{m,n}(t) = 2^{−m/2} ψ(2^{−m}t − n). The father-wavelet spaces V_i capture the time-domain properties, while the mother-wavelet spaces W_i capture the frequency-domain properties. From these it is required that the sequence {0} ⊂ ⋯ ⊂ V_1 ⊂ V_0 ⊂ V_{−1} ⊂ V_{−2} ⊂ ⋯ ⊂ L²(R) forms a multiresolution analysis of L², and that the subspaces ..., W_1, W_0, W_{−1}, ... are the orthogonal "differences" of the above sequence, that is, W_m is the orthogonal complement of V_m inside the subspace V_{m−1}: V_m ⊕ W_m = V_{m−1}. In analogy to the sampling theorem one may conclude that the space V_m with sampling distance 2^m more or less covers the frequency baseband from 0 to 2^{−m−1}. As orthogonal complement, W_m roughly covers the band [2^{−m−1}, 2^{−m}].
From those inclusions and orthogonality relations, especiallyV0⊕W0=V−1{\displaystyle V_{0}\oplus W_{0}=V_{-1}}, follows the existence of sequencesh={hn}n∈Z{\displaystyle h=\{h_{n}\}_{n\in \mathbb {Z} }}andg={gn}n∈Z{\displaystyle g=\{g_{n}\}_{n\in \mathbb {Z} }}that satisfy the identitiesgn=⟨ϕ0,0,ϕ−1,n⟩{\displaystyle g_{n}=\langle \phi _{0,0},\,\phi _{-1,n}\rangle }so thatϕ(t)=2∑n∈Zgnϕ(2t−n),{\textstyle \phi (t)={\sqrt {2}}\sum _{n\in \mathbb {Z} }g_{n}\phi (2t-n),}andhn=⟨ψ0,0,ϕ−1,n⟩{\displaystyle h_{n}=\langle \psi _{0,0},\,\phi _{-1,n}\rangle }so thatψ(t)=2∑n∈Zhnϕ(2t−n).{\textstyle \psi (t)={\sqrt {2}}\sum _{n\in \mathbb {Z} }h_{n}\phi (2t-n).}The second identity of the first pair is arefinement equationfor the father wavelet φ. Both pairs of identities form the basis for the algorithm of thefast wavelet transform. From the multiresolution analysis derives the orthogonal decomposition of the spaceL2asL2=Vj0⊕Wj0⊕Wj0−1⊕Wj0−2⊕Wj0−3⊕⋯{\displaystyle L^{2}=V_{j_{0}}\oplus W_{j_{0}}\oplus W_{j_{0}-1}\oplus W_{j_{0}-2}\oplus W_{j_{0}-3}\oplus \cdots }For any signal or functionS∈L2{\displaystyle S\in L^{2}}this gives a representation in basis functions of the corresponding subspaces asS=∑kcj0,kϕj0,k+∑j≤j0∑kdj,kψj,k{\displaystyle S=\sum _{k}c_{j_{0},k}\phi _{j_{0},k}+\sum _{j\leq j_{0}}\sum _{k}d_{j,k}\psi _{j,k}}where the coefficients arecj0,k=⟨S,ϕj0,k⟩{\displaystyle c_{j_{0},k}=\langle S,\phi _{j_{0},k}\rangle }anddj,k=⟨S,ψj,k⟩.{\displaystyle d_{j,k}=\langle S,\psi _{j,k}\rangle .} For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future as well as that minimal temporal latencies can be obtained. Time-causal wavelets representations have been developed by Szu et al[10]and Lindeberg,[11]with the latter method also involving a memory-efficient time-recursive implementation. 
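The fast wavelet transform built on these identities can be sketched with the simplest filter pair, the Haar wavelet (the text's Daubechies 4-tap example works the same way with longer filters). One analysis step maps a signal of length 2n to n scaling coefficients c and n detail coefficients d; the synthesis step inverts it exactly:

```python
# Haar fast wavelet transform: repeated analysis steps implement the
# decomposition S = sum c*phi + sum d*psi; the inverse steps reconstruct
# the signal exactly. The sample signal is arbitrary.
import math

S2 = math.sqrt(2.0)

def haar_step(x):
    """One analysis step: x (even length) -> (approximation c, detail d)."""
    c = [(x[2 * i] + x[2 * i + 1]) / S2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / S2 for i in range(len(x) // 2)]
    return c, d

def haar_inverse_step(c, d):
    """Invert one step: interleave (c+d)/sqrt(2) and (c-d)/sqrt(2)."""
    x = []
    for ci, di in zip(c, d):
        x += [(ci + di) / S2, (ci - di) / S2]
    return x

def fwt(x):
    """Full decomposition: list of detail levels plus final approximation."""
    details = []
    while len(x) > 1:
        x, d = haar_step(x)
        details.append(d)
    return details, x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
details, approx = fwt(signal)

# Perfect reconstruction: run the inverse steps in reverse order.
x = approx
for d in reversed(details):
    x = haar_inverse_step(x, d)
assert all(abs(a - b) < 1e-12 for a, b in zip(x, signal))
```

Each analysis step costs O(length), and the lengths halve, so the whole transform is O(N), foreshadowing the complexity comparison with the FFT below.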
For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of thespaceL1(R)∩L2(R).{\displaystyle L^{1}(\mathbb {R} )\cap L^{2}(\mathbb {R} ).}This is the space ofLebesgue measurablefunctions that are bothabsolutely integrableandsquare integrablein the sense that∫−∞∞|ψ(t)|dt<∞{\displaystyle \int _{-\infty }^{\infty }|\psi (t)|\,dt<\infty }and∫−∞∞|ψ(t)|2dt<∞.{\displaystyle \int _{-\infty }^{\infty }|\psi (t)|^{2}\,dt<\infty .} Being in this space ensures that one can formulate the conditions of zero mean and square norm one:∫−∞∞ψ(t)dt=0{\displaystyle \int _{-\infty }^{\infty }\psi (t)\,dt=0}is the condition for zero mean, and∫−∞∞|ψ(t)|2dt=1{\displaystyle \int _{-\infty }^{\infty }|\psi (t)|^{2}\,dt=1}is the condition for square norm one. Forψto be a wavelet for thecontinuous wavelet transform(see there for exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform. For thediscrete wavelet transform, one needs at least the condition that thewavelet seriesis a representation of the identity in thespaceL2(R). Most constructions of discrete WT make use of themultiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation. In most situations it is useful to restrict ψ to be a continuous function with a higher numberMof vanishing moments, i.e. 
for all integers m < M: ∫_{−∞}^{∞} t^m ψ(t) dt = 0. The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation): ψ_{a,b}(t) = (1/√a) ψ((t − b)/a). For the continuous WT, the pair (a, b) varies over the full half-plane R_+ × R; for the discrete WT this pair varies over a discrete subset of it, which is also called the affine group. These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. Time-frequency interpretation uses a subtly different formulation (after Delprat). The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. In fact, the Fourier transform can be viewed as a special case of the continuous wavelet transform with the choice of the mother wavelet ψ(t) = e^{−2πit}. The main difference in general is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The short-time Fourier transform (STFT) is similar to the wavelet transform, in that it is also time and frequency localized, but there are issues with the frequency/time resolution trade-off. In particular, assuming a rectangular window region, one may think of the STFT as a transform with a slightly different kernel ψ(t) = g(t − u) e^{−2πit}, where g(t − u) can often be written as rect((t − u)/Δ_t), where Δ_t and u respectively denote the length and temporal offset of the windowing function.
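The zero-mean, unit-norm, and vanishing-moment conditions stated above can be checked numerically. Here we use the normalized Mexican hat wavelet ψ(t) = (2/(√3·π^{1/4}))(1 − t²)e^{−t²/2}, a standard example with exactly M = 2 vanishing moments (this specific wavelet is our choice, not one singled out by the text):

```python
# Numerical verification of zero mean, unit L2 norm, and vanishing
# moments for the normalized Mexican hat wavelet.
import math

C = 2.0 / (math.sqrt(3.0) * math.pi ** 0.25)   # makes ||psi||_2 = 1

def psi(t):
    return C * (1 - t * t) * math.exp(-t * t / 2)

def integrate(f, lo=-20.0, hi=20.0, n=100000):
    """Midpoint-rule integral; the tails beyond +/-20 are negligible."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

mean = integrate(psi)                           # integral of psi   -> 0
norm_sq = integrate(lambda t: psi(t) ** 2)      # integral of psi^2 -> 1
moment1 = integrate(lambda t: t * psi(t))       # first moment      -> 0
moment2 = integrate(lambda t: t * t * psi(t))   # second moment: nonzero

assert abs(mean) < 1e-4 and abs(moment1) < 1e-4
assert abs(norm_sq - 1.0) < 1e-4
assert abs(moment2) > 0.1   # only m = 0, 1 vanish: M = 2 for this wavelet
```

Higher M (as with higher-order Daubechies wavelets) means polynomials up to degree M − 1 produce zero detail coefficients, which is why vanishing moments matter for compression.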
UsingParseval's theorem, one may define the wavelet's energy asE=∫−∞∞|ψ(t)|2dt=12π∫−∞∞|ψ^(ω)|2dω{\displaystyle E=\int _{-\infty }^{\infty }|\psi (t)|^{2}\,dt={\frac {1}{2\pi }}\int _{-\infty }^{\infty }|{\hat {\psi }}(\omega )|^{2}\,d\omega }From this, the square of the temporal support of the window offset by timeuis given byσu2=1E∫|t−u|2|ψ(t)|2dt{\displaystyle \sigma _{u}^{2}={\frac {1}{E}}\int |t-u|^{2}|\psi (t)|^{2}\,dt} and the square of the spectral support of the window acting on a frequencyξ{\displaystyle \xi }σ^ξ2=12πE∫|ω−ξ|2|ψ^(ω)|2dω{\displaystyle {\hat {\sigma }}_{\xi }^{2}={\frac {1}{2\pi E}}\int |\omega -\xi |^{2}|{\hat {\psi }}(\omega )|^{2}\,d\omega } Multiplication with a rectangular window in the time domain corresponds to convolution with asinc⁡(Δtω){\displaystyle \operatorname {sinc} (\Delta _{t}\omega )}function in the frequency domain, resulting in spuriousringing artifactsfor short/localized temporal windows. With the continuous-time Fourier transform,Δt→∞{\displaystyle \Delta _{t}\to \infty }and this convolution is with a delta function in Fourier space, resulting in the true Fourier transform of the signalx(t){\displaystyle x(t)}. The window function may be some otherapodizing filter, such as aGaussian. The choice of windowing function will affect the approximation error relative to the true Fourier transform. A given resolution cell's time-bandwidth product may not be exceeded with the STFT. All STFT basis elements maintain a uniform spectral and temporal support for all temporal shifts or offsets, thereby attaining an equal resolution in time for lower and higher frequencies. The resolution is purely determined by the sampling width. In contrast, the wavelet transform'smultiresolutionalproperties enables large temporal supports for lower frequencies while maintaining short temporal widths for higher frequencies by the scaling properties of the wavelet transform. 
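The supports σ_u and σ̂_ξ defined above can be computed for a concrete window. A convenient trick (our choice, justified by Parseval's theorem) is that ∫ω²|ĝ(ω)|²dω = 2π∫|g′(t)|²dt, so both supports are computable in the time domain; for the Gaussian g(t) = e^{−t²/2} their product attains the uncertainty lower bound 1/2:

```python
# Time and frequency supports of a Gaussian window, computed entirely in
# the time domain via Parseval's theorem; their product is exactly 1/2.
import math

def g(t):
    return math.exp(-t * t / 2)

def dg(t):
    """g'(t) = -t * exp(-t^2 / 2)."""
    return -t * math.exp(-t * t / 2)

def integrate(f, lo=-20.0, hi=20.0, n=100000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

E = integrate(lambda t: g(t) ** 2)                   # energy
var_t = integrate(lambda t: t * t * g(t) ** 2) / E   # sigma_u^2 (u = 0)
var_w = integrate(lambda t: dg(t) ** 2) / E          # sigma_xi^2 (xi = 0)

product = math.sqrt(var_t * var_w)
# the Gaussian saturates the time-frequency uncertainty bound: 1/2
```

Any other window (e.g. a rectangle) gives a strictly larger product, which is the quantitative content of the trade-off discussed above.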
This property extends conventional time-frequency analysis into time-scale analysis.[12] The discrete wavelet transform is less computationallycomplex, takingO(N)time as compared to O(NlogN) for thefast Fourier transform(FFT). This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT which uses the same basis functions as the discrete Fourier transform (DFT).[13]This complexity only applies when the filter size has no relation to the signal size. A wavelet withoutcompact supportsuch as theShannon waveletwould require O(N2). (For instance, a logarithmic Fourier Transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.[14]) A wavelet (or a wavelet family) can be defined in various ways: An orthogonal wavelet is entirely defined by the scaling filter – a low-passfinite impulse response(FIR) filter of length 2Nand sum 1. Inbiorthogonalwavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets the high pass filter is calculated as thequadrature mirror filterof the low pass, and reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter. Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and scaling function φ(t) (also called father wavelet) in the time domain. The wavelet function is in effect a band-pass filter and scaling that for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See[15]for a detailed explanation. 
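The quadrature-mirror-filter relation mentioned above can be made concrete with the Daubechies 4-tap scaling filter. Here the coefficients are normalized so that Σg² = 1 (other texts normalize so the coefficients sum to 1, as in the passage above); the high-pass filter is the alternating-sign time reverse of the low-pass filter:

```python
# Deriving the Daubechies-4 wavelet (high-pass) filter from its scaling
# (low-pass) filter via the QMF relation, then checking orthonormality.
import math

s3, s2 = math.sqrt(3.0), math.sqrt(2.0)
g = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]   # D4 low-pass (scaling)

# QMF: h[n] = (-1)^n * g[len(g) - 1 - n]  (time reverse, alternate signs)
h = [((-1) ** n) * g[len(g) - 1 - n] for n in range(len(g))]

unit_norm = sum(c * c for c in g)                     # -> 1
shift_orth = sum(g[n] * g[n + 2] for n in range(2))   # -> 0
cross_orth = sum(a * b for a, b in zip(g, h))         # -> 0
assert abs(unit_norm - 1.0) < 1e-12
assert abs(shift_orth) < 1e-12 and abs(cross_orth) < 1e-12
```

These three identities (unit norm, orthogonality to even shifts, orthogonality between low- and high-pass) are exactly what makes the resulting wavelet basis orthonormal.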
For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filterg. Meyer wavelets can be defined by scaling functions The wavelet only has a time domain representation as the wavelet function ψ(t). For instance,Mexican hat waveletscan be defined by a wavelet function. See a list of a fewcontinuous wavelets. The development of wavelets can be linked to several separate trains of thought, starting withAlfréd Haar's work in the early 20th century. Later work byDennis GaboryieldedGabor atoms(1946), which are constructed similarly to wavelets, and applied to similar purposes. Notable contributions to wavelet theory since then can be attributed toGeorge Zweig’s discovery of thecontinuous wavelet transform(CWT) in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound),[16]Pierre Goupillaud,Alex GrossmannandJean Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work ondiscrete wavelets(1983), the Le Gall–Tabatabai (LGT) 5/3-taps non-orthogonal filter bank with linear phase (1988),[17][18][19]Ingrid Daubechies' orthogonal wavelets with compact support (1988),Stéphane Mallat's non-orthogonal multiresolution framework (1989),Ali Akansu'sbinomial QMF(1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland'sharmonic wavelet transform(1993), andset partitioning in hierarchical trees(SPIHT) developed by Amir Said with William A. Pearlman in 1996.[20] TheJPEG 2000standard was developed from 1997 to 2000 by aJoint Photographic Experts Group(JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president).[21]In contrast to the DCT algorithm used by the originalJPEGformat, JPEG 2000 instead usesdiscrete wavelet transform(DWT) algorithms. 
It uses theCDF9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for itslossy compressionalgorithm, and the Le Gall–Tabatabai (LGT) 5/3 discrete-time filter bank (developed by Didier Le Gall and Ali J. Tabatabai in 1988) for itslossless compressionalgorithm.[22]JPEG 2000technology, which includes theMotion JPEG 2000extension, was selected as thevideo coding standardfordigital cinemain 2004.[23] A wavelet is a mathematical function used to divide a given function orcontinuous-time signalinto different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets arescaledandtranslatedcopies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditionalFourier transformsfor representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodicand/or non-stationarysignals. Wavelet transforms are classified intodiscrete wavelet transforms(DWTs) andcontinuous wavelet transforms(CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid. There are a large number of wavelet transforms each suitable for different applications. For a full list seelist of wavelet-related transformsbut the common ones are listed below: There are a number of generalized transforms of which the wavelet transform is a special case. For example, Yosef Joseph Segman introduced scale into theHeisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. 
The CWT is a two-dimensional slice through the resulting 3D time-scale-frequency volume. Another example of a generalized transform is the chirplet transform, in which the CWT is also a two-dimensional slice through the chirplet transform. An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects.[24] Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructures of all sorts, the range of pattern recognition[25] and strain[26]/metrology[27] applications for intermediate transforms with high frequency resolution (like brushlets[28] and ridgelets[29]) is growing rapidly. The fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane.[30] Generally, an approximation to the DWT is used for data compression if a signal is already sampled, and the CWT for signal analysis.[31][32] Thus, the DWT approximation is commonly used in engineering and computer science,[33] and the CWT in scientific research.[34] Like some other transforms, wavelet transforms can be used to transform data and then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets.
This means that although the frame is overcomplete, it is a tight frame (see types of frames of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression. A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components, smoothing and/or denoising operations can be performed. Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems).[35] Often, signals can be represented well as a sum of sinusoids. However, consider a non-continuous signal with an abrupt discontinuity; this signal can still be represented as a sum of sinusoids, but requires an infinite number of them, an observation known as the Gibbs phenomenon. Requiring an infinite number of Fourier coefficients is not practical for many applications, such as compression. Wavelets are more useful for describing these signals with discontinuities because of their time-localized behavior (both Fourier and wavelet transforms are frequency-localized, but wavelets have an additional time-localization property). Because of this, many types of signals in practice may be non-sparse in the Fourier domain, but very sparse in the wavelet domain. This is particularly useful in signal reconstruction, especially in the recently popular field of compressed sensing.
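The sparsity claim above can be illustrated numerically with a hand-rolled orthonormal Haar transform; a minimal sketch (the step signal, its length, and the 1e-8 "nonzero" cutoff are illustrative choices, not from the text):

```python
import numpy as np

def haar_step(a):
    """One level of the orthonormal Haar wavelet transform."""
    a = np.asarray(a, dtype=float)
    approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # local averages
    detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # local differences
    return approx, detail

def haar_coeffs(x):
    """Full multilevel Haar decomposition of a length-2^k signal."""
    approx, details = np.asarray(x, dtype=float), []
    while len(approx) > 1:
        approx, d = haar_step(approx)
        details.append(d)
    return np.concatenate([approx] + details[::-1])

# A signal with one abrupt discontinuity (a step).
x = np.concatenate([np.zeros(32), np.ones(32)])

wav = haar_coeffs(x)
fou = np.fft.fft(x)

nnz = lambda c: int(np.sum(np.abs(c) > 1e-8))
print(nnz(wav), nnz(fou))   # far fewer nonzero wavelet coefficients
```

For this step signal the Haar representation concentrates all the energy in two coefficients, while the Fourier representation spreads it over the DC term and every odd harmonic, which is exactly the sparsity contrast the text describes.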
(Note that the short-time Fourier transform (STFT) is also localized in time and frequency, but there are often problems with the frequency-time resolution trade-off. Wavelets are better signal representations because of multiresolution analysis.) This motivates why wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, chaos theory,[36] ab initio calculations, astrophysics, gravitational wave transient data analysis,[37][38] density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, EEG, EMG,[39] ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, human sexual response analysis,[40] general signal processing, speech recognition, acoustics, vibration signals,[41] computer graphics, multifractal analysis, and sparse coding. In computer vision and image processing, the notion of scale space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation. Suppose we measure a noisy signal x = s + v, where s represents the signal and v represents the noise. Assume s has a sparse representation in a certain wavelet basis, and v ~ N(0, σ²I). Let the wavelet transform of x be y = Wᵀx = Wᵀs + Wᵀv = p + z, where p = Wᵀs is the wavelet transform of the signal component and z = Wᵀv is the wavelet transform of the noise component. Most elements in p are 0 or close to 0, and z ~ N(0, σ²I). Since W is orthogonal, the estimation problem amounts to recovery of a signal in i.i.d. Gaussian noise.
As p is sparse, one method is to apply a Gaussian mixture model for p. Assume a prior p ~ a·N(0, σ₁²) + (1 − a)·N(0, σ₂²), where σ₁² is the variance of "significant" coefficients and σ₂² is the variance of "insignificant" coefficients. Then p̃ = E(p | y) = τ(y)·y, where τ(y) is called the shrinkage factor, which depends on the prior variances σ₁² and σ₂². By setting coefficients that fall below a shrinkage threshold to zero, once the inverse transform is applied, an expectedly small amount of signal is lost due to the sparsity assumption. The larger coefficients are expected to primarily represent signal due to sparsity, while statistically very little of the signal, albeit the majority of the noise, is expected to be represented in such lower-magnitude coefficients; therefore the zeroing-out operation is expected to remove most of the noise and not much signal. Typically, the above-threshold coefficients are not modified during this process. Some algorithms for wavelet-based denoising may attenuate larger coefficients as well, based on a statistical estimate of the amount of noise expected to be removed by such an attenuation. Finally, apply the inverse wavelet transform to obtain s̃ = W·p̃. Agarwal et al. proposed wavelet-based advanced linear[42] and nonlinear[43] methods to construct and investigate climate as complex networks at different timescales. Climate networks constructed using SST datasets at different timescales averred that wavelet-based multi-scale analysis of climatic processes holds the promise of better understanding the system dynamics that may be missed when processes are analyzed at one timescale only.[44]
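The transform-threshold-invert recipe above can be sketched end to end with an orthonormal Haar transform and hard thresholding at the universal threshold σ√(2 ln n); the piecewise-constant test signal, noise level σ = 0.5, and random seed are illustrative choices, not from the text:

```python
import numpy as np

def haar_fwd(x):
    """Multilevel orthonormal Haar transform: final approximation + details."""
    a, details = np.asarray(x, dtype=float), []
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details

def haar_inv(a, details):
    """Inverse of haar_fwd (W is orthogonal, so this is exact)."""
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

rng = np.random.default_rng(0)
n, sigma = 64, 0.5
s = np.concatenate([np.zeros(n // 2), 4.0 * np.ones(n // 2)])  # sparse in Haar basis
x = s + sigma * rng.standard_normal(n)                         # noisy measurement

# Hard-threshold the detail coefficients; keep the coarse approximation.
a, details = haar_fwd(x)
thresh = sigma * np.sqrt(2.0 * np.log(n))
details = [np.where(np.abs(d) > thresh, d, 0.0) for d in details]
s_hat = haar_inv(a, details)

mse = lambda u: float(np.mean((u - s) ** 2))
print(mse(x), mse(s_hat))   # denoised error is much smaller than noisy error
```

Because the clean step signal has only a couple of large Haar coefficients, zeroing the small coefficients discards mostly noise, which is the statistical argument made in the paragraph above.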
https://en.wikipedia.org/wiki/Wavelet
A simple polygon that is not convex is called concave,[1] non-convex[2] or reentrant.[3] A concave polygon will always have at least one reflex interior angle, that is, an angle with a measure strictly between 180 degrees and 360 degrees.[4] Some lines containing interior points of a concave polygon intersect its boundary at more than two points.[4] Some diagonals of a concave polygon lie partly or wholly outside the polygon.[4] Some sidelines of a concave polygon fail to divide the plane into two half-planes, one of which entirely contains the polygon. None of these three statements holds for a convex polygon. As with any simple polygon, the sum of the internal angles of a concave polygon is π × (n − 2) radians, equivalently 180 × (n − 2) degrees (°), where n is the number of sides. It is always possible to partition a concave polygon into a set of convex polygons. A polynomial-time algorithm for finding a decomposition into as few convex polygons as possible is described by Chazelle & Dobkin (1985).[5] In Euclidean geometry, a triangle can never be concave, but there exist concave polygons with n sides for any n > 3. An example of a concave quadrilateral is the dart. At least one interior angle does not contain all other vertices in its edges and interior. The convex hull of the concave polygon's vertices, and that of its edges, contains points that are exterior to the polygon.
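Concavity can be tested programmatically: a reflex interior angle shows up as a change in turn direction while walking the boundary, i.e. the cross product of consecutive edge vectors takes both signs. A minimal sketch, with vertex lists chosen purely for illustration (the dart follows the concave-quadrilateral example mentioned above):

```python
def is_concave(vertices):
    """True if the simple polygon (vertices listed in order) has a reflex angle.

    For each vertex, compute the z-component of the cross product of the two
    edges meeting there; a convex polygon turns the same way at every vertex,
    so seeing both signs means at least one interior angle exceeds 180 degrees.
    """
    n = len(vertices)
    signs = set()
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) == 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # convex: all turns agree
dart = [(0, 0), (2, 1), (4, 0), (2, 3)]     # dart: reflex angle at (2, 1)
print(is_concave(square), is_concave(dart))  # False True
```

This sign test assumes a simple (non-self-intersecting) polygon; it does not detect self-intersections.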
https://en.wikipedia.org/wiki/Concave_polygon
In convex analysis, a non-negative function f : Rⁿ → R₊ is logarithmically concave (or log-concave for short) if its domain is a convex set and it satisfies the inequality

f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^(1−θ)

for all x, y ∈ dom f and 0 < θ < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ f, is concave; that is,

log f(θx + (1 − θ)y) ≥ θ log f(x) + (1 − θ) log f(y)

for all x, y ∈ dom f and 0 < θ < 1. Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function. Similarly, a function is log-convex if it satisfies the reverse inequality

f(θx + (1 − θ)y) ≤ f(x)^θ f(y)^(1−θ)

for all x, y ∈ dom f and 0 < θ < 1. Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with log-concave density is a maximum entropy probability distribution with specified mean μ and deviation risk measure D.[2] As it happens, many common probability distributions are log-concave. Some examples:[3] Note that all of the parameter restrictions have the same basic source: the exponent of a non-negative quantity must be non-negative in order for the function to be log-concave. The following distributions are non-log-concave for all parameters: Note that the cumulative distribution function (CDF) of every log-concave distribution is also log-concave. However, some non-log-concave distributions also have log-concave CDFs. The following are among the properties of log-concave distributions:
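The defining inequality can be probed numerically. A small sketch, assuming two illustrative test functions: the Gaussian exp(−x²), which is log-concave, and f(x) = 1/(1 + x²) (the shape of the Cauchy density), which is not; the grid and sample points are arbitrary choices:

```python
import numpy as np

def violates_log_concavity(f, x, y, theta, tol=1e-12):
    """True if f(theta*x + (1-theta)*y) < f(x)^theta * f(y)^(1-theta)."""
    mid = theta * x + (1.0 - theta) * y
    return f(mid) < f(x) ** theta * f(y) ** (1.0 - theta) - tol

gaussian = lambda x: np.exp(-x ** 2)
cauchy_shape = lambda x: 1.0 / (1.0 + x ** 2)

grid = np.linspace(-5.0, 5.0, 21)
thetas = (0.25, 0.5, 0.75)

# The Gaussian passes everywhere on the grid ...
gauss_ok = not any(violates_log_concavity(gaussian, x, y, t)
                   for x in grid for y in grid for t in thetas)
print(gauss_ok)   # True: no violation found

# ... while 1/(1+x^2) fails, e.g. between x = 2 and y = 10:
print(violates_log_concavity(cauchy_shape, 2.0, 10.0, 0.5))   # True
```

A grid check like this can only refute log-concavity, never prove it; for the Gaussian the inequality also follows analytically, since log f(x) = −x² is concave.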
https://en.wikipedia.org/wiki/Logarithmically_concave_function
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave. Quasiconvexity is a more general property than convexity in that all convex functions are also quasiconvex, but not all quasiconvex functions are convex. Univariate unimodal functions are quasiconvex or quasiconcave; however, this is not necessarily the case for functions with multiple arguments. For example, the 2-dimensional Rosenbrock function is unimodal but not quasiconvex, and functions with star-convex sublevel sets can be unimodal without being quasiconvex. A function f : S → R defined on a convex subset S of a real vector space is quasiconvex if for all x, y ∈ S and λ ∈ [0, 1] we have

f(λx + (1 − λ)y) ≤ max{f(x), f(y)}.

In words, if f is such that it is always true that a point directly between two other points does not give a higher value of the function than both of the other points do, then f is quasiconvex. Note that the points x and y, and the point directly between them, can be points on a line or, more generally, points in n-dimensional space. An alternative way (see introduction) of defining a quasiconvex function f(x) is to require that each sublevel set S_α(f) = {x | f(x) ≤ α} is a convex set. If furthermore

f(λx + (1 − λ)y) < max{f(x), f(y)}

for all x ≠ y and λ ∈ (0, 1), then f is strictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does.
A quasiconcave function is a function whose negative is quasiconvex, and a strictly quasiconcave function is a function whose negative is strictly quasiconvex. Equivalently, a function f is quasiconcave if

f(λx + (1 − λ)y) ≥ min{f(x), f(y)},

and strictly quasiconcave if

f(λx + (1 − λ)y) > min{f(x), f(y)}.

A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets. A function that is both quasiconvex and quasiconcave is quasilinear. A particular case of quasi-concavity, if S ⊂ R, is unimodality, in which there is a locally maximal value. Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, and in game theory and economics. In nonlinear optimization, quasiconvex programming studies iterative methods that converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization of convex programming.[1] Quasiconvex programming is used in the solution of "surrogate" dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangian dual problems.[2] In theory, quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated);[3] however, such theoretically "efficient" methods use "divergent-series" step size rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods. In microeconomics, quasiconcave utility functions imply that consumers have convex preferences.
Quasiconvex functions are important also in game theory, industrial organization, and general equilibrium theory, particularly for applications of Sion's minimax theorem. Generalizing a minimax theorem of John von Neumann, Sion's theorem is also used in the theory of partial differential equations.
https://en.wikipedia.org/wiki/Quasiconcave_function
In mathematics, concavification is the process of converting a non-concave function to a concave function. A related concept is convexification – converting a non-convex function to a convex function. It is especially important in economics and mathematical optimization.[1] An important special case of concavification is where the original function is a quasiconcave function. It is known that every concave function is quasiconcave, but the converse is not true, and that every monotonically increasing transformation of a quasiconcave function is again quasiconcave. Therefore, a natural question is: given a quasiconcave function f : Rⁿ → R, does there exist a monotonically increasing g : R → R such that x ↦ g(f(x)) is concave? As an example, consider the function f(x) = x² on the domain x ≥ 0. This function is quasiconcave, but it is not concave (in fact, it is strictly convex). It can be concavified, for example, using the monotone transformation g(t) = t^(1/4), since g(f(x)) = √x is concave. Not every quasiconcave function can be concavified in this way. A counterexample was shown by Fenchel.[2] His example is f(x, y) = y + √(x + y²). Fenchel proved that this function is quasiconcave, but there is no monotone transformation g : R → R such that (x, y) ↦ g(f(x, y)) is concave.[3]: 7–9  Based on these examples, we define a function to be concavifiable if there exists a monotone transformation that makes it concave. The question now becomes: what quasiconcave functions are concavifiable?
Yakar Kannai treats the question in depth in the context of utility functions, giving sufficient conditions under which continuous convex preferences can be represented by concave utility functions.[4] His results were later generalized by Connell and Rasmussen,[3] who give necessary and sufficient conditions for concavifiability. They show that the function f(x, y) = e^(e^x) · y violates their conditions and thus is not concavifiable. They prove that this function is strictly quasiconcave and its gradient is non-vanishing, but it is not concavifiable.
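The x² example above can be verified numerically with a midpoint-concavity test (sufficient for continuous functions); a minimal sketch, where the grid on [0, 4] is an arbitrary illustrative choice:

```python
import numpy as np

def midpoint_concave_on(h, xs, tol=1e-12):
    """Check h((x+y)/2) >= (h(x)+h(y))/2 for all pairs drawn from the grid."""
    return all(h(0.5 * (x + y)) >= 0.5 * (h(x) + h(y)) - tol
               for x in xs for y in xs)

f = lambda x: x ** 2       # quasiconcave on x >= 0, but strictly convex
g = lambda t: t ** 0.25    # the monotone transformation from the example
gf = lambda x: g(f(x))     # equals sqrt(x) on x >= 0

xs = np.linspace(0.0, 4.0, 17)
print(midpoint_concave_on(f, xs))    # False: x^2 itself is not concave
print(midpoint_concave_on(gf, xs))   # True: the concavified version passes
```

The first check fails already at the pair (0, 4), since f(2) = 4 < (f(0) + f(4))/2 = 8, while √x satisfies the midpoint inequality everywhere on the grid.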
https://en.wikipedia.org/wiki/Concavification
Artificial intelligence art usually means visual artwork generated (or enhanced) through the use of artificial intelligence (AI) programs. Artists began to create AI art in the mid to late 20th century, when the discipline was founded. Throughout its history, AI has raised many philosophical concerns related to the human mind, artificial beings, and also what can be considered art in human–AI collaboration. Since the 20th century, people have used AI to create art, some of which has been exhibited in museums and won awards.[1] During the AI boom of the 2020s, text-to-image models such as Midjourney, DALL-E, Stable Diffusion, and FLUX.1 became widely available to the public, allowing users to quickly generate imagery with little effort.[2][3] Commentary about AI art in the 2020s has often focused on issues related to copyright, deception, defamation, and its impact on more traditional artists, including technological unemployment. Automated art dates back at least to the automata of ancient Greek civilization, when inventors such as Daedalus and Hero of Alexandria were described as designing machines capable of writing text, generating sounds, and playing music.[4][5] Creative automatons have flourished throughout history, such as Maillardet's automaton, created around 1800 and capable of creating multiple drawings and poems.[6] Also in the 19th century, Ada Lovelace wrote that "computing operations" could be used to generate music and poems, an idea now referred to as "The Lovelace Effect," where a computer's behavior is viewed as creative.[7] Lovelace is also associated with "The Lovelace Objection," in which she argues that a machine has "no pretensions whatever to originate anything."[8] In 1950, with the publication of Alan Turing's paper "Computing Machinery and Intelligence", there was a shift from defining machine intelligence in abstract terms to evaluating whether a machine can mimic human behavior and responses convincingly.[9] Shortly after, the academic discipline of artificial intelligence
was founded at a research workshop at Dartmouth College in 1956.[10] Since its founding, researchers in the field have explored philosophical questions about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues have previously been explored by myth, fiction, and philosophy since antiquity.[11] Since the founding of AI in the 1950s, artists have used artificial intelligence to create artistic works. These works were sometimes referred to as algorithmic art,[12] computer art, digital art, or new media art.[13] One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California at San Diego.[14] AARON uses a symbolic rule-based approach to generate technical images in the era of GOFAI programming, and it was developed by Cohen with the goal of being able to code the act of drawing.[15] AARON was exhibited in 1972 at the Los Angeles County Museum of Art.[16] From 1973 to 1975, Cohen refined AARON during a residency at the Artificial Intelligence Laboratory at Stanford University.[17] In 2024, the Whitney Museum of American Art exhibited AI art from throughout Cohen's career, including re-created versions of his early robotic drawing machines.[17] Karl Sims has exhibited art created with artificial life since the 1980s. He received an M.S.
in computer graphics from the MIT Media Lab in 1987 and was artist-in-residence from 1990 to 1996 at the supercomputer manufacturer and artificial intelligence company Thinking Machines.[18][19][20] In both 1991 and 1992, Sims won the Golden Nica award at Prix Ars Electronica for his videos using artificial evolution.[21][22][23] In 1997, Sims created the interactive artificial evolution installation Galápagos for the NTT InterCommunication Center in Tokyo.[24] Sims received an Emmy Award in 2019 for outstanding achievement in engineering development.[25] In 1999, Scott Draves and a team of several engineers created and released Electric Sheep as a free software screensaver.[26] Electric Sheep is a volunteer computing project for animating and evolving fractal flames, which are distributed to networked computers that display them as a screensaver. The screensaver used AI to create an infinite animation by learning from its audience. In 2001, Draves won the Fundacion Telefónica Life 4.0 prize for Electric Sheep.[27][unreliable source?] In 2014, Stephanie Dinkins began working on Conversations with Bina48.[28] For the series, Dinkins recorded her conversations with BINA48, a social robot that resembles a middle-aged black woman.[29][30] In 2019, Dinkins won the Creative Capital award for her creation of an evolving artificial intelligence based on the "interests and culture(s) of people of color."[31] In 2015, Sougwen Chung began Mimicry (Drawing Operations Unit: Generation 1), an ongoing collaboration between the artist and a robotic arm.[32] In 2019, Chung won the Lumen Prize for her continued performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung.[33] In 2018, an auction sale of artificial intelligence art was held at Christie's in New York, where the AI artwork Edmond de Belamy sold for US$432,500, almost 45 times higher than its estimate of US$7,000–10,000.
The artwork was created by Obvious, a Paris-based collective.[34][35][36] In 2024, the Japanese film generAIdoscope was released. The film was co-directed by Hirotaka Adachi, Takeshi Sone, and Hiroki Yamaguchi. All video, audio, and music in the film were created with artificial intelligence.[37] In 2025, the Japanese anime television series Twins Hinahima was released. The anime was produced and animated with AI assistance during the process of cutting and conversion of photographs into anime illustrations, which were later retouched by art staff. Most of the remaining parts, such as characters and logos, were hand-drawn with various software.[38][39] Deep learning, characterized by its multi-layer structure that attempts to mimic the human brain, first came about in the 2010s and caused a significant shift in the world of AI art.[40] In the deep learning era, the main families of designs for generative art are autoregressive models, diffusion models, GANs, and normalizing flows. In 2014, Ian Goodfellow and colleagues at Université de Montréal developed the generative adversarial network (GAN), a type of deep neural network capable of learning to mimic the statistical distribution of input data such as images.
The GAN uses a "generator" to create new images and a "discriminator" to decide which created images are considered successful.[41] Unlike previous algorithmic art that followed hand-coded rules, generative adversarial networks could learn a specific aesthetic by analyzing a dataset of example images.[12] In 2015, a team at Google released DeepDream, a program that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia.[42][43][44] The process creates deliberately over-processed images with a dream-like appearance reminiscent of a psychedelic experience.[45] Later, in 2017, a conditional GAN learned to generate the 1000 image classes of ImageNet, a large visual database designed for use in visual object recognition software research.[46][47] By conditioning the GAN on both random noise and a specific class label, this approach enhanced the quality of image synthesis for class-conditional models.[48] Autoregressive models were used for image generation, such as PixelRNN (2016), which autoregressively generates one pixel after another with a recurrent neural network.[49] Immediately after the Transformer architecture was proposed in Attention Is All You Need (2017), it was used for autoregressive generation of images, but without text conditioning.[50] The website Artbreeder, launched in 2018, uses the models StyleGAN and BigGAN[51][52] to allow users to generate and modify images such as faces, landscapes, and paintings.[53] In the 2020s, text-to-image models, which generate images based on prompts, became widely used, marking yet another shift in the creation of AI-generated artworks.[2] In 2021, using the influential large language generative pre-trained transformer models that are used in GPT-2 and GPT-3, OpenAI released a series of images created with the text-to-image AI model DALL-E 1.[54] It was an autoregressive generative model with essentially the same architecture as GPT-3.
Along with this, later in 2021, EleutherAI released the open-source VQGAN-CLIP,[55] based on OpenAI's CLIP model.[56] Diffusion models, generative models used to create synthetic data based on existing data,[57] were first proposed in 2015,[58] but they only became better than GANs in early 2021.[59] The latent diffusion model was published in December 2021 and became the basis for the later Stable Diffusion (August 2022).[60] In 2022, Midjourney[61] was released, followed by Google Brain's Imagen and Parti, which were announced in May 2022, Microsoft's NUWA-Infinity,[62][2] and the source-available Stable Diffusion, which was released in August 2022.[63][64][65] DALL-E 2, a successor to DALL-E, was beta-tested and released (with the further successor DALL-E 3 being released in 2023). Stability AI has a Stable Diffusion web interface called DreamStudio,[66] plugins for Krita, Photoshop, Blender, and GIMP,[67] and the Automatic1111 web-based open-source user interface.[68][69][70] Stable Diffusion's main pre-trained model is shared on the Hugging Face Hub.[71] Ideogram was released in August 2023; this model is known for its ability to generate legible text.[72][73] In 2024, Flux was released.
This model can generate realistic images and was integrated into Grok, the chatbot used on X (formerly Twitter), and Le Chat, the chatbot of Mistral AI.[3][74][75][76] Flux was developed by Black Forest Labs, founded by the researchers behind Stable Diffusion.[77] Grok later switched to its own text-to-image model, Aurora, in December of the same year.[78] Several companies, along with their products, have also developed an AI model integrated with an image editing service. Adobe has released and integrated the AI model Firefly into Premiere Pro, Photoshop, and Illustrator.[79][80] Microsoft has also publicly announced AI image-generator features for Microsoft Paint.[81] Along with this, some examples of text-to-video models of the mid-2020s are Runway's Gen-2, Google's VideoPoet, and OpenAI's Sora, which was released in December 2024.[82][83] There are many tools available to the artist when working with diffusion models. They can define both positive and negative prompts, but they are also afforded a choice in using (or omitting the use of) VAEs, LoRAs, hypernetworks, IP-Adapter, and embeddings/textual inversions. Artists can tweak settings like the guidance scale (which balances creativity and accuracy), the seed (to control randomness), and upscalers (to enhance image resolution), among others. Additional influence can be exerted during pre-inference by means of noise manipulation, while traditional post-processing techniques are frequently used post-inference. People can also train their own models. In addition, procedural "rule-based" generation of images using mathematical patterns, algorithms that simulate brush strokes and other painted effects, and deep learning algorithms such as generative adversarial networks (GANs) and transformers have been developed. Several companies have released apps and websites that allow one to forego all the options mentioned entirely while solely focusing on the positive prompt.
There also exist programs which transform photos into art-like images in the style of well-known sets of paintings.[84][85] There are many options, ranging from simple consumer-facing mobile apps to Jupyter notebooks and web UIs that require powerful GPUs to run effectively.[86] Additional functionalities include "textual inversion," which refers to enabling the use of user-provided concepts (like an object or a style) learned from a few images, so that novel art can then be generated from the associated word(s) (the text that has been assigned to the learned, often abstract, concept),[87][88] and model extensions or fine-tuning (such as DreamBooth). AI has the potential for a societal transformation, which may include enabling the expansion of noncommercial niche genres (such as cyberpunk derivatives like solarpunk) by amateurs, novel entertainment, fast prototyping,[89] increasing art-making accessibility,[89] and artistic output per effort, expense, or time[89] – e.g., via generating drafts, draft definitions, and image components (inpainting). Generated images are sometimes used as sketches,[90] low-cost experiments,[91] inspiration, or illustrations of proof-of-concept-stage ideas. Additional functionalities or improvements may also relate to post-generation manual editing (i.e., polishing), such as subsequent tweaking with an image editor.[91] Prompts for some text-to-image models can also include images, keywords, and configurable parameters, such as artistic style, which is often invoked via key phrases like "in the style of [name of an artist]" in the prompt[92] or by selection of a broad aesthetic/art style.[93][90] There are platforms for sharing, trading, searching, forking/refining, or collaborating on prompts for generating specific imagery from image generators.[94][95][96][97] Prompts are often shared along with images on image-sharing websites such as Reddit and AI art-dedicated websites.
A prompt is not the complete input needed for the generation of an image; additional inputs that determine the generated image include the output resolution, random seed, and random sampling parameters.[98] Synthetic media, which includes AI art, was described in 2022 as a major technology-driven trend that will affect business in the coming years.[89] Harvard Kennedy School researchers voiced concerns about synthetic media serving as a vector for political misinformation soon after studying the proliferation of AI art on the X platform.[99] Synthography is a proposed term for the practice of generating images that are similar to photographs using AI.[100] A major concern raised about AI-generated images and art is sampling bias within model training data leading to discriminatory output from AI art models. In 2023, University of Washington researchers found evidence of racial bias within the Stable Diffusion model, with images of a "person" corresponding most frequently with images of males from Europe or North America.[101] Looking further into the sampling bias found within AI training data: in 2017, researchers at Princeton University used AI software to link over 2 million words, finding that European names were viewed as more "pleasant" than African-American names, and that the words "woman" and "girl" were more likely to be associated with the arts instead of science and math, "which were most likely connected to males."[102] Generative AI models typically work based on user-entered word-based prompts, especially in the case of diffusion models, and this word-related bias may lead to biased results. Along with this, generative AI can perpetuate harmful stereotypes regarding women.
For example, Lensa, an AI app that trended on TikTok in 2023, was known to lighten black skin, make users thinner, and generate hypersexualized images of women.[103] Melissa Heikkilä, a senior reporter at MIT Technology Review, shared the findings of an experiment using Lensa, noting that the generated avatars did not resemble her and often depicted her in a hypersexualized manner.[104] Experts suggest that such outcomes can result from biases in the datasets used to train AI models, which can sometimes contain imbalanced representations, including hypersexual or nude imagery.[105][106]

In 2024, Google's chatbot Gemini's AI image generator was criticized for perceived racial bias, with claims that Gemini deliberately underrepresented white people in its results.[107] Users reported that it generated images of white historical figures like the Founding Fathers, Nazi soldiers, and Vikings as other races, and that it refused to process prompts such as "happy white people" and "ideal nuclear family".[107][108] Google later apologized for "missing the mark" and took Gemini's image generator offline for updates.[109] This prompted discussions about the ethical implications[110] of representing historical figures through a contemporary lens, leading critics to argue that these outputs could mislead audiences regarding actual historical contexts.[111]

In addition to well-documented representational issues such as racial and gender bias, some scholars have also pointed out deeper conceptual assumptions that shape how we perceive AI-generated art. For instance, framing AI strictly as a passive tool overlooks how cultural and technological factors influence its outputs. Others suggest viewing AI as part of a collaborative creative process, where both human and machine contribute to the artistic result.[112]

Legal scholars, artists, and media corporations have considered the legal and ethical implications of artificial intelligence art since the 20th century.
Some artists use AI art to critique and explore the ethics of using gathered data to produce new artwork.[113]

In 1985, intellectual property law professor Pamela Samuelson argued that US copyright should allocate algorithmically generated artworks to the user of the computer program.[114] A 2019 Florida Law Review article presented three perspectives on the issue. In the first, artificial intelligence itself would become the copyright owner; to do this, Section 101 of the US Copyright Act would need to be amended to define "author" as a computer. In the second, following Samuelson's argument, the user, programmer, or artificial intelligence company would be the copyright owner. This would be an expansion of the "work for hire" doctrine, under which ownership of a copyright is transferred to the "employer". In the third situation, copyright assignment would never take place, and such works would be in the public domain, as copyright assignment requires an act of authorship.[115]

In 2022, coinciding with the rising availability of consumer-grade AI image generation services, popular discussion renewed over the legality and ethics of AI-generated art. A particular topic is the inclusion of copyrighted artwork and images in AI training datasets, with artists objecting to commercial AI products using their works without consent, credit, or financial compensation.[116] In September 2022, Reema Selhi, of the Design and Artists Copyright Society, stated that "there are no safeguards for artists to be able to identify works in databases that are being used and opt out."[117] Some have claimed that images generated with these models can bear resemblance to extant artwork, sometimes including the remains of the original artist's signature.[117][118] In December 2022, users of the portfolio platform ArtStation staged an online protest against non-consensual use of their artwork within datasets; this resulted in opt-out services, such as "Have I Been Trained?",
increasing in profile, as well as some online art platforms promising to offer their own opt-out options.[119] According to the US Copyright Office, artificial intelligence programs are unable to hold copyright,[120][121][122] a decision upheld at the Federal District level as of August 2023 that followed the reasoning from the monkey selfie copyright dispute.[123]

OpenAI, the developer of DALL-E, has its own policy on who owns generated art. It assigns the right and title of a generated image to the creator, meaning the user who entered the prompt owns the image generated, along with the right to sell, reprint, and merchandise it.[124]

In January 2023, three artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz) filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies are legally required to obtain the consent of artists before training neural nets on their work and that they infringed on the rights of millions of artists by doing so on five billion images scraped from the web.[125] In July 2023, U.S. District Judge William Orrick was inclined to dismiss most of the lawsuit filed by Andersen, McKernan, and Ortiz, but allowed them to file a new complaint.[126] Also in 2023, Stability AI was sued by Getty Images for using its images in the training data.[127] A tool built by Simon Willison allowed people to search 0.5% of the training data for Stable Diffusion V1.1, i.e., 12 million of the 2.3 billion instances from LAION 2B.
Artist Karen Hallion discovered that her copyrighted images were used as training data without her consent.[128]

In March 2024, Tennessee enacted the ELVIS Act, which prohibits the use of AI to mimic a musician's voice without permission.[129] A month later, Adam Schiff introduced the Generative AI Copyright Disclosure Act which, if passed, would require AI companies to submit copyrighted works in their datasets to the Register of Copyrights before releasing new generative AI systems.[130] In November 2024, a group of artists and activists shared early access to OpenAI's unreleased video generation model, Sora, via Hugging Face. The action, accompanied by a statement, criticized the exploitative use of artists' work by major corporations.[131][132][133]

As with other types of photo manipulation since the early 19th century, some people in the early 21st century have been concerned that AI could be used to create content that is misleading and can damage a person's reputation, such as deepfakes.[134] Artist Sarah Andersen, who previously had her art copied and edited to depict Neo-Nazi beliefs, stated that the spread of hate speech online can be worsened by the use of image generators.[128] Some also generate images or videos for the purpose of catfishing.

AI systems have the ability to create deepfake content, which is often viewed as harmful and offensive. The creation of deepfakes poses a risk to individuals who have not consented to it.[135] This mainly refers to deepfake pornography used as revenge porn, in which sexually explicit material is disseminated to humiliate or harm another person. AI-generated child pornography has been deemed a potential danger to society due to its unlawful nature.[136]

After winning the "Creative" open competition category at the 2023 Sony World Photography Awards, Boris Eldagsen stated that his entry was actually created with artificial intelligence.
Photographer Feroz Khan commented to the BBC that Eldagsen had "clearly shown that even experienced photographers and art experts can be fooled".[138] Smaller contests have been affected as well; in 2023, a contest run by author Mark Lawrence as Self-Published Fantasy Blog-Off was cancelled after the winning entry was allegedly exposed to be a collage of images generated with Midjourney.[139]

In May 2023, on social media sites such as Reddit and Twitter, attention was given to a Midjourney-generated image of Pope Francis wearing a white puffer coat.[140][141] Additionally, an AI-generated image of an attack on the Pentagon went viral as part of a hoax news story on Twitter.[142][143]

In the days before the March 2023 indictment of Donald Trump as part of the Stormy Daniels-Donald Trump scandal, several AI-generated images allegedly depicting Trump's arrest went viral online.[144][145] On March 20, British journalist Eliot Higgins generated various images of Donald Trump being arrested or imprisoned using Midjourney v5 and posted them on Twitter; two images of Trump struggling against arresting officers went viral under the mistaken impression that they were genuine, accruing more than 5 million views in three days.[146][147] According to Higgins, the images were not meant to mislead, but he was banned from using Midjourney services as a result. As of April 2024, the tweet had garnered more than 6.8 million views.

In February 2024, the paper Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway was published with AI-generated images. It was later retracted from Frontiers in Cell and Developmental Biology because the paper "does not meet the standards".[148]

To mitigate some deceptions, OpenAI developed a tool in 2024 to detect images that were generated by DALL-E 3.[149] In testing, this tool accurately identified DALL-E 3-generated images approximately 98% of the time.
The tool is also fairly capable of recognizing images that have been visually modified by users after generation.[150]

As generative AI image software such as Stable Diffusion and DALL-E continues to advance, concerns have grown about the problems these systems pose for creativity and artistry.[128] In 2022, artists working in various media raised concerns about the impact that generative artificial intelligence could have on their ability to earn money, particularly if AI-based images started replacing artists working in the illustration and design industries.[151][152] In August 2022, digital artist R. J. Palmer stated that "I could easily envision a scenario where using AI, a single artist or art director could take the place of 5-10 entry level artists... I have seen a lot of self-published authors and such say how great it will be that they don't have to hire an artist."[118] Scholars Jiang et al. state that "Leaders of companies like Open AI and Stability AI have openly stated that they expect generative AI systems to replace creatives imminently."[128] A 2022 case study found that AI-produced images created by technology like DALL-E caused some traditional artists to be concerned about losing work, while others use it to their advantage and view it as a tool.[135]

AI-based images have become more commonplace in art markets and search engines because AI-based text-to-image systems are trained on pre-existing artistic images, sometimes without the original artist's consent, allowing the software to mimic specific artists' styles.[128][153] For example, Polish digital artist Greg Rutkowski has stated that it is more difficult to search for his work online because many of the images in the results are AI-generated specifically to mimic his style.[64] Furthermore, some training databases on which AI systems are based are not accessible to the public.
The ability of AI-based art software to mimic or forge artistic style also raises concerns of malice or greed.[128][154][155] Works of AI-generated art, such as Théâtre D'opéra Spatial, a text-to-image AI illustration that won the grand prize in the August 2022 digital art competition at the Colorado State Fair, have begun to overwhelm art contests and other submission forums meant for small artists.[128][154][155] The Netflix short film The Dog & the Boy, released in January 2023, received backlash online for its use of artificial intelligence art to create the film's background artwork.[156] In the same vein, Disney released Secret Invasion, a Marvel TV show with an AI-generated intro, on Disney+ in 2023, causing concern and backlash regarding the idea that artists could be made obsolete by machine-learning tools.[157]

AI art has sometimes been deemed able to replace traditional stock images.[158] In 2023, Shutterstock announced a beta test of an AI tool that can regenerate partial content of other Shutterstock images. Getty Images and Nvidia partnered to launch Generative AI by iStock, a model trained on Getty's library and iStock's photo library using Nvidia's Picasso model.[159]

Researchers from Hugging Face and Carnegie Mellon University reported in a 2023 paper that generating one thousand 1024×1024 images using Stable Diffusion's XL 1.0 base model requires 11.49 kWh of energy and generates 1,594 grams (56.2 oz) of carbon dioxide, which is roughly equivalent to driving an average gas-powered car a distance of 4.1 miles (6.6 km). Comparing 88 different models, the paper concluded that image-generation models used on average around 2.9 kWh of energy per 1,000 inferences.[160]

In addition to the creation of original art, research methods that use AI have been developed to quantitatively analyze digital art collections. This has been made possible by the large-scale digitization of artwork in the past few decades.
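The per-image cost implied by the Hugging Face and Carnegie Mellon figures is simple arithmetic; the following sketch just restates the paper's reported totals on a per-image basis:

```python
# Back-of-the-envelope restatement of the reported Stable Diffusion XL
# 1.0 figures: 11.49 kWh and 1,594 g of CO2 per 1,000 generated images.
images = 1000
energy_kwh = 11.49
co2_grams = 1594.0

per_image_wh = energy_kwh * 1000 / images   # watt-hours per image
per_image_co2 = co2_grams / images          # grams of CO2 per image

print(f"{per_image_wh:.2f} Wh and {per_image_co2:.2f} g CO2 per image")
# Roughly 11.5 Wh and 1.6 g of CO2 for each generated image.
```

For comparison, the cross-model average of 2.9 kWh per 1,000 inferences works out to about 2.9 Wh per image, suggesting the XL 1.0 base model sits well above the average of the 88 models surveyed.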
According to Cetinic and She (2022), using artificial intelligence to analyze already-existing art collections can provide new perspectives on the development of artistic styles and the identification of artistic influences.[161][162]

Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art.[163] Close reading focuses on specific visual aspects of one piece. Some tasks performed by machines in close reading methods include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, through distant viewing methods, the similarity across an entire collection for a specific feature can be statistically visualized. Common tasks relating to this method include automatic classification, object detection, multimodal tasks, knowledge discovery in art history, and computational aesthetics.[162] Synthetic images can also be used to train AI algorithms for art authentication and to detect forgeries.[164]

Researchers have also introduced models that predict emotional responses to art. One such model is ArtEmis, a large-scale dataset paired with machine learning models. ArtEmis includes emotional annotations from over 6,500 participants along with textual explanations. By analyzing both visual inputs and the accompanying text descriptions from this dataset, ArtEmis enables the generation of nuanced emotional predictions.[165][166]

AI has also been used in arts outside of the visual arts.
Generative AI has been used in video game production beyond imagery, especially for level design (e.g., for custom maps) and for creating new content (e.g., quests or dialogue) or interactive stories in video games.[167][168] AI has also been used in the literary arts,[169] such as helping with writer's block, inspiration, or rewriting segments.[170][171][172][173] In the culinary arts, some prototype cooking robots can dynamically taste, which can assist chefs in analyzing the content and flavor of dishes during the cooking process.[174]
https://en.wikipedia.org/wiki/Artificial_intelligence_art
Deepfakes (a portmanteau of 'deep learning' and 'fake'[1]) are images, videos, or audio that have been edited or generated using artificial intelligence, AI-based tools, or AV editing software. They may depict real or fictional people and are considered a form of synthetic media, that is, media usually created by artificial intelligence systems by combining various media elements into a new media artifact.[2][3]

While the act of creating fake content is not new, deepfakes uniquely leverage machine learning and artificial intelligence techniques,[4][5][6] including facial recognition algorithms and artificial neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs).[5][7] In turn, the field of image forensics develops techniques to detect manipulated images.[8] Deepfakes have garnered widespread attention for their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud.[9][10][11][12]

Academics have raised concerns about the potential for deepfakes to promote disinformation and hate speech, as well as interfere with elections. In response, the information technology industry and governments have proposed recommendations and methods to detect and mitigate their use. Academic research has also delved deeper into the factors driving deepfake engagement online as well as potential countermeasures to malicious applications of deepfakes. From traditional entertainment to gaming, deepfake technology has evolved to be increasingly convincing[13] and available to the public, allowing for the disruption of the entertainment and media industries.[14]

Photo manipulation was developed in the 19th century and soon applied to motion pictures. Technology steadily improved during the 20th century, and more quickly with the advent of digital video.
Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities.[15][16] More recently, the methods have been adopted by industry.[17]

Academic research related to deepfakes is split between the field of computer vision, a sub-field of computer science[15] which develops techniques for creating and identifying deepfakes, and humanities and social science approaches that study the social, ethical, and aesthetic implications as well as the journalistic and informational implications of deepfakes.[18] As deepfakes have risen in prominence and popularity with innovations provided by AI tools, significant research has gone into detection methods and into defining the factors driving engagement with deepfakes on the internet.[19][20] Deepfakes have been shown to appear on social media platforms and other parts of the internet for purposes ranging from entertainment and education about deepfakes to misinformation intended to elicit strong reactions.[21] There are gaps in research related to the propagation of deepfakes on social media. Negativity and emotional response are the primary factors driving users to share deepfakes.[22]

Age and lack of literacy regarding deepfakes are other factors that drive engagement. Older users who may be technologically illiterate might not recognize deepfakes as falsified content and may share this content because they believe it to be true.
Alternatively, younger users accustomed to the entertainment value of deepfakes are more likely to share them with an awareness of their falsified content.[23] Despite cognitive ability being a factor in successfully detecting deepfakes, individuals who are aware of a deepfake may be just as likely to share it on social media as those who do not know it is a deepfake.[24] Within scholarship focused on detecting deepfakes, deep-learning methods using techniques to identify software-induced artifacts have been found to be the most effective in separating a deepfake from an authentic product.[25]

Due to the capabilities of deepfakes, concerns have developed related to regulation of and literacy toward the technology.[26] The potential malicious applications of deepfakes and their capability to impact public figures and reputations or to promote misleading narratives are the primary drivers of these concerns.[27] The potential for malicious application has led some experts to label deepfakes a danger to democratic societies that would benefit from a regulatory framework to mitigate the risks.[28]

In cinema studies, deepfakes illustrate how "the human face is emerging as a central object of ambivalence in the digital age".[29] Video artists have used deepfakes to "playfully rewrite film history by retrofitting canonical cinema with new star performers".[30] Film scholar Christopher Holliday analyses how altering the gender and race of performers in familiar movie scenes destabilizes gender classifications and categories.[30] The concept of "queering" deepfakes is also discussed in Oliver M. Gingrich's discussion of media artworks that use deepfakes to reframe gender,[31] including British artist Jake Elwes' Zizi: Queering the Dataset, an artwork that uses deepfakes of drag queens to intentionally play with gender. The aesthetic potentials of deepfakes are also beginning to be explored.
Theatre historian John Fletcher notes that early demonstrations of deepfakes were presented as performances, and situates these in the context of theater, discussing "some of the more troubling paradigm shifts" that deepfakes represent as a performance genre.[32]

Philosophers and media scholars have discussed the ethical implications of deepfakes in the dissemination of disinformation. Amina Vatreš of the Department of Communication Studies at the University of Sarajevo identifies three factors that contribute to the widespread acceptance of deepfakes and explain where their greatest danger lies: (1) convincing visualization and auditory support, (2) widespread accessibility, and (3) the inability to draw a clear line between truth and falsehood.[33] Another area of discussion concerns pornography made with deepfakes.[34] Media scholar Emily van der Nagel draws upon research in photography studies on manipulated images to discuss verification systems that allow women to consent to uses of their images.[35]

Beyond pornography, deepfakes have been framed by philosophers as an "epistemic threat" to knowledge and thus to society.[36] There are several other suggestions for how to deal with the risks that deepfakes give rise to, not only in pornography but also for corporations, politicians, and others, namely risks of "exploitation, intimidation, and personal sabotage",[37] and there are several scholarly discussions of potential legal and regulatory responses in both legal studies and media studies.[38] In psychology and media studies, scholars discuss the effects of disinformation that uses deepfakes[39][40] and the social impact of deepfakes.[41]

While most English-language academic studies of deepfakes focus on Western anxieties about disinformation and pornography, digital anthropologist Gabriele de Seta has analyzed the Chinese reception of deepfakes, which are known as huanlian, which translates to "changing faces".
The Chinese term does not contain the "fake" of the English deepfake, and de Seta argues that this cultural context may explain why the Chinese response has centered on practical regulatory measures addressing "fraud risks, image rights, economic profit, and ethical imbalances".[42]

A landmark early project was the "Video Rewrite" program, published in 1997. The program modified existing video footage of a person speaking to depict that person mouthing the words from a different audio track.[43] It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.[43]

Contemporary academic projects have focused on creating more realistic videos and on improving deepfake techniques.[44][45] The "Synthesizing Obama" program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track.[44] The project lists as its main research contribution a photorealistic technique for synthesizing mouth shapes from audio.[44] The "Face2Face" program, published in 2016, modifies video footage of a person's face to depict them mimicking another person's facial expressions.[45] The project highlights its primary research contribution as the first method for re-enacting facial expressions in real time using a camera that does not capture depth, enabling the technique to work with common consumer cameras.
In August 2018, researchers at the University of California, Berkeley published a paper introducing a deepfake dancing app that can create the impression of masterful dancing ability using AI.[46] This project expands the application of deepfakes to the entire body; previous works had focused on the head or parts of the face.[47]

Researchers have also shown that deepfakes are expanding into other domains such as medical imagery.[48] In this work, it was shown how an attacker can automatically inject or remove lung cancer in a patient's 3D CT scan. The result was so convincing that it fooled three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white hat penetration test.[49]

A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes had advanced over the preceding years.[50] The survey identifies several challenges of deepfake creation that researchers have been focusing on resolving. Overall, deepfakes are expected to have several implications in media and society, media production, media representations, media audiences, gender, law and regulation, and politics.[51]

The term deepfake originated in late 2017 from a Reddit user named "deepfakes".[52] He, along with other members of Reddit's "r/deepfakes" community, shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actors in pornographic videos,[52] while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies.[53]

Other online communities remain, including Reddit communities that do not share pornography, such as "r/SFWdeepfakes" (short for "safe for work deepfakes"), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios.[54] Other online communities continue to share pornography on platforms that have not banned deepfake
pornography.[55]

In January 2018, a proprietary desktop application called "FakeApp" was launched.[56] This app allows users to easily create and share videos with their faces swapped with each other.[57] As of 2019, "FakeApp" had been largely replaced by open-source alternatives such as "Faceswap", the command-line-based "DeepFaceLab", and web-based apps such as DeepfakesWeb.com.[58][59][60]

Larger companies also started to use deepfakes.[17] Corporate training videos can be created using deepfaked avatars and their voices, for example by Synthesia, which uses deepfake technology with avatars to create personalized videos.[61] The mobile app maker Momo created the application Zao, which allows users to superimpose their face on television and movie clips with a single picture.[17] As of 2019, the Japanese AI company DataGrid had made a full-body deepfake that could create a person from scratch.[62]

As of 2020, audio deepfakes, as well as AI software capable of detecting deepfakes and cloning human voices after 5 seconds of listening time, also exist.[63][64][65][66][67][68] A mobile deepfake app, Impressions, was launched in March 2020. It was the first app for the creation of celebrity deepfake videos from mobile phones.[69][70]

Deepfake technology's ability to fabricate messages and actions of others can extend to deceased individuals. On 29 October 2020, Kim Kardashian posted a video featuring a hologram of her late father Robert Kardashian created by the company Kaleida, which used a combination of performance, motion tracking, SFX, VFX, and deepfake technologies to create the illusion.[71][72]

In 2020, a deepfake video of Joaquin Oliver, a victim of the Parkland shooting, was created as part of a gun safety campaign.
Oliver's parents partnered with the nonprofit Change the Ref and McCann Health to produce a video in which Oliver encourages people to support gun safety legislation and the politicians who back it.[73]

In 2022, a deepfake video of Elvis Presley was used on the program America's Got Talent 17.[74] A TV commercial used a deepfake video of Beatles member John Lennon, who was murdered in 1980.[75]

Deepfakes rely on a type of neural network called an autoencoder.[76] These consist of an encoder, which reduces an image to a lower-dimensional latent space, and a decoder, which reconstructs the image from the latent representation.[77] Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space.[citation needed] The latent representation contains key features about their facial features and body posture. This can then be decoded with a model trained specifically for the target. This means the target's detailed information will be superimposed on the underlying facial and body features of the original video, represented in the latent space.[citation needed]

A popular upgrade to this architecture attaches a generative adversarial network to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether or not the image is generated.[citation needed] This causes the generator to create images that mimic reality extremely well, as any defects would be caught by the discriminator.[78] Both algorithms improve constantly in a zero-sum game.
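The shared-encoder, per-identity-decoder structure described above can be sketched with plain linear maps. The weights below are random placeholders (a real pipeline learns them from many face images of each person), so this only illustrates the data flow of a face swap, not a working system:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM = 64, 8  # toy sizes; real models operate on image tensors

# One encoder shared across identities, plus one decoder per identity.
# All weights here are random stand-ins for learned parameters.
encoder = rng.standard_normal((LATENT_DIM, IMG_DIM))
decoder_a = rng.standard_normal((IMG_DIM, LATENT_DIM))  # "trained" on person A
decoder_b = rng.standard_normal((IMG_DIM, LATENT_DIM))  # "trained" on person B

frame_of_a = rng.standard_normal(IMG_DIM)  # stand-in for a video frame of A

# Encode A's frame into the latent space (pose/expression features),
# then decode with B's decoder: the swap imposes B's appearance on A's motion.
latent = encoder @ frame_of_a
swapped_frame = decoder_b @ latent
```

In a trained system the latent code captures pose and expression rather than identity, so decoding it with the target's decoder superimposes the target's appearance on the source's motion; the GAN refinement described above then sharpens these decoded frames until the discriminator can no longer flag them.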
This makes deepfakes difficult to combat as they are constantly evolving; any time a defect is determined, it can be corrected.[78]

Digital clones of professional actors have appeared in films before, and progress in deepfake technology is expected to further the accessibility and effectiveness of such clones.[79] The use of AI technology was a major issue in the 2023 SAG-AFTRA strike, as new techniques enabled the capability of generating and storing a digital likeness to use in place of actors.[80]

Disney has improved their visual effects using high-resolution deepfake face-swapping technology.[81] Disney improved their technology through progressive training programmed to identify facial expressions, implementing a face-swapping feature, and iterating in order to stabilize and refine the output.[81] This high-resolution deepfake technology saves significant operational and production costs.[82] Disney's deepfake generation model can produce AI-generated media at a 1024 × 1024 resolution, as opposed to common models that produce media at a 256 × 256 resolution.[82] The technology allows Disney to de-age characters or revive deceased actors.[83] Similar technology was initially used by fans to unofficially insert faces into existing media, such as overlaying Harrison Ford's young face onto Han Solo's face in Solo: A Star Wars Story.[84] Disney used deepfakes for the characters of Princess Leia and Grand Moff Tarkin in Rogue One.[85][86]

The 2020 documentary Welcome to Chechnya used deepfake technology to obscure the identity of the people interviewed, so as to protect them from retaliation.[87]

Creative Artists Agency has developed a facility to capture the likeness of an actor "in a single day", to develop a digital clone of the actor which would be controlled by the actor or their estate alongside other personality rights.[88]

Companies which have used digital clones of professional actors in advertisements include Puma, Nike, and Procter & Gamble.[89]

Deepfakes allowed for the use of
David Beckham in a campaign, in which he appeared to speak in nine languages, to raise awareness of the fight against malaria.[90]

In the 2024 Indian Tamil science fiction action thriller The Greatest of All Time, the teenage version of Vijay's character Jeevan is portrayed by Ayaz Khan. Vijay's teenage face was then applied using AI deepfake technology.[91]

Deepfakes are also being used in education and media to create realistic videos and interactive content, which offer new ways to engage audiences. However, they also bring risks, especially for spreading false information, which has led to calls for responsible use and clear rules.

In March 2018, the multidisciplinary artist Joseph Ayerle published the video artwork Un'emozione per sempre 2.0 (English title: The Italian Game). The artist worked with deepfake technology to create an AI actor, a synthetic version of 1980s movie star Ornella Muti, traveling in time from 1978 to 2018. The Massachusetts Institute of Technology referred to this artwork in the study "Collective Wisdom".[92] The artist used Ornella Muti's time travel to explore generational reflections, while also investigating questions about the role of provocation in the world of art.[93] For the technical realization, Ayerle used scenes of photo model Kendall Jenner. The program replaced Jenner's face with an AI-calculated face of Ornella Muti. As a result, the AI actor has the face of the Italian actress Ornella Muti and the body of Kendall Jenner.

Deepfakes have been widely used in satire or to parody celebrities and politicians. The 2020 web series Sassy Justice, created by Trey Parker and Matt Stone, heavily features the use of deepfaked public figures to satirize current events and raise awareness of deepfake technology.[94]

Deepfakes can be used to generate blackmail materials that falsely incriminate a victim.
A report by the American Congressional Research Service warned that deepfakes could be used to blackmail elected officials or those with access to classified information for espionage or influence purposes.[95]

Alternatively, since the fakes cannot reliably be distinguished from genuine materials, victims of actual blackmail can now claim that the true artifacts are fakes, granting them plausible deniability. The effect is to void the credibility of existing blackmail materials, which erases loyalty to blackmailers and destroys the blackmailer's control. This phenomenon can be termed "blackmail inflation", since it "devalues" real blackmail, rendering it worthless.[96] Commodity GPU hardware and a small software program can be used to generate such blackmail content for any number of subjects in huge quantities, driving up the supply of fake blackmail content limitlessly and in a highly scalable fashion.[97]

On June 8, 2022,[98] Daniel Emmet, a former AGT contestant, teamed up with the AI startup[99][100] Metaphysic AI to create a hyperrealistic deepfake that made him appear as Simon Cowell. Cowell, notoriously known for severely critiquing contestants,[101] was seemingly on stage performing "You're The Inspiration" by Chicago.
Emmet sang on stage as an image of Simon Cowell emerged on the screen behind him in flawless synchronicity.[102] On August 30, 2022, Metaphysic AI had "deep-fake" versions of Simon Cowell, Howie Mandel and Terry Crews singing opera on stage.[103] On September 13, 2022, Metaphysic AI performed with a synthetic version of Elvis Presley for the finals of America's Got Talent.[104]

The MIT artificial intelligence project 15.ai has been used for content creation for multiple Internet fandoms, particularly on social media.[105][106][107]

In 2023 the bands ABBA and KISS partnered with Industrial Light & Magic and Pophouse Entertainment to develop deepfake avatars capable of performing virtual concerts.[108]

Fraudsters and scammers use deepfakes to trick people into fake investment schemes, financial fraud, cryptocurrency scams, sending money, and following endorsements. The likenesses of celebrities and politicians have been used for large-scale scams, as have those of private individuals, which are used in spearphishing attacks. According to the Better Business Bureau, deepfake scams are becoming more prevalent.[109] These scams are responsible for an estimated $12 billion in fraud losses globally.[110] According to a recent report, these numbers are expected to reach $40 billion over the next three years.[110]

Fake endorsements have misused the identities of celebrities like Taylor Swift,[111][109] Tom Hanks,[112] Oprah Winfrey,[113] and Elon Musk;[114] news anchors[115] like Gayle King[112] and Sally Bundock;[116] and politicians like Lee Hsien Loong[117] and Jim Chalmers.[118][119] Videos of them have appeared in online advertisements on YouTube, Facebook, and TikTok, which have policies against synthetic and manipulated media.[120][111][121] Ads running these videos are seen by millions of people.
A single Medicare fraud campaign had been viewed more than 195 million times across thousands of videos.[120][122] Deepfakes have been used for: a fake giveaway of Le Creuset cookware in exchange for a "shipping fee", with the products never delivered and hidden monthly charges applied;[111] weight-loss gummies that charge significantly more than advertised;[113] a fake iPhone giveaway;[111][121] and fraudulent get-rich-quick,[114][123] investment,[124] and cryptocurrency schemes.[117][125] Many ads pair AI voice cloning with "decontextualized video of the celebrity" to mimic authenticity. Others use a whole clip from a celebrity before moving to a different actor or voice.[120] Some scams may involve real-time deepfakes.[121]

Celebrities have been warning people about these fake endorsements and urging vigilance against them.[109][111][113] Celebrities are unlikely to file lawsuits against every person operating deepfake scams, as "finding and suing anonymous social media users is resource intensive," though cease and desist letters to social media companies do work in getting videos and ads taken down.[126]

Audio deepfakes have been used as part of social engineering scams, fooling people into thinking they are receiving instructions from a trusted individual.[127] In 2019, a U.K.-based energy firm's CEO was scammed over the phone when he was ordered to transfer €220,000 into a Hungarian bank account by an individual who reportedly used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive.[128][129]

As of 2023, the combination of advances in deepfake technology, which could clone an individual's voice from a recording of a few seconds to a minute, and new text generation tools enabled automated impersonation scams that target victims using a convincing digital clone of a friend or relative.[130]

Audio deepfakes can be used to mask a user's real identity.
In online gaming, for example, a player may want to choose a voice that sounds like their in-game character when speaking to other players. Those who are subject to harassment, such as women, children, and transgender people, can use these "voice skins" to hide their gender or age.[131]

In 2020, an internet meme emerged utilizing deepfakes to generate videos of people singing the chorus of "Baka Mitai" (ばかみたい), a song from the game Yakuza 0 in the video game series Like a Dragon. In the series, the melancholic song is sung by the player in a karaoke minigame. Most iterations of this meme use a 2017 video uploaded by user Dobbsyrules, who lip syncs the song, as a template.[132][133]

Deepfakes have been used to misrepresent well-known politicians in videos.

In 2017, deepfake pornography prominently surfaced on the Internet, particularly on Reddit.[156] As of 2019, many deepfakes on the internet featured pornography of female celebrities whose likeness is typically used without their consent.[157] A report published in October 2019 by Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.[158] In 2018, a Daisy Ridley deepfake first captured attention,[156] among others.[159][160][161] As of October 2019, most of the deepfake subjects on the internet were British and American actors.[157] However, around a quarter of the subjects were South Korean, the majority of whom were K-pop stars.[157][162]

In June 2019, a downloadable Windows and Linux application called DeepNude was released that used neural networks, specifically generative adversarial networks, to remove clothing from images of women. The app had both a paid and unpaid version, the paid version costing $50.[163][164] On 27 June the creators removed the application and refunded consumers.[165]

Female celebrities are often a main target of deepfake pornography.
In 2023, deepfake porn videos of Emma Watson and Scarlett Johansson, made with a face-swapping app, appeared online.[166] In 2024, deepfake porn images of Taylor Swift circulated online.[167] Academic studies have reported that women, LGBT people and people of colour (particularly activists, politicians and those questioning power) are at higher risk of being targets of deepfake pornography.[168]

Deepfakes have begun to see use on popular social media platforms, notably through Zao, a Chinese deepfake app that allows users to substitute their own faces for those of characters in scenes from films and television shows such as Romeo + Juliet and Game of Thrones.[169] The app originally faced scrutiny over its invasive user data and privacy policy, after which the company put out a statement claiming it would revise the policy.[17] In January 2020 Facebook announced that it was introducing new measures to counter this on its platforms.[170]

The Congressional Research Service cited unspecified evidence showing that foreign intelligence operatives used deepfakes to create social media accounts for the purpose of recruiting individuals with access to classified information.[95]

In 2021, realistic deepfake videos of actor Tom Cruise were released on TikTok, which went viral and garnered tens of millions of views. The deepfake videos featured an "artificial intelligence-generated doppelganger" of Cruise doing various activities such as teeing off at the golf course, showing off a coin trick, and biting into a lollipop. The creator of the clips, Belgian VFX artist Chris Umé,[171] said he first got interested in deepfakes in 2018 and saw their "creative potential".[172][173]

Deepfake photographs can be used to create sockpuppets, non-existent people, who are active both online and in traditional media.
A deepfake photograph appears to have been generated together with a legend for an apparently non-existent person named Oliver Taylor, whose persona was that of a university student in the United Kingdom. The Oliver Taylor persona submitted opinion pieces to several newspapers and was active in online media attacking a British legal academic and his wife as "terrorist sympathizers". The academic had drawn international attention in 2018 when he commenced a lawsuit in Israel against NSO, a surveillance company, on behalf of people in Mexico who alleged they were victims of NSO's phone hacking technology. Reuters could find only scant records for Oliver Taylor, and "his" university had no records for him. Many experts agreed that the profile photo is a deepfake. Several newspapers have not retracted articles attributed to him or removed them from their websites. It is feared that such techniques are a new battleground in disinformation.[174]

Collections of deepfake photographs of non-existent people on social networks have also been deployed as part of Israeli partisan propaganda. The Facebook page "Zionist Spring" featured photos of non-existent persons along with their "testimonies" purporting to explain why they had abandoned their left-leaning politics to embrace right-wing politics; the page also contained large numbers of posts from Prime Minister of Israel Benjamin Netanyahu and his son and from other Israeli right-wing sources. The photographs appear to have been generated by "human image synthesis" technology, computer software that takes data from photos of real people to produce a realistic composite image of a non-existent person. In many of the "testimonies", the reason given for embracing the political right was the shock of learning of alleged incitement to violence against the prime minister. Right-wing Israeli television broadcasters then broadcast the "testimonies" of these non-existent people based on the fact that they were being "shared" online.
The broadcasters aired these "testimonies" despite being unable to find such people, explaining "Why does the origin matter?" Other Facebook fake profiles, profiles of fictitious individuals, contained material that allegedly contained such incitement against the right-wing prime minister, in response to which the prime minister complained that there was a plot to murder him.[175][176]

Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not.[134] AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism.[134] Computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology.[177] Li predicted in October 2019 that genuine videos and deepfakes would become indistinguishable in as little as half a year, due to rapid advancement in artificial intelligence and computer graphics.[177] Former Google fraud czar Shuman Ghosemajumder has called deepfakes an area of "societal concern" and said that they will inevitably evolve to a point at which they can be generated automatically, and that an individual could use that technology to produce millions of deepfake videos.[178]

A primary pitfall is that humanity could fall into an age in which it can no longer be determined whether a medium's content corresponds to the truth.[134][179] Deepfakes are one of a number of tools for disinformation attacks, creating doubt and undermining trust.
They have the potential to interfere with democratic functions in societies, such as identifying collective agendas, debating issues, informing decisions, and solving problems through the exercise of political will.[180] People may also start to dismiss real events as fake.[131]

Deepfakes possess the ability to damage individual entities tremendously.[181] This is because deepfakes are often targeted at one individual, and/or their relations to others, in hopes of creating a narrative powerful enough to influence public opinion or beliefs. This can be done through deepfake voice phishing, which manipulates audio to create fake phone calls or conversations.[181] Another method of deepfake use is fabricated private remarks, which manipulate media to convey individuals voicing damaging comments.[181] The quality of a negative video or audio clip does not need to be high; as long as someone's likeness and actions are recognizable, a deepfake can hurt their reputation.[131]

In September 2020 Microsoft announced that it was developing a deepfake detection software tool.[182]

Beyond public-figure defamation, deepfakes are increasingly used in K-12 and higher-education settings to target peers with false portrayals that extend well beyond static images or text messages.[183] Students have reported encountering synthetic videos depicting classmates in non-consensual intimate acts and experiencing academic or social disruption as a result. Schools and universities frequently find themselves ill-equipped to address these incidents, as most anti-bullying policies predate AI-generated media and focus narrowly on on-campus behavior or student-owned devices. When deepfake harassment originates off-campus, via private social networks, encrypted messaging, or servers hosted overseas, administrators struggle to determine jurisdiction, balance free-speech concerns, and coordinate with law enforcement.
To close these gaps, legal scholars recommend updating both state and federal anti-bullying statutes to explicitly include "non-consensual synthetic imagery" alongside traditional definitions of harassment. Concurrently, integrating AI-literacy curricula covering deepfake detection techniques, ethical considerations, and responsible digital citizenship can empower students to recognize, report, and resist deepfake bullying before it escalates.[184]

With the rise of accessible deepfake tools, ranging from open-source platforms like DeepFaceLab and Faceswap to user-friendly mobile apps such as Zao and Impressions, malicious actors have begun weaponizing synthetic media as a novel form of cyberbullying.[185] Termed "deepfake bullying", these attacks can take many forms: perpetrators may superimpose a student's face onto videos of underage drinking or drug use, create fabricated nude images without consent, or manufacture scenes of criminal behavior to falsely implicate victims. Because modern generative adversarial networks can produce hyper-realistic results, targets often cannot distinguish real from fake, causing intense embarrassment and panic when content first appears. Once released, deepfake content tends to circulate rapidly across messaging apps, social feeds, and even private chat groups, where it can be downloaded, re-edited, and recirculated, subjecting victims to repeated retraumatization and a persistent fear of new exposures. Psychological assessments of affected students report elevated levels of anxiety, depression, and social withdrawal; some victims have withdrawn from school activities or changed schools entirely to escape ongoing harassment.
In response, educators, school counselors, and child-psychology experts are calling for explicit policies that define deepfake bullying as a disciplinary offense, mandate immediate content takedowns, and require trauma-informed support structures, such as specialized counseling services, peer-support hotlines, and restorative-justice circles, to help victims regain a sense of safety and agency.[186]

Detecting fake audio is a highly complex task that requires careful attention to the audio signal. In deep learning approaches, preprocessing of the feature design and masking augmentation have proven effective in improving performance.[187]

Most of the academic research surrounding deepfakes focuses on the detection of deepfake videos.[188] One approach to deepfake detection is to use algorithms to recognize patterns and pick up subtle inconsistencies that arise in deepfake videos.[188] For example, researchers have developed automatic systems that examine videos for errors such as irregular blinking patterns and inconsistencies in lighting.[189][15] This approach has been criticized because deepfake detection is characterized by a "moving goal post": the production of deepfakes continues to change and improve as the algorithms to detect them improve.[188] In order to assess the most effective algorithms for detecting deepfakes, a coalition of leading technology companies hosted the Deepfake Detection Challenge to accelerate the technology for identifying manipulated content.[190] The winning model of the Deepfake Detection Challenge was 65% accurate on the holdout set of 4,000 videos.[191] A team at the Massachusetts Institute of Technology published a paper in December 2021 demonstrating that ordinary humans are 69–72% accurate at identifying a random sample of 50 of these videos.[192]

A team at the University of Buffalo published a paper in October 2020 outlining their technique of using reflections of light in the eyes of those depicted to spot deepfakes with a
high rate of success, even without the use of an AI detection tool, at least for the time being.[193]

In the case of well-documented individuals such as political leaders, algorithms have been developed to distinguish identity-based features such as patterns of facial, gestural, and vocal mannerisms and detect deepfake impersonators.[194]

Another team, led by Wael AbdAlmageed at the Visual Intelligence and Multimedia Analytics Laboratory (VIMAL) of the Information Sciences Institute at the University of Southern California, developed two generations[195][196] of deepfake detectors based on convolutional neural networks. The first generation[195] used recurrent neural networks to spot spatio-temporal inconsistencies and identify visual artifacts left by the deepfake generation process. The algorithm achieved 96% accuracy on FaceForensics++, the only large-scale deepfake benchmark available at that time. The second generation[196] used end-to-end deep networks to differentiate between artifacts and high-level semantic facial information using two-branch networks. One branch propagates colour information, while the other branch suppresses facial content and amplifies low-level frequencies using a Laplacian of Gaussian (LoG) filter. Further, they included a new loss function that learns a compact representation of bona fide faces while dispersing the representations (i.e. features) of deepfakes.
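The low-level frequency amplification performed by such a branch can be pictured with a Laplacian of Gaussian filter, which responds strongly to sharp local transitions (where blending artifacts tend to concentrate) and ignores smooth image regions. Below is a minimal numpy sketch; the kernel size and sigma are illustrative assumptions, not the actual parameters of the VIMAL detector:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian of Gaussian (LoG) kernel, normalised to zero sum
    so that flat (constant) image regions produce zero response.
    size and sigma are illustrative choices, not VIMAL's parameters."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def filter2d(img, kernel):
    """Naive 'same'-size 2D filtering with edge padding (the LoG kernel is
    symmetric, so correlation and convolution coincide)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Smooth content is suppressed, while a sharp boundary is amplified:
flat = np.full((16, 16), 0.5)            # featureless region
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0                        # sharp vertical transition
flat_response = np.abs(filter2d(flat, log_kernel())).max()
edge_response = np.abs(filter2d(edge, log_kernel())).max()
```

Because the kernel sums to zero, a featureless region yields a near-zero response while a sharp boundary yields a strong one; suppressing semantic content while keeping such high-frequency residue is the property the frequency branch of a two-branch detector exploits.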
VIMAL's approach showed state-of-the-art performance on the FaceForensics++ and Celeb-DF benchmarks, and on March 16, 2022 (the same day the deepfake was released), was used to identify the deepfake of Volodymyr Zelensky out of the box, without any retraining or knowledge of the algorithm with which the deepfake was created.[citation needed]

Other techniques suggest that blockchain could be used to verify the source of the media.[197] For instance, a video might have to be verified through the ledger before it is shown on social media platforms.[197] With this technology, only videos from trusted sources would be approved, decreasing the spread of possibly harmful deepfake media.[197]

Digitally signing all video and imagery by cameras and video cameras, including smartphone cameras, has been suggested as a way to fight deepfakes.[198] That would allow every photograph or video to be traced back to its original owner, which could also be used to pursue dissidents.[198]

One easy way to uncover deepfake video calls is to ask the caller to turn sideways.[199]

Henry Ajder, who works for Deeptrace, a company that detects deepfakes, says there are several ways to protect against deepfakes in the workplace. Semantic passwords or secret questions can be used when holding important conversations. Voice authentication and other biometric security features should be kept up to date.
Employers should also educate employees about deepfakes.[131]

Due to the capability of deepfakes to fool viewers and believably mimic a person, research has indicated that the concept of truth through observation cannot be fully relied on.[200] Additionally, literacy of the technology among populations could be called into question due to the relatively recent success of convincing deepfakes.[201] Combined with the increasing ease of access to the technology, this has led to concern among some experts that some societies are not prepared to interact with deepfakes organically without potential consequences from sharing misinformation and disinformation.[202] Media literacy has been considered as a potential counter to "prime" a viewer to identify a deepfake when they encounter one organically, by engendering critical thinking.[203] While media literacy education can have mixed results in the overall success of detecting deepfakes,[204] research has indicated that critical thinking and a skeptical outlook toward a presented piece of media are effective in helping an individual determine whether something is a deepfake.[205][206] Media literacy frameworks promote critical analysis of media and of the motivations behind the presentation of the associated content. Media literacy shows promise as a potential cognitive countermeasure when interacting with malicious deepfakes.[207]

In March 2024, a video clip was released by Buckingham Palace in which Kate Middleton announced that she had cancer and was undergoing chemotherapy. However, the clip fuelled rumours that the woman in it was an AI deepfake.[208] UCLA race director Johnathan Perkins doubted she had cancer, and further speculated that she could be in critical condition or dead.[209]

Recently, the use of deepfakes has inspired research on their capabilities and effects when used in disinformation campaigns.
This capability has raised concerns, partly due to the potential of deepfakes to circumvent a person's skepticism and influence their views on an issue.[210][179] Due to the continued advancement in technology that improves the deceptive capabilities of deepfakes, some scholars believe that deepfakes could pose a significant threat to democratic societies.[211] Studies have investigated the effects of political deepfakes.[210][211][179] In two separate studies focusing on Dutch participants, it was found that deepfakes have varying effects on an audience. As a tool of disinformation, deepfakes did not necessarily produce stronger reactions or shifts in viewpoints than traditional textual disinformation.[210] However, deepfakes did produce a reassuring effect on individuals who held preconceived notions that aligned with the viewpoint promoted by the deepfake disinformation in the study.[210] Additionally, deepfakes are effective when designed to target a specific demographic segment related to a particular issue.[211] "Microtargeting" involves understanding the nuanced political issues of a specific demographic in order to create a targeted deepfake, which is then used to connect with and influence the viewpoint of that demographic. The researchers found targeted deepfakes to be notably effective.[211] Research has also found that the political effects of deepfakes are not necessarily straightforward or assured.
Researchers in the United Kingdom found that deepfake political disinformation does not have a guaranteed effect on populations, beyond indications that it may sow distrust or uncertainty in the source that provides the deepfake.[179] The implications of distrust in sources led the researchers to conclude that deepfakes may have an outsized effect in a "low-trust" information environment where public institutions are not trusted by the public.[179]

Across the world, there are key instances where deepfakes have been used to misrepresent well-known politicians and other public figures.

Twitter (later X) has taken active measures to handle synthetic and manipulated media on its platform. To prevent disinformation from spreading, Twitter places a notice on tweets that contain manipulated media and/or deepfakes, signalling to viewers that the media is manipulated.[237] A warning also appears to users who plan on retweeting, liking, or engaging with the tweet.[237] Twitter also works to provide users, next to a tweet containing manipulated or synthetic media, a link to a Twitter Moment or credible news article on the related topic as a debunking action.[237] Twitter can also remove any tweets containing deepfakes or manipulated media that may pose a harm to users' safety.[237] To improve its detection of deepfakes and manipulated media, Twitter asked users interested in partnering with it on deepfake detection solutions to fill out a form.[238]

In August 2024, the secretaries of state of Minnesota, Pennsylvania, Washington, Michigan and New Mexico penned an open letter to X owner Elon Musk urging modifications to its AI chatbot Grok's text-to-video generator, added that month, stating that it had disseminated election misinformation.[239][240][241]

Facebook has made efforts to encourage the creation of deepfakes in order to develop state-of-the-art deepfake detection
software. Facebook was the prominent partner in hosting the Deepfake Detection Challenge (DFDC), held in December 2019, with 2,114 participants who generated more than 35,000 models.[242] The top-performing models with the highest detection accuracy were analyzed for similarities and differences; these findings are areas of interest for further research to improve and refine deepfake detection models.[242] Facebook has also detailed that the platform will take down media generated with artificial intelligence used to alter an individual's speech.[243] However, media that has been edited to alter the order or context of words in one's message would remain on the site, but be labeled as false, since it was not generated by artificial intelligence.[243]

On 31 January 2018, Gfycat began removing all deepfakes from its site.[244][245] On Reddit, the r/deepfakes subreddit was banned on 7 February 2018, due to the policy violation of "involuntary pornography".[246][247][248][249][250] In the same month, representatives from Twitter stated that they would suspend accounts suspected of posting non-consensual deepfake content.[251] Chat site Discord has taken action against deepfakes in the past,[252] and has taken a general stance against deepfakes.[245][253] In September 2018, Google added "involuntary synthetic pornographic imagery" to its ban list, allowing anyone to request the blocking of results showing their fake nudes.[254]

In February 2018, Pornhub said that it would ban deepfake videos on its website because they are considered "non consensual content" which violates its terms of service.[255] It also stated previously to Mashable that it would take down content flagged as deepfakes.[256] Writers from Motherboard reported that searching "deepfakes" on Pornhub still returned multiple recent deepfake videos.[255]

Facebook has previously stated that it would not remove deepfakes from its platforms.[257] The videos will instead be flagged as fake by third parties and
then given a lessened priority in users' feeds.[258] This response was prompted in June 2019 after a deepfake featuring a 2016 video of Mark Zuckerberg circulated on Facebook and Instagram.[257]

In May 2022, Google officially changed the terms of service for its Jupyter Notebook Colab service, banning its use for the purpose of creating deepfakes.[259] This came a few days after the publication of a VICE article claiming that "most deepfakes are non-consensual porn" and that the main use of the popular deepfake software DeepFaceLab (DFL), "the most important technology powering the vast majority of this generation of deepfakes", which was often used in combination with Google Colab, was to create non-consensual pornography. The article pointed out that, alongside many other well-known third-party uses of DFL, such as deepfakes commissioned by The Walt Disney Company, official music videos, and the web series Sassy Justice by the creators of South Park, DFL's GitHub page links to the deepfake porn website Mr.Deepfakes, and that participants in the DFL Discord server also participate on Mr.Deepfakes.[260]

In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate;[261] in 2019, the Deepfakes Accountability Act was introduced in the 116th United States Congress by Yvette Clarke, U.S. representative for New York's 9th congressional district.[262] Several states have also introduced legislation regarding deepfakes, including Virginia,[263] Texas, California, and New York;[264] charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, while more comprehensive statutes are urged.[254]

Among U.S. legislative efforts, on 3 October 2019, California governor Gavin Newsom signed into law Assembly Bills No. 602 and No. 730.[265][266] Assembly Bill No.
602 provides individuals targeted by sexually explicit deepfake content made without their consent with a cause of action against the content's creator.[265] Assembly Bill No. 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.[266] U.S. representative Yvette Clarke introduced H.R. 5586, the Deepfakes Accountability Act, into the 118th United States Congress on September 20, 2023, in an effort to protect national security from threats posed by deepfake technology.[267] U.S. representative María Salazar introduced H.R. 6943, the No AI Fraud Act, into the 118th United States Congress on January 10, 2024, to establish specific property rights over individual physicality, including voice.[268]

In November 2019, China announced that deepfakes and other synthetically faked footage should bear a clear notice about their fakeness starting in 2020. Failure to comply could be considered a crime, the Cyberspace Administration of China stated on its website.[269] The Chinese government appears to be reserving the right to prosecute both users and online video platforms failing to abide by the rules.[270] The Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the Provision on the Administration of Deep Synthesis Internet Information Service in November 2022.[271] China's updated Deep Synthesis Provisions (Administrative Provisions on Deep Synthesis in Internet-Based Information Services) went into effect in January 2023.[272]

In the United Kingdom, producers of deepfake material could be prosecuted for harassment, but deepfake production was not a specific crime[273] until 2023, when the Online Safety Act was passed, making deepfakes illegal; the UK planned to expand the Act's scope in 2024 to criminalize deepfakes created with "intention to cause distress".[274][275]

In Canada, in 2019, the Communications Security
Establishment released a report which said that deepfakes could be used to interfere in Canadian politics, particularly to discredit politicians and influence voters.[276][277] As a result, there are multiple ways for citizens in Canada to deal with deepfakes if they are targeted by them.[278] In February 2024, bill C-63 was tabled in the 44th Canadian Parliament in order to enact the Online Harms Act, which would amend the Criminal Code and other Acts. An earlier version of the bill, C-36, died with the dissolution of the 43rd Canadian Parliament in September 2021.[279][280]

In India, there are no direct laws or regulations on AI or deepfakes, but there are provisions under the Indian Penal Code and the Information Technology Act 2000/2008 which can be looked to for legal remedies, and the proposed Digital India Act will have a chapter on AI and deepfakes in particular, according to Minister of State Rajeev Chandrasekhar.[281]

In Europe, the European Union's 2024 Artificial Intelligence Act (AI Act) takes a risk-based approach to regulating AI systems, including deepfakes. It establishes categories of "unacceptable risk", "high risk", "specific/limited or transparency risk", and "minimal risk" to determine the level of regulatory obligations for AI providers and users. However, the lack of clear definitions for these risk categories in the context of deepfakes creates potential challenges for effective implementation. Legal scholars have raised concerns about the classification of deepfakes intended for political misinformation or the creation of non-consensual intimate imagery.
Debate exists over whether such uses should always be considered "high-risk" AI systems, which would lead to stricter regulatory requirements.[282] In August 2024, the Irish Data Protection Commission (DPC) launched court proceedings against X for its unlawful use of the personal data of over 60 million EU/EEA users to train its AI technologies, such as its chatbot Grok.[283] In 2016, the Defense Advanced Research Projects Agency (DARPA) launched the Media Forensics (MediFor) program, which was funded through 2020.[284] MediFor aimed at automatically spotting digital manipulation in images and videos, including deepfakes.[285][286] In the summer of 2018, MediFor held an event where individuals competed to create AI-generated videos, audio, and images, as well as automated tools to detect these deepfakes.[287] According to the MediFor program, it established a framework of three tiers of information - digital integrity, physical integrity and semantic integrity - to generate one integrity score in an effort to enable accurate detection of manipulated media.[288] In 2019, DARPA hosted a "proposers day" for the Semantic Forensics (SemaFor) program, where researchers worked to prevent the viral spread of AI-manipulated media.[289] DARPA and the Semantic Forensics program were also working together to detect AI-manipulated media through efforts to train computers to use common-sense, logical reasoning.[289] Built on MediFor's technologies, SemaFor's attribution algorithms infer whether digital media originates from a particular organization or individual, while characterization algorithms determine whether media was generated or manipulated for malicious purposes.[290] In March 2024, SemaFor published an analytic catalog that offers the public access to open-source resources developed under SemaFor.[291][292] The International Panel on the Information Environment was launched in 2023 as a consortium of over 250 scientists working to develop effective countermeasures to
deepfakes and other problems created by perverse incentives in organizations disseminating information via the Internet.[293]
https://en.wikipedia.org/wiki/Deepfake
In machine learning, diffusion models, also known as diffusion-based generative models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of two major components: the forward diffusion process and the reverse sampling process. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements that are distributed similarly to the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data.[1] A trained diffusion model can be sampled in many ways, with different efficiency and quality. There are various equivalent formalisms, including Markov chains, denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.[2] They are typically trained using variational inference.[3] The model responsible for denoising is typically called the "backbone". The backbone may be of any kind, but it is typically a U-net or a transformer. As of 2024, diffusion models are mainly used for computer vision tasks, including image denoising, inpainting, super-resolution, image generation, and video generation. These typically involve training a neural network to sequentially denoise images blurred with Gaussian noise.[1][4] The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise and applying the network iteratively to denoise the image. Diffusion-based image generators have seen widespread commercial interest, such as Stable Diffusion and DALL-E.
These models typically combine diffusion models with other models, such as text encoders and cross-attention modules, to allow text-conditioned generation.[5] Other than computer vision, diffusion models have also found applications in natural language processing[6][7] such as text generation[8][9] and summarization,[10] sound generation,[11] and reinforcement learning.[12][13] Diffusion models were introduced in 2015 as a method to train a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.[14] Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a "cloud" in this space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution N(0,I){\displaystyle {\mathcal {N}}(0,I)}. A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in "non-equilibrium" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution. The equilibrium distribution is the Gaussian distribution N(0,I){\displaystyle {\mathcal {N}}(0,I)}, with pdf ρ(x)∝e−12‖x‖2{\displaystyle \rho (x)\propto e^{-{\frac {1}{2}}\|x\|^{2}}}. This is just the Maxwell–Boltzmann distribution of particles in a potential well V(x)=12‖x‖2{\displaystyle V(x)={\frac {1}{2}}\|x\|^{2}} at temperature 1. The initial distribution, being very much out of equilibrium, would diffuse towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles were to undergo only gradient descent, they would all fall to the origin, collapsing the distribution.
The 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method byvariational inference.[3][15] To present the model, we need some notation. Aforward diffusion processstarts at some starting pointx0∼q{\displaystyle x_{0}\sim q}, whereq{\displaystyle q}is the probability distribution to be learned, then repeatedly adds noise to it byxt=1−βtxt−1+βtzt{\displaystyle x_{t}={\sqrt {1-\beta _{t}}}x_{t-1}+{\sqrt {\beta _{t}}}z_{t}}wherez1,...,zT{\displaystyle z_{1},...,z_{T}}are IID samples fromN(0,I){\displaystyle {\mathcal {N}}(0,I)}. This is designed so that for any starting distribution ofx0{\displaystyle x_{0}}, we havelimtxt|x0{\displaystyle \lim _{t}x_{t}|x_{0}}converging toN(0,I){\displaystyle {\mathcal {N}}(0,I)}. The entire diffusion process then satisfiesq(x0:T)=q(x0)q(x1|x0)⋯q(xT|xT−1)=q(x0)N(x1|α1x0,β1I)⋯N(xT|αTxT−1,βTI){\displaystyle q(x_{0:T})=q(x_{0})q(x_{1}|x_{0})\cdots q(x_{T}|x_{T-1})=q(x_{0}){\mathcal {N}}(x_{1}|{\sqrt {\alpha _{1}}}x_{0},\beta _{1}I)\cdots {\mathcal {N}}(x_{T}|{\sqrt {\alpha _{T}}}x_{T-1},\beta _{T}I)}orln⁡q(x0:T)=ln⁡q(x0)−∑t=1T12βt‖xt−1−βtxt−1‖2+C{\displaystyle \ln q(x_{0:T})=\ln q(x_{0})-\sum _{t=1}^{T}{\frac {1}{2\beta _{t}}}\|x_{t}-{\sqrt {1-\beta _{t}}}x_{t-1}\|^{2}+C}whereC{\displaystyle C}is a normalization constant and often omitted. In particular, we note thatx1:T|x0{\displaystyle x_{1:T}|x_{0}}is agaussian process, which affords us considerable freedom inreparameterization. 
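As a quick sanity check on the forward process, the following NumPy sketch (illustrative only; the linear β schedule and degenerate starting "dataset" are arbitrary choices, not from the DDPM paper) shows the chain washing out the starting distribution into something close to N(0, I):

```python
import numpy as np

# Forward noising chain x_t = sqrt(1 - b_t) x_{t-1} + sqrt(b_t) z_t.
def forward_diffusion(x0, betas, rng):
    x = x0
    for beta in betas:
        z = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * z
    return x

rng = np.random.default_rng(0)
x0 = np.ones(10_000)                   # degenerate "dataset": every point at 1
betas = np.linspace(1e-4, 0.02, 1000)  # an illustrative linear schedule
xT = forward_diffusion(x0, betas, rng)
# After many steps the marginal should be close to N(0, 1):
print(round(xT.mean(), 2), round(xT.var(), 2))
```

Whatever the starting distribution, the empirical mean drifts to roughly 0 and the variance to roughly 1, as the convergence statement above predicts.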
For example, by standard manipulation with gaussian process,xt|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}xt−1|xt,x0∼N(μ~t(xt,x0),σ~t2I){\displaystyle x_{t-1}|x_{t},x_{0}\sim {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},x_{0}),{\tilde {\sigma }}_{t}^{2}I)}In particular, notice that for larget{\displaystyle t}, the variablext|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}converges toN(0,I){\displaystyle {\mathcal {N}}(0,I)}. That is, after a long enough diffusion process, we end up with somexT{\displaystyle x_{T}}that is very close toN(0,I){\displaystyle {\mathcal {N}}(0,I)}, with all traces of the originalx0∼q{\displaystyle x_{0}\sim q}gone. For example, sincext|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}we can samplext|x0{\displaystyle x_{t}|x_{0}}directly "in one step", instead of going through all the intermediate stepsx1,x2,...,xt−1{\displaystyle x_{1},x_{2},...,x_{t-1}}. We knowxt−1|x0{\textstyle x_{t-1}|x_{0}}is a gaussian, andxt|xt−1{\textstyle x_{t}|x_{t-1}}is another gaussian. We also know that these are independent. Thus we can perform a reparameterization:xt−1=α¯t−1x0+1−α¯t−1z{\displaystyle x_{t-1}={\sqrt {{\bar {\alpha }}_{t-1}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t-1}}}z}xt=αtxt−1+1−αtz′{\displaystyle x_{t}={\sqrt {\alpha _{t}}}x_{t-1}+{\sqrt {1-\alpha _{t}}}z'}wherez,z′{\textstyle z,z'}are IID gaussians. There are 5 variablesx0,xt−1,xt,z,z′{\textstyle x_{0},x_{t-1},x_{t},z,z'}and two linear equations. The two sources of randomness arez,z′{\textstyle z,z'}, which can be reparameterized by rotation, since the IID gaussian distribution is rotationally symmetric. 
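The "one step" shortcut can be checked numerically. The sketch below (illustrative; the constant β schedule and point-mass data are arbitrary choices) compares the moments of the step-by-step chain with the closed-form sample:

```python
import numpy as np

rng = np.random.default_rng(1)
betas = np.full(50, 0.02)      # arbitrary constant schedule for the check
x0 = np.full(100_000, 3.0)

# Step-by-step chain: x_t = sqrt(1 - b_t) x_{t-1} + sqrt(b_t) z_t
x = x0.copy()
for b in betas:
    x = np.sqrt(1 - b) * x + np.sqrt(b) * rng.standard_normal(x.shape)

# One-step shortcut: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) z
abar = np.prod(1 - betas)
y = np.sqrt(abar) * x0 + np.sqrt(1 - abar) * rng.standard_normal(x0.shape)

# Both should have the same mean and variance, up to Monte Carlo error.
print(round(x.mean(), 2), round(y.mean(), 2), round(x.var(), 2), round(y.var(), 2))
```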
By plugging in the equations, we can solve for the first reparameterization:xt=α¯tx0+αt−α¯tz+1−αtz′⏟=σtz″{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\underbrace {{\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}z+{\sqrt {1-\alpha _{t}}}z'} _{=\sigma _{t}z''}}wherez″{\textstyle z''}is a gaussian with mean zero and variance one. To find the second one, we complete the rotational matrix:[z″z‴]=[αt−α¯tσtβtσt??][zz′]{\displaystyle {\begin{bmatrix}z''\\z'''\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sigma _{t}}}&{\frac {\sqrt {\beta _{t}}}{\sigma _{t}}}\\?&?\end{bmatrix}}{\begin{bmatrix}z\\z'\end{bmatrix}}} Since rotational matrices are all of the form[cos⁡θsin⁡θ−sin⁡θcos⁡θ]{\textstyle {\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}}, we know the matrix must be[z″z‴]=[αt−α¯tσtβtσt−βtσtαt−α¯tσt][zz′]{\displaystyle {\begin{bmatrix}z''\\z'''\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sigma _{t}}}&{\frac {\sqrt {\beta _{t}}}{\sigma _{t}}}\\-{\frac {\sqrt {\beta _{t}}}{\sigma _{t}}}&{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sigma _{t}}}\end{bmatrix}}{\begin{bmatrix}z\\z'\end{bmatrix}}}and since the inverse of rotational matrix is its transpose,[zz′]=[αt−α¯tσt−βtσtβtσtαt−α¯tσt][z″z‴]{\displaystyle {\begin{bmatrix}z\\z'\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sigma _{t}}}&-{\frac {\sqrt {\beta _{t}}}{\sigma _{t}}}\\{\frac {\sqrt {\beta _{t}}}{\sigma _{t}}}&{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sigma _{t}}}\end{bmatrix}}{\begin{bmatrix}z''\\z'''\end{bmatrix}}} Plugging back, and simplifying, we havext=α¯tx0+σtz″{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}z''}xt−1=μ~t(xt,x0)−σ~tz‴{\displaystyle x_{t-1}={\tilde {\mu }}_{t}(x_{t},x_{0})-{\tilde {\sigma }}_{t}z'''} The key idea of DDPM is to use a neural network parametrized byθ{\displaystyle \theta }. 
The network takes in two argumentsxt,t{\displaystyle x_{t},t}, and outputs a vectorμθ(xt,t){\displaystyle \mu _{\theta }(x_{t},t)}and a matrixΣθ(xt,t){\displaystyle \Sigma _{\theta }(x_{t},t)}, such that each step in the forward diffusion process can be approximately undone byxt−1∼N(μθ(xt,t),Σθ(xt,t)){\displaystyle x_{t-1}\sim {\mathcal {N}}(\mu _{\theta }(x_{t},t),\Sigma _{\theta }(x_{t},t))}. This then gives us a backward diffusion processpθ{\displaystyle p_{\theta }}defined bypθ(xT)=N(xT|0,I){\displaystyle p_{\theta }(x_{T})={\mathcal {N}}(x_{T}|0,I)}pθ(xt−1|xt)=N(xt−1|μθ(xt,t),Σθ(xt,t)){\displaystyle p_{\theta }(x_{t-1}|x_{t})={\mathcal {N}}(x_{t-1}|\mu _{\theta }(x_{t},t),\Sigma _{\theta }(x_{t},t))}The goal now is to learn the parameters such thatpθ(x0){\displaystyle p_{\theta }(x_{0})}is as close toq(x0){\displaystyle q(x_{0})}as possible. To do that, we usemaximum likelihood estimationwith variational inference. TheELBO inequalitystates thatln⁡pθ(x0)≥Ex1:T∼q(⋅|x0)[ln⁡pθ(x0:T)−ln⁡q(x1:T|x0)]{\displaystyle \ln p_{\theta }(x_{0})\geq E_{x_{1:T}\sim q(\cdot |x_{0})}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}, and taking one more expectation, we getEx0∼q[ln⁡pθ(x0)]≥Ex0:T∼q[ln⁡pθ(x0:T)−ln⁡q(x1:T|x0)]{\displaystyle E_{x_{0}\sim q}[\ln p_{\theta }(x_{0})]\geq E_{x_{0:T}\sim q}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}We see that maximizing the quantity on the right would give us a lower bound on the likelihood of observed data. This allows us to perform variational inference. Define the loss functionL(θ):=−Ex0:T∼q[ln⁡pθ(x0:T)−ln⁡q(x1:T|x0)]{\displaystyle L(\theta ):=-E_{x_{0:T}\sim q}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}and now the goal is to minimize the loss by stochastic gradient descent. 
The expression may be simplified to[16]L(θ)=∑t=1TExt−1,xt∼q[−ln⁡pθ(xt−1|xt)]+Ex0∼q[DKL(q(xT|x0)‖pθ(xT))]+C{\displaystyle L(\theta )=\sum _{t=1}^{T}E_{x_{t-1},x_{t}\sim q}[-\ln p_{\theta }(x_{t-1}|x_{t})]+E_{x_{0}\sim q}[D_{KL}(q(x_{T}|x_{0})\|p_{\theta }(x_{T}))]+C}whereC{\displaystyle C}does not depend on the parameter, and thus can be ignored. Sincepθ(xT)=N(xT|0,I){\displaystyle p_{\theta }(x_{T})={\mathcal {N}}(x_{T}|0,I)}also does not depend on the parameter, the termEx0∼q[DKL(q(xT|x0)‖pθ(xT))]{\displaystyle E_{x_{0}\sim q}[D_{KL}(q(x_{T}|x_{0})\|p_{\theta }(x_{T}))]}can also be ignored. This leaves justL(θ)=∑t=1TLt{\displaystyle L(\theta )=\sum _{t=1}^{T}L_{t}}withLt=Ext−1,xt∼q[−ln⁡pθ(xt−1|xt)]{\displaystyle L_{t}=E_{x_{t-1},x_{t}\sim q}[-\ln p_{\theta }(x_{t-1}|x_{t})]}to be minimized. Sincext−1|xt,x0∼N(μ~t(xt,x0),σ~t2I){\displaystyle x_{t-1}|x_{t},x_{0}\sim {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},x_{0}),{\tilde {\sigma }}_{t}^{2}I)}, this suggests that we should useμθ(xt,t)=μ~t(xt,x0){\displaystyle \mu _{\theta }(x_{t},t)={\tilde {\mu }}_{t}(x_{t},x_{0})}; however, the network does not have access tox0{\displaystyle x_{0}}, and so it has to estimate it instead. Now, sincext|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}, we may writext=α¯tx0+σtz{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}z}, wherez{\displaystyle z}is some unknown gaussian noise. Now we see that estimatingx0{\displaystyle x_{0}}is equivalent to estimatingz{\displaystyle z}. 
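To make the loss concrete, here is an illustrative Monte Carlo evaluation of the simplified per-step objective for a toy 1-D Gaussian dataset. For Gaussian data the optimal noise predictor happens to be linear in x_t with a closed form for E[z | x_t]; that closed form is a property of this toy case, used here for checking, not a statement from the article:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: x0 ~ N(0, v0). For Gaussian data, E[z | x_t] is linear in x_t,
# giving the closed-form optimal noise predictor used below.
v0 = 4.0
betas = np.linspace(1e-4, 0.02, 1000)
abar = np.cumprod(1 - betas)
t = 300
ab, sig2 = abar[t], 1 - abar[t]        # abar_t and sigma_t^2

n = 200_000
x0 = np.sqrt(v0) * rng.standard_normal(n)
z = rng.standard_normal(n)
xt = np.sqrt(ab) * x0 + np.sqrt(sig2) * z          # one-step forward sample

eps_opt = np.sqrt(sig2) * xt / (ab * v0 + sig2)    # E[z | x_t] for this toy case
loss_opt = np.mean((eps_opt - z) ** 2)             # Monte Carlo estimate of the loss
loss_zero = np.mean(z ** 2)                        # trivial predictor "always 0"

theory = 1 - sig2 / (ab * v0 + sig2)               # analytic value at the optimum
print(loss_opt < loss_zero, abs(loss_opt - theory) < 0.02)
```

In training a real model, `eps_opt` would be replaced by a neural network ε_θ(x_t, t), and this Monte Carlo average is what stochastic gradient descent minimizes.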
Therefore, let the network output a noise vector ϵθ(xt,t){\displaystyle \epsilon _{\theta }(x_{t},t)}, and let it predict μθ(xt,t)=μ~t(xt,xt−σtϵθ(xt,t)α¯t)=xt−ϵθ(xt,t)βt/σtαt{\displaystyle \mu _{\theta }(x_{t},t)={\tilde {\mu }}_{t}\left(x_{t},{\frac {x_{t}-\sigma _{t}\epsilon _{\theta }(x_{t},t)}{\sqrt {{\bar {\alpha }}_{t}}}}\right)={\frac {x_{t}-\epsilon _{\theta }(x_{t},t)\beta _{t}/\sigma _{t}}{\sqrt {\alpha _{t}}}}} It remains to design Σθ(xt,t){\displaystyle \Sigma _{\theta }(x_{t},t)}. The DDPM paper suggested not learning it (since it resulted in "unstable training and poorer sample quality"), but fixing it at some value Σθ(xt,t)=ζt2I{\displaystyle \Sigma _{\theta }(x_{t},t)=\zeta _{t}^{2}I}, where either ζt2=βtorσ~t2{\displaystyle \zeta _{t}^{2}=\beta _{t}{\text{ or }}{\tilde {\sigma }}_{t}^{2}} yielded similar performance. With this, the loss simplifies to Lt=βt22αtσt2ζt2Ex0∼q;z∼N(0,I)[‖ϵθ(xt,t)−z‖2]+C{\displaystyle L_{t}={\frac {\beta _{t}^{2}}{2\alpha _{t}\sigma _{t}^{2}\zeta _{t}^{2}}}E_{x_{0}\sim q;z\sim {\mathcal {N}}(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]+C}, which may be minimized by stochastic gradient descent. The paper noted empirically that an even simpler loss function Lsimple,t=Ex0∼q;z∼N(0,I)[‖ϵθ(xt,t)−z‖2]{\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim {\mathcal {N}}(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]} resulted in better models. After a noise prediction network is trained, it can be used for generating data points in the original distribution in a loop: starting from pure noise xT∼N(0,I){\displaystyle x_{T}\sim {\mathcal {N}}(0,I)}, repeatedly sample xt−1∼N(μθ(xt,t),ζt2I){\displaystyle x_{t-1}\sim {\mathcal {N}}(\mu _{\theta }(x_{t},t),\zeta _{t}^{2}I)} for t=T,…,1{\displaystyle t=T,\dots ,1}, then return x0{\displaystyle x_{0}}. The score-based generative model is another formulation of diffusion modelling. Such models are also called noise conditional score networks (NCSN) or score-matching with Langevin dynamics (SMLD).[17][18][19][20] Consider the problem of image generation. Let x{\displaystyle x} represent an image, and let q(x){\displaystyle q(x)} be the probability distribution over all possible images.
If we have q(x){\displaystyle q(x)} itself, then we can say for certain how likely a certain image is. However, this is intractable in general. Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors: e.g. how much more likely is an image of a cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added? Consequently, we are actually quite uninterested in q(x){\displaystyle q(x)} itself, but rather in ∇xln⁡q(x){\displaystyle \nabla _{x}\ln q(x)}. This has two major consequences: first, the intractable normalization constant of q(x){\displaystyle q(x)} drops out, since a multiplicative constant does not affect the gradient of the log-density; second, we only ever compare the density at nearby points, which is exactly the local information a sampler needs. Let the score function be s(x):=∇xln⁡q(x){\displaystyle s(x):=\nabla _{x}\ln q(x)}; then consider what we can do with s(x){\displaystyle s(x)}. As it turns out, s(x){\displaystyle s(x)} allows us to sample from q(x){\displaystyle q(x)} using thermodynamics. Specifically, if we have a potential energy function U(x)=−ln⁡q(x){\displaystyle U(x)=-\ln q(x)}, and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution qU(x)∝e−U(x)/kBT=q(x)1/kBT{\displaystyle q_{U}(x)\propto e^{-U(x)/k_{B}T}=q(x)^{1/k_{B}T}}. At temperature kBT=1{\displaystyle k_{B}T=1}, the Boltzmann distribution is exactly q(x){\displaystyle q(x)}. Therefore, to model q(x){\displaystyle q(x)}, we may start with a particle sampled at any convenient distribution (such as the standard gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equation dxt=−∇xtU(xt)dt+2dWt{\displaystyle dx_{t}=-\nabla _{x_{t}}U(x_{t})dt+{\sqrt {2}}dW_{t}} and the Boltzmann distribution is, by the Fokker-Planck equation, the unique thermodynamic equilibrium. So no matter what distribution x0{\displaystyle x_{0}} has, the distribution of xt{\displaystyle x_{t}} converges in distribution to q{\displaystyle q} as t→∞{\displaystyle t\to \infty }.
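A minimal sketch of Langevin sampling, assuming a 1-D Gaussian target whose score is known in closed form (the √2 factor on the noise is the standard convention, consistent with the √(2D) form of the overdamped Langevin equation given later, that makes the Boltzmann distribution at k_B T = 1 stationary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Target q(x) = N(mu, s^2); its score is s(x) = d/dx ln q(x) = -(x - mu)/s^2.
mu, s = 2.0, 0.5
score = lambda x: -(x - mu) / s**2

# Overdamped Langevin dynamics at k_B T = 1: dx = score(x) dt + sqrt(2) dW.
x = rng.standard_normal(20_000)        # particle cloud, far from equilibrium
dt = 1e-3
for _ in range(5_000):
    x += score(x) * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)

# The cloud should have drifted to roughly N(mu, s^2).
print(round(x.mean(), 2), round(x.std(), 2))
```

In a diffusion model, the closed-form `score` is replaced by a learned network; the sampling mechanics are the same.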
Given a density q{\displaystyle q}, we wish to learn a score function approximation fθ≈∇ln⁡q{\displaystyle f_{\theta }\approx \nabla \ln q}. This is score matching.[21] Typically, score matching is formalized as minimizing the Fisher divergence Eq[‖fθ(x)−∇ln⁡q(x)‖2]{\displaystyle E_{q}[\|f_{\theta }(x)-\nabla \ln q(x)\|^{2}]}. By expanding the integral and performing an integration by parts, Eq[‖fθ(x)−∇ln⁡q(x)‖2]=Eq[‖fθ‖2+2∇⋅fθ]+C{\displaystyle E_{q}[\|f_{\theta }(x)-\nabla \ln q(x)\|^{2}]=E_{q}[\|f_{\theta }\|^{2}+2\nabla \cdot f_{\theta }]+C}, giving us a loss function, also known as the Hyvärinen scoring rule, that can be minimized by stochastic gradient descent. Suppose we need to model the distribution of images, and we want x0∼N(0,I){\displaystyle x_{0}\sim {\mathcal {N}}(0,I)}, a white-noise image. Now, most white-noise images do not look like real images, so q(x0)≈0{\displaystyle q(x_{0})\approx 0} for large swaths of x0∼N(0,I){\displaystyle x_{0}\sim {\mathcal {N}}(0,I)}. This presents a problem for learning the score function, because if there are no samples around a certain point, then we cannot learn the score function at that point. If we do not know the score function ∇xtln⁡q(xt){\displaystyle \nabla _{x_{t}}\ln q(x_{t})} at that point, then we cannot impose the time-evolution equation on a particle: dxt=∇xtln⁡q(xt)dt+2dWt{\displaystyle dx_{t}=\nabla _{x_{t}}\ln q(x_{t})dt+{\sqrt {2}}dW_{t}} To deal with this problem, we perform annealing. If q{\displaystyle q} is too different from a white-noise distribution, then progressively add noise until it is indistinguishable from one. That is, we perform a forward diffusion, then learn the score function, then use the score function to perform a backward diffusion.
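To illustrate why noising helps, the following sketch computes the exact score of a two-point dataset smoothed with Gaussian noise (a hypothetical toy distribution, chosen for illustration) and checks it against a numerical derivative of the log-density. After smoothing, the density and its score are defined everywhere, including between the data points:

```python
import numpy as np

s = 0.5  # amount of smoothing noise

# Two data points at -1 and +1; after adding N(0, s^2) noise the density is
# q_s(x) = 0.5 N(x; -1, s^2) + 0.5 N(x; +1, s^2), which has an exact score.
def log_q(x):
    return np.log(0.5 * np.exp(-(x + 1) ** 2 / (2 * s**2))
                  + 0.5 * np.exp(-(x - 1) ** 2 / (2 * s**2)))

def score(x):
    w = 1 / (1 + np.exp(-2 * x / s**2))    # posterior weight of the +1 mode
    return -(x - (2 * w - 1)) / s**2       # -(x - E[mode | x]) / s^2

# Check the closed form against a central-difference derivative of ln q_s.
xs = np.linspace(-2, 2, 9)
h = 1e-5
numeric = (log_q(xs + h) - log_q(xs - h)) / (2 * h)
err = np.max(np.abs(numeric - score(xs)))
print(err < 1e-5)
```

Without the smoothing noise, the log-density of the raw two-point dataset is undefined almost everywhere, so there would be nothing for a score model to match.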
Consider again the forward diffusion process, but this time in continuous time:xt=1−βtxt−1+βtzt{\displaystyle x_{t}={\sqrt {1-\beta _{t}}}x_{t-1}+{\sqrt {\beta _{t}}}z_{t}}By taking theβt→β(t)dt,dtzt→dWt{\displaystyle \beta _{t}\to \beta (t)dt,{\sqrt {dt}}z_{t}\to dW_{t}}limit, we obtain a continuous diffusion process, in the form of astochastic differential equation:dxt=−12β(t)xtdt+β(t)dWt{\displaystyle dx_{t}=-{\frac {1}{2}}\beta (t)x_{t}dt+{\sqrt {\beta (t)}}dW_{t}}whereWt{\displaystyle W_{t}}is aWiener process(multidimensional Brownian motion). Now, the equation is exactly a special case of theoverdamped Langevin equationdxt=−DkBT(∇xU)dt+2DdWt{\displaystyle dx_{t}=-{\frac {D}{k_{B}T}}(\nabla _{x}U)dt+{\sqrt {2D}}dW_{t}}whereD{\displaystyle D}is diffusion tensor,T{\displaystyle T}is temperature, andU{\displaystyle U}is potential energy field. If we substitute inD=12β(t)I,kBT=1,U=12‖x‖2{\displaystyle D={\frac {1}{2}}\beta (t)I,k_{B}T=1,U={\frac {1}{2}}\|x\|^{2}}, we recover the above equation. This explains why the phrase "Langevin dynamics" is sometimes used in diffusion models. Now the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according toq{\displaystyle q}at timet=0{\displaystyle t=0}, then after a long time, the cloud of particles would settle into the stable distribution ofN(0,I){\displaystyle {\mathcal {N}}(0,I)}. Letρt{\displaystyle \rho _{t}}be the density of the cloud of particles at timet{\displaystyle t}, then we haveρ0=q;ρT≈N(0,I){\displaystyle \rho _{0}=q;\quad \rho _{T}\approx {\mathcal {N}}(0,I)}and the goal is to somehow reverse the process, so that we can start at the end and diffuse back to the beginning. 
ByFokker-Planck equation, the density of the cloud evolves according to∂tln⁡ρt=12β(t)(n+(x+∇ln⁡ρt)⋅∇ln⁡ρt+Δln⁡ρt){\displaystyle \partial _{t}\ln \rho _{t}={\frac {1}{2}}\beta (t)\left(n+(x+\nabla \ln \rho _{t})\cdot \nabla \ln \rho _{t}+\Delta \ln \rho _{t}\right)}wheren{\displaystyle n}is the dimension of space, andΔ{\displaystyle \Delta }is theLaplace operator. Equivalently,∂tρt=12β(t)(∇⋅(xρt)+Δρt){\displaystyle \partial _{t}\rho _{t}={\frac {1}{2}}\beta (t)(\nabla \cdot (x\rho _{t})+\Delta \rho _{t})} If we have solvedρt{\displaystyle \rho _{t}}for timet∈[0,T]{\displaystyle t\in [0,T]}, then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with densityν0=ρT{\displaystyle \nu _{0}=\rho _{T}}, and let the particles in the cloud evolve according to dyt=12β(T−t)ytdt+β(T−t)∇ytln⁡ρT−t(yt)⏟score functiondt+β(T−t)dWt{\displaystyle dy_{t}={\frac {1}{2}}\beta (T-t)y_{t}dt+\beta (T-t)\underbrace {\nabla _{y_{t}}\ln \rho _{T-t}\left(y_{t}\right)} _{\text{score function }}dt+{\sqrt {\beta (T-t)}}dW_{t}} then by plugging into the Fokker-Planck equation, we find that∂tρT−t=∂tνt{\displaystyle \partial _{t}\rho _{T-t}=\partial _{t}\nu _{t}}. 
Thus this cloud of points is the original cloud, evolving backwards.[22] In the continuous limit, α¯t=(1−β1)⋯(1−βt)=e∑iln⁡(1−βi)→e−∫0tβ(t)dt{\displaystyle {\bar {\alpha }}_{t}=(1-\beta _{1})\cdots (1-\beta _{t})=e^{\sum _{i}\ln(1-\beta _{i})}\to e^{-\int _{0}^{t}\beta (t)dt}} and so xt|x0∼N(e−12∫0tβ(t)dtx0,(1−e−∫0tβ(t)dt)I){\displaystyle x_{t}|x_{0}\sim N\left(e^{-{\frac {1}{2}}\int _{0}^{t}\beta (t)dt}x_{0},\left(1-e^{-\int _{0}^{t}\beta (t)dt}\right)I\right)} In particular, we see that we can directly sample from any point in the continuous diffusion process without going through the intermediate steps, by first sampling x0∼q,z∼N(0,I){\displaystyle x_{0}\sim q,z\sim {\mathcal {N}}(0,I)}, then getting xt=e−12∫0tβ(t)dtx0+1−e−∫0tβ(t)dtz{\displaystyle x_{t}=e^{-{\frac {1}{2}}\int _{0}^{t}\beta (t)dt}x_{0}+{\sqrt {1-e^{-\int _{0}^{t}\beta (t)dt}}}z}. That is, we can quickly sample xt∼ρt{\displaystyle x_{t}\sim \rho _{t}} for any t≥0{\displaystyle t\geq 0}. Now, define a certain probability distribution γ{\displaystyle \gamma } over [0,∞){\displaystyle [0,\infty )}; then the score-matching loss function is defined as the expected Fisher divergence: L(θ)=Et∼γ,xt∼ρt[‖fθ(xt,t)‖2+2∇⋅fθ(xt,t)]{\displaystyle L(\theta )=E_{t\sim \gamma ,x_{t}\sim \rho _{t}}[\|f_{\theta }(x_{t},t)\|^{2}+2\nabla \cdot f_{\theta }(x_{t},t)]} After training, fθ(xt,t)≈∇ln⁡ρt{\displaystyle f_{\theta }(x_{t},t)\approx \nabla \ln \rho _{t}}, so we can perform the backwards diffusion process by first sampling xT∼N(0,I){\displaystyle x_{T}\sim {\mathcal {N}}(0,I)}, then integrating the SDE from t=T{\displaystyle t=T} to t=0{\displaystyle t=0}: xt−dt=xt+12β(t)xtdt+β(t)fθ(xt,t)dt+β(t)dWt{\displaystyle x_{t-dt}=x_{t}+{\frac {1}{2}}\beta (t)x_{t}dt+\beta (t)f_{\theta }(x_{t},t)dt+{\sqrt {\beta (t)}}dW_{t}} This may be done by any SDE integration method, such as the Euler–Maruyama method.
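Putting the pieces together, here is an illustrative Euler–Maruyama integration of the reverse SDE for a 1-D Gaussian data distribution. In this toy case the marginal ρ_t stays Gaussian under the forward process, so its score is known in closed form and no network needs to be trained (the constant β and all numerical values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data distribution q = N(m0, v0). Under the forward SDE the marginal stays
# Gaussian: rho_t = N(a_t m0, a_t^2 v0 + 1 - a_t^2), a_t = exp(-0.5 * beta * t),
# so the score of rho_t is exact and stands in for a trained network.
m0, v0 = 2.0, 0.25
beta = 1.0                     # constant beta(t), an arbitrary choice
T = 5.0

def score(x, t):
    a = np.exp(-0.5 * beta * t)
    var = a * a * v0 + 1 - a * a
    return -(x - a * m0) / var

# Reverse SDE by Euler-Maruyama, from t = T down to t = 0.
n, steps = 50_000, 2_000
dt = T / steps
x = rng.standard_normal(n)     # x_T is approximately N(0, 1)
for k in range(steps):
    t = T - k * dt
    drift = 0.5 * beta * x + beta * score(x, t)
    x = x + drift * dt + np.sqrt(beta * dt) * rng.standard_normal(n)

# The cloud should have diffused back to roughly N(m0, v0).
print(round(x.mean(), 2), round(x.var(), 2))
```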
The name "noise conditional score network" reflects what the network does: it estimates the score ∇ln⁡ρt{\displaystyle \nabla \ln \rho _{t}} of the noise-perturbed data distribution, conditional on the amount of noise added (equivalently, on the time t{\displaystyle t}). DDPM and score-based generative models are equivalent.[18][1][23] This means that a network trained using DDPM can be used as an NCSN, and vice versa. We know that xt|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}, so by Tweedie's formula, we have ∇xtln⁡q(xt)=1σt2(−xt+α¯tEq[x0|xt]){\displaystyle \nabla _{x_{t}}\ln q(x_{t})={\frac {1}{\sigma _{t}^{2}}}(-x_{t}+{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}])} As described previously, the DDPM loss function is ∑tLsimple,t{\displaystyle \sum _{t}L_{simple,t}} with Lsimple,t=Ex0∼q;z∼N(0,I)[‖ϵθ(xt,t)−z‖2]{\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim {\mathcal {N}}(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]} where xt=α¯tx0+σtz{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}z}. By a change of variables, Lsimple,t=Ex0,xt∼q[‖ϵθ(xt,t)−xt−α¯tx0σt‖2]=Ext∼q,x0∼q(⋅|xt)[‖ϵθ(xt,t)−xt−α¯tx0σt‖2]{\displaystyle L_{simple,t}=E_{x_{0},x_{t}\sim q}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]=E_{x_{t}\sim q,x_{0}\sim q(\cdot |x_{t})}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]} and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we have ϵθ(xt,t)=xt−α¯tEq[x0|xt]σt=−σt∇xtln⁡q(xt){\displaystyle \epsilon _{\theta }(x_{t},t)={\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}]}{\sigma _{t}}}=-\sigma _{t}\nabla _{x_{t}}\ln q(x_{t})} Thus, a score-based network predicts noise, and can be used for denoising.
Conversely, the continuous limitxt−1=xt−dt,βt=β(t)dt,ztdt=dWt{\displaystyle x_{t-1}=x_{t-dt},\beta _{t}=\beta (t)dt,z_{t}{\sqrt {dt}}=dW_{t}}of the backward equationxt−1=xtαt−βtσtαtϵθ(xt,t)+βtzt;zt∼N(0,I){\displaystyle x_{t-1}={\frac {x_{t}}{\sqrt {\alpha _{t}}}}-{\frac {\beta _{t}}{\sigma _{t}{\sqrt {\alpha _{t}}}}}\epsilon _{\theta }(x_{t},t)+{\sqrt {\beta _{t}}}z_{t};\quad z_{t}\sim {\mathcal {N}}(0,I)}gives us precisely the same equation as score-based diffusion:xt−dt=xt(1+β(t)dt/2)+β(t)∇xtln⁡q(xt)dt+β(t)dWt{\displaystyle x_{t-dt}=x_{t}(1+\beta (t)dt/2)+\beta (t)\nabla _{x_{t}}\ln q(x_{t})dt+{\sqrt {\beta (t)}}dW_{t}}Thus, at infinitesimal steps of DDPM, a denoising network performs score-based diffusion. In DDPM, the sequence of numbers0=σ0<σ1<⋯<σT<1{\displaystyle 0=\sigma _{0}<\sigma _{1}<\cdots <\sigma _{T}<1}is called a (discrete time)noise schedule. In general, consider a strictly increasing monotonic functionσ{\displaystyle \sigma }of typeR→(0,1){\displaystyle \mathbb {R} \to (0,1)}, such as thesigmoid function. In that case, a noise schedule is a sequence of real numbersλ1<λ2<⋯<λT{\displaystyle \lambda _{1}<\lambda _{2}<\cdots <\lambda _{T}}. It then defines a sequence of noisesσt:=σ(λt){\displaystyle \sigma _{t}:=\sigma (\lambda _{t})}, which then derives the other quantitiesβt=1−1−σt21−σt−12{\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}. In order to use arbitrary noise schedules, instead of training a noise prediction modelϵθ(xt,t){\displaystyle \epsilon _{\theta }(x_{t},t)}, one trainsϵθ(xt,σt){\displaystyle \epsilon _{\theta }(x_{t},\sigma _{t})}. Similarly, for the noise conditional score network, instead of trainingfθ(xt,t){\displaystyle f_{\theta }(x_{t},t)}, one trainsfθ(xt,σt){\displaystyle f_{\theta }(x_{t},\sigma _{t})}. 
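A sketch of such a schedule, assuming a sigmoid σ and a linearly spaced λ grid (both arbitrary illustrative choices), which also checks the identity relating the derived β_t back to the σ_t sequence:

```python
import numpy as np

# A noise schedule from a strictly increasing sigma: sigma_t = sigmoid(lambda_t)
# with lambda_t linearly spaced (both choices are arbitrary illustrations).
T = 1000
lams = np.linspace(-6.0, 6.0, T)
sigmas = 1 / (1 + np.exp(-lams))     # strictly increasing, inside (0, 1)

abar = 1 - sigmas**2                 # since sigma_t^2 = 1 - abar_t
betas = 1 - abar[1:] / abar[:-1]     # beta_t = 1 - (1 - sigma_t^2)/(1 - sigma_{t-1}^2)

# Every derived beta_t is a valid per-step noise rate, and the cumulative
# product of (1 - beta_t) reproduces the abar sequence.
print(betas.min() > 0, betas.max() < 1)
print(np.allclose(np.cumprod(1 - betas) * abar[0], abar[1:]))
```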
The original DDPM method for generating images is slow, since the forward diffusion process usually takes T∼1000{\displaystyle T\sim 1000} steps to make the distribution of xT{\displaystyle x_{T}} appear close to gaussian. However, this means the backward diffusion process also takes 1000 steps. Unlike the forward diffusion process, which can skip steps as xt|x0{\displaystyle x_{t}|x_{0}} is gaussian for all t≥1{\displaystyle t\geq 1}, the backward diffusion process does not allow skipping steps. For example, sampling xt−2|xt−1∼N(μθ(xt−1,t−1),Σθ(xt−1,t−1)){\displaystyle x_{t-2}|x_{t-1}\sim {\mathcal {N}}(\mu _{\theta }(x_{t-1},t-1),\Sigma _{\theta }(x_{t-1},t-1))} requires the model to first sample xt−1{\displaystyle x_{t-1}}. Attempting to directly sample xt−2|xt{\displaystyle x_{t-2}|x_{t}} would require us to marginalize out xt−1{\displaystyle x_{t-1}}, which is generally intractable. DDIM[24] is a method to take any model trained on the DDPM loss and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. If we generalize the Markovian chain of DDPM to the non-Markovian case, DDIM corresponds to the case where the reverse process has variance equal to 0. In other words, the reverse process (and also the forward process) is deterministic. When using fewer sampling steps, DDIM outperforms DDPM. In detail, the DDIM sampling method is as follows. Start with the forward diffusion process xt=α¯tx0+σtϵ{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}\epsilon }.
Then, during the backward denoising process, givenxt,ϵθ(xt,t){\displaystyle x_{t},\epsilon _{\theta }(x_{t},t)}, the original data is estimated asx0′=xt−σtϵθ(xt,t)α¯t{\displaystyle x_{0}'={\frac {x_{t}-\sigma _{t}\epsilon _{\theta }(x_{t},t)}{\sqrt {{\bar {\alpha }}_{t}}}}}then the backward diffusion process can jump to any step0≤s<t{\displaystyle 0\leq s<t}, and the next denoised sample isxs=α¯sx0′+σs2−(σs′)2ϵθ(xt,t)+σs′ϵ{\displaystyle x_{s}={\sqrt {{\bar {\alpha }}_{s}}}x_{0}'+{\sqrt {\sigma _{s}^{2}-(\sigma '_{s})^{2}}}\epsilon _{\theta }(x_{t},t)+\sigma _{s}'\epsilon }whereσs′{\displaystyle \sigma _{s}'}is an arbitrary real number within the range[0,σs]{\displaystyle [0,\sigma _{s}]}, andϵ∼N(0,I){\displaystyle \epsilon \sim {\mathcal {N}}(0,I)}is a newly sampled gaussian noise.[16]If allσs′=0{\displaystyle \sigma _{s}'=0}, then the backward process becomes deterministic, and this special case of DDIM is also called "DDIM". The original paper noted that when the process is deterministic, samples generated with only 20 steps are already very similar to ones generated with 1000 steps on the high-level. The original paper recommended defining a single "eta value"η∈[0,1]{\displaystyle \eta \in [0,1]}, such thatσs′=ησ~s{\displaystyle \sigma _{s}'=\eta {\tilde {\sigma }}_{s}}. Whenη=1{\displaystyle \eta =1}, this is the original DDPM. Whenη=0{\displaystyle \eta =0}, this is the fully deterministic DDIM. For intermediate values, the process interpolates between them. By the equivalence, the DDIM algorithm also applies for score-based diffusion models. Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. 
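The DDIM update can be exercised on a toy case where the exact noise predictor is available in closed form: if the data distribution is a point mass at c (a hypothetical construction for illustration, not from the paper), then ε(x_t, t) = (x_t − √ᾱ_t c)/σ_t exactly, and deterministic DDIM recovers x_0 = c even when most steps are skipped:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical toy: the data distribution is a point mass at c, so the exact
# noise predictor is eps(x_t, t) = (x_t - sqrt(abar_t) * c) / sigma_t.
c = 1.5
T = 1000
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1 - betas)
sigma = np.sqrt(1 - abar)

def eps_exact(x, t):
    return (x - np.sqrt(abar[t]) * c) / sigma[t]

x = rng.standard_normal(10_000)          # x_T ~ N(0, I)
ts = list(range(T - 1, 0, -50))          # visit only every 50th step
for t, s in zip(ts, ts[1:]):
    e = eps_exact(x, t)
    x0_hat = (x - sigma[t] * e) / np.sqrt(abar[t])   # estimate of x_0
    x = np.sqrt(abar[s]) * x0_hat + sigma[s] * e     # deterministic DDIM jump
e = eps_exact(x, ts[-1])
x = (x - sigma[ts[-1]] * e) / np.sqrt(abar[ts[-1]]) # final jump to x_0

print(np.allclose(x, c))   # the exact predictor recovers x_0 despite skipping
```

With a learned, imperfect ε_θ, skipping steps trades quality for speed; the toy merely shows that the update rule itself loses nothing.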
Then to generate an image, one can sample from the diffusion model, then use a decoder to decode it into an image.[25] The encoder-decoder pair is most often avariational autoencoder(VAE). [26]proposed various architectural improvements. For example, they proposed log-space interpolation during backward sampling. Instead of sampling fromxt−1∼N(μ~t(xt,x~0),σ~t2I){\displaystyle x_{t-1}\sim {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),{\tilde {\sigma }}_{t}^{2}I)}, they recommended sampling fromN(μ~t(xt,x~0),(σtvσ~t1−v)2I){\displaystyle {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),(\sigma _{t}^{v}{\tilde {\sigma }}_{t}^{1-v})^{2}I)}for a learned parameterv{\displaystyle v}. In thev-predictionformalism, the noising formulaxt=α¯tx0+1−α¯tϵt{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}\epsilon _{t}}is reparameterised by an angleϕt{\displaystyle \phi _{t}}such thatcos⁡ϕt=α¯t{\displaystyle \cos \phi _{t}={\sqrt {{\bar {\alpha }}_{t}}}}and a "velocity" defined bycos⁡ϕtϵt−sin⁡ϕtx0{\displaystyle \cos \phi _{t}\epsilon _{t}-\sin \phi _{t}x_{0}}. The network is trained to predict the velocityv^θ{\displaystyle {\hat {v}}_{\theta }}, and denoising is byxϕt−δ=cos⁡(δ)xϕt−sin⁡(δ)v^θ(xϕt){\displaystyle x_{\phi _{t}-\delta }=\cos(\delta )\;x_{\phi _{t}}-\sin(\delta ){\hat {v}}_{\theta }\;(x_{\phi _{t}})}.[27]This parameterization was found to improve performance, as the model can be trained to reach total noise (i.e.ϕt=90∘{\displaystyle \phi _{t}=90^{\circ }}) and then reverse it, whereas the standard parameterization never reaches total noise sinceα¯t>0{\displaystyle {\sqrt {{\bar {\alpha }}_{t}}}>0}is always true.[28] Classifier guidance was proposed in 2021 to improve class-conditional generation by using a classifier. 
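The v-prediction step quoted above is a rotation in the (x0, ε) plane, which can be verified directly (an illustrative numerical check using the exact velocity in place of a trained network):

```python
import numpy as np

rng = np.random.default_rng(5)
x0 = rng.standard_normal(1000)     # clean data
eps = rng.standard_normal(1000)    # noise

phi, delta = 1.1, 0.3
x_phi = np.cos(phi) * x0 + np.sin(phi) * eps   # noising, as a rotation by phi
v = np.cos(phi) * eps - np.sin(phi) * x0       # the "velocity" target

# One denoising step with the exact velocity rotates back by delta...
x_prev = np.cos(delta) * x_phi - np.sin(delta) * v

# ...which equals the noisy sample at the smaller angle phi - delta.
expected = np.cos(phi - delta) * x0 + np.sin(phi - delta) * eps
print(np.allclose(x_prev, expected))
```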
The original publication usedCLIP text encodersto improve text-conditional image generation.[29] Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description "black cat with red eyes". Generally, we want to sample from the distributionp(x|y){\displaystyle p(x|y)}, wherex{\displaystyle x}ranges over images, andy{\displaystyle y}ranges over classes of images (a description "black cat with red eyes" is just a very detailed class, and a class "cat" is just a very vague description). Taking the perspective of thenoisy channel model, we can understand the process as follows: To generate an imagex{\displaystyle x}conditional on descriptiony{\displaystyle y}, we imagine that the requester really had in mind an imagex{\displaystyle x}, but the image is passed through a noisy channel and came out garbled, asy{\displaystyle y}. Image generation is then nothing but inferring whichx{\displaystyle x}the requester had in mind. In other words, conditional image generation is simply "translating from a textual language into a pictorial language". Then, as in noisy-channel model, we use Bayes theorem to getp(x|y)∝p(y|x)p(x){\displaystyle p(x|y)\propto p(y|x)p(x)}in other words, if we have a good model of the space of all images, and a good image-to-class translator, we get a class-to-image translator "for free". In the equation for backward diffusion, the score∇ln⁡p(x){\displaystyle \nabla \ln p(x)}can be replaced by∇xln⁡p(x|y)=∇xln⁡p(x)⏟score+∇xln⁡p(y|x)⏟classifier guidance{\displaystyle \nabla _{x}\ln p(x|y)=\underbrace {\nabla _{x}\ln p(x)} _{\text{score}}+\underbrace {\nabla _{x}\ln p(y|x)} _{\text{classifier guidance}}}where∇xln⁡p(x){\displaystyle \nabla _{x}\ln p(x)}is the score function, trained as previously described, and∇xln⁡p(y|x){\displaystyle \nabla _{x}\ln p(y|x)}is found by using a differentiable image classifier. 
During the diffusion process, we need to condition on the time, giving∇xtln⁡p(xt|y,t)=∇xtln⁡p(y|xt,t)+∇xtln⁡p(xt|t){\displaystyle \nabla _{x_{t}}\ln p(x_{t}|y,t)=\nabla _{x_{t}}\ln p(y|x_{t},t)+\nabla _{x_{t}}\ln p(x_{t}|t)}Although, usually the classifier model does not depend on time, in which casep(y|xt,t)=p(y|xt){\displaystyle p(y|x_{t},t)=p(y|x_{t})}. Classifier guidance is defined for the gradient of score function, thus for score-based diffusion network, but as previously noted, score-based diffusion models are equivalent to denoising models byϵθ(xt,t)=−σt∇xtln⁡p(xt|t){\displaystyle \epsilon _{\theta }(x_{t},t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|t)}, and similarly,ϵθ(xt,y,t)=−σt∇xtln⁡p(xt|y,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|y,t)}. Therefore, classifier guidance works for denoising diffusion as well, using the modified noise prediction:[29]ϵθ(xt,y,t)=ϵθ(xt,t)−σt∇xtln⁡p(y|xt,t)⏟classifier guidance{\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\underbrace {\sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)} _{\text{classifier guidance}}} The classifier-guided diffusion model samples fromp(x|y){\displaystyle p(x|y)}, which is concentrated around themaximum a posteriori estimatearg⁡maxxp(x|y){\displaystyle \arg \max _{x}p(x|y)}. If we want to force the model to move towards themaximum likelihood estimatearg⁡maxxp(y|x){\displaystyle \arg \max _{x}p(y|x)}, we can usepγ(x|y)∝p(y|x)γp(x){\displaystyle p_{\gamma }(x|y)\propto p(y|x)^{\gamma }p(x)}whereγ>0{\displaystyle \gamma >0}is interpretable asinverse temperature. In the context of diffusion models, it is usually called theguidance scale. A highγ{\displaystyle \gamma }would force the model to sample from a distribution concentrated aroundarg⁡maxxp(y|x){\displaystyle \arg \max _{x}p(y|x)}. 
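The modified noise prediction can be sketched with a toy differentiable "classifier" whose log-probability gradient is available in closed form. Both the helper name and the Gaussian toy classifier are assumptions of this illustration.

```python
import numpy as np

def guided_noise(eps_pred, x_t, sigma_t, grad_log_classifier):
    # Classifier-guided noise prediction:
    # eps(x_t, y, t) = eps(x_t, t) - sigma_t * grad_x log p(y | x_t, t)
    return eps_pred - sigma_t * grad_log_classifier(x_t)

# Toy "classifier" with log p(y|x) = -||x - mu_y||^2 / 2, whose gradient
# is (mu_y - x); a real system would use a differentiable image classifier.
mu_y = np.array([1.0, 2.0])
grad_log_p = lambda x: mu_y - x
```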
This sometimes improves the quality of generated images.[29] This gives a modification to the previous equation:∇xln⁡pγ(x|y)=∇xln⁡p(x)+γ∇xln⁡p(y|x){\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(y|x)}For denoising models, it corresponds to[30]ϵθ(xt,y,t)=ϵθ(xt,t)−γσt∇xtln⁡p(y|xt,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\gamma \sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)} If we do not have a classifierp(y|x){\displaystyle p(y|x)}, we could still extract one out of the image model itself, a technique known as classifier-free guidance (CFG):[30]∇xln⁡pγ(x|y)=(1−γ)∇xln⁡p(x)+γ∇xln⁡p(x|y){\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=(1-\gamma )\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(x|y)}Such a model is usually trained by presenting it with both(x,y){\displaystyle (x,y)}and(x,None){\displaystyle (x,{\rm {None}})}, allowing it to model both∇xln⁡p(x|y){\displaystyle \nabla _{x}\ln p(x|y)}and∇xln⁡p(x){\displaystyle \nabla _{x}\ln p(x)}. Note that for CFG, the diffusion model cannot be merely a generative model of the entire data distribution∇xln⁡p(x){\displaystyle \nabla _{x}\ln p(x)}. It must be a conditional generative model∇xln⁡p(x|y){\displaystyle \nabla _{x}\ln p(x|y)}. For example, in stable diffusion, the diffusion backbone takes as input a noisy imagext{\displaystyle x_{t}}, a timet{\displaystyle t}, and a conditioning vectory{\displaystyle y}(such as a vector encoding a text prompt), and produces a noise predictionϵθ(xt,y,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)}.
For denoising models, it corresponds toϵθ(xt,y,t,γ)=ϵθ(xt,t)+γ(ϵθ(xt,y,t)−ϵθ(xt,t)){\displaystyle \epsilon _{\theta }(x_{t},y,t,\gamma )=\epsilon _{\theta }(x_{t},t)+\gamma (\epsilon _{\theta }(x_{t},y,t)-\epsilon _{\theta }(x_{t},t))}As sampled by DDIM, the algorithm can be written as[31]ϵuncond←ϵθ(xt,t)ϵcond←ϵθ(xt,t,c)ϵCFG←ϵuncond+γ(ϵcond−ϵuncond)x0←(xt−σtϵCFG)/1−σt2xs←1−σs2x0+σs2−(σs′)2ϵuncond+σs′ϵ{\displaystyle {\begin{aligned}\epsilon _{\text{uncond}}&\leftarrow \epsilon _{\theta }(x_{t},t)\\\epsilon _{\text{cond}}&\leftarrow \epsilon _{\theta }(x_{t},t,c)\\\epsilon _{\text{CFG}}&\leftarrow \epsilon _{\text{uncond}}+\gamma (\epsilon _{\text{cond}}-\epsilon _{\text{uncond}})\\x_{0}&\leftarrow (x_{t}-\sigma _{t}\epsilon _{\text{CFG}})/{\sqrt {1-\sigma _{t}^{2}}}\\x_{s}&\leftarrow {\sqrt {1-\sigma _{s}^{2}}}x_{0}+{\sqrt {\sigma _{s}^{2}-(\sigma _{s}')^{2}}}\epsilon _{\text{uncond}}+\sigma _{s}'\epsilon \\\end{aligned}}}A similar technique applies to language model sampling. Also, if the unconditional generationϵuncond←ϵθ(xt,t){\displaystyle \epsilon _{\text{uncond}}\leftarrow \epsilon _{\theta }(x_{t},t)}is replaced byϵneg cond←ϵθ(xt,t,c′){\displaystyle \epsilon _{\text{neg cond}}\leftarrow \epsilon _{\theta }(x_{t},t,c')}, then it results in negative prompting, which pushes the generation away fromc′{\displaystyle c'}condition.[32][33] Given a diffusion model, one may regard it either as a continuous process, and sample from it by integrating a SDE, or one can regard it as a discrete process, and sample from it by iterating the discrete steps. The choice of the "noise schedule"βt{\displaystyle \beta _{t}}can also affect the quality of samples. A noise schedule is a function that sends a natural number to a noise level:t↦βt,t∈{1,2,…},β∈(0,1){\displaystyle t\mapsto \beta _{t},\quad t\in \{1,2,\dots \},\beta \in (0,1)}A noise schedule is more often specified by a mapt↦σt{\displaystyle t\mapsto \sigma _{t}}. 
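The classifier-free-guidance sampling algorithm above translates almost directly into code. This sketch assumes the same variance-preserving convention as the pseudocode (√(1 − σ²) in place of √ᾱ); the function name is an assumption of the illustration.

```python
import numpy as np

def cfg_ddim_step(eps_uncond, eps_cond, x_t, sigma_t, sigma_s, gamma,
                  sigma_prime=0.0, noise=0.0):
    # Classifier-free guidance combined with a DDIM-style jump from t to s.
    eps_cfg = eps_uncond + gamma * (eps_cond - eps_uncond)
    # Estimate the clean sample from the guided noise prediction.
    x0 = (x_t - sigma_t * eps_cfg) / np.sqrt(1.0 - sigma_t**2)
    # Re-noise to level s, using the unconditional direction as in the pseudocode.
    return (np.sqrt(1.0 - sigma_s**2) * x0
            + np.sqrt(sigma_s**2 - sigma_prime**2) * eps_uncond
            + sigma_prime * noise)
```

Setting `gamma=1` reduces to ordinary conditional sampling, while `gamma=0` ignores the condition; replacing `eps_uncond` with a prediction conditioned on another prompt gives the negative-prompting variant mentioned above.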
The two definitions are equivalent, sinceβt=1−1−σt21−σt−12{\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}. In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with adjustable amount of noise). The case where one adds noise is sometimes called ancestral sampling.[34]One can interpolate between noise and no noise. The amount of noise is denotedη{\displaystyle \eta }("eta value") in the DDIM paper, withη=0{\displaystyle \eta =0}denoting no noise (as in deterministic DDIM), andη=1{\displaystyle \eta =1}denoting full noise (as in DDPM). In the perspective of SDE, one can use any of thenumerical integration methods, such asEuler–Maruyama method,Heun's method,linear multistep methods, etc. Just as in the discrete case, one can add an adjustable amount of noise during the integration.[35] A survey and comparison of samplers in the context of image generation is given in [36]. Notable variants include[37]Poisson flow generative model,[38]consistency model,[39]critically-damped Langevin diffusion,[40]GenPhys,[41]cold diffusion,[42]discrete diffusion,[43][44]etc. Abstractly speaking, the idea of a diffusion model is to take an unknown probability distribution (the distribution of natural-looking images), then progressively convert it to a known probability distribution (standard Gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function∇ln⁡pt{\displaystyle \nabla \ln p_{t}}. In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes areSDEs, though the forward process is integrable in closed-form, so it can be done at no computational cost. The backward process is not integrable in closed-form, so it must be integrated step-by-step by standard SDE solvers, which can be very expensive.
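The conversion between the two schedule definitions can be sketched as follows. The helper name is illustrative, and σ_0 = 0 (i.e. ᾱ_0 = 1) is assumed as the starting noise level.

```python
import numpy as np

def beta_from_sigma(sigma):
    # Convert a noise schedule t -> sigma_t into per-step betas using
    # beta_t = 1 - (1 - sigma_t^2) / (1 - sigma_{t-1}^2), with sigma_0 = 0.
    sigma = np.concatenate([[0.0], np.asarray(sigma, dtype=float)])
    return 1.0 - (1.0 - sigma[1:]**2) / (1.0 - sigma[:-1]**2)
```

As a consistency check, starting from betas, forming ᾱ_t as the cumulative product of (1 − β_t), and taking σ_t = √(1 − ᾱ_t) round-trips back to the same betas.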
The probability path in diffusion models is defined through anItô processand one can retrieve the deterministic process by using the probability flow ODE formulation.[1] In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is also a deterministic flow along the same vector field, but going backwards. Both processes are solutions toODEs. If the vector field is well-behaved, the ODE will also be well-behaved. Given two distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, a flow-based model is a time-dependent velocity fieldvt(x){\displaystyle v_{t}(x)}in[0,1]×Rd{\displaystyle [0,1]\times \mathbb {R} ^{d}}, such that if we start by sampling a pointx∼π0{\displaystyle x\sim \pi _{0}}, and let it move according to the velocity field:ddtϕt(x)=vt(ϕt(x))t∈[0,1],starting fromϕ0(x)=x{\displaystyle {\frac {d}{dt}}\phi _{t}(x)=v_{t}(\phi _{t}(x))\quad t\in [0,1],\quad {\text{starting from }}\phi _{0}(x)=x}we end up with a pointx1∼π1{\displaystyle x_{1}\sim \pi _{1}}. The solutionϕt{\displaystyle \phi _{t}}of the above ODE defines a probability pathpt=[ϕt]#π0{\displaystyle p_{t}=[\phi _{t}]_{\#}\pi _{0}}by thepushforward measureoperator. In particular,[ϕ1]#π0=π1{\displaystyle [\phi _{1}]_{\#}\pi _{0}=\pi _{1}}. The probability path and the velocity field also satisfy thecontinuity equation, in the sense of probability distribution:∂tpt+∇⋅(vtpt)=0{\displaystyle \partial _{t}p_{t}+\nabla \cdot (v_{t}p_{t})=0}To construct a probability path, we start by constructing a conditional probability pathpt(x|z){\displaystyle p_{t}(x\vert z)}and the corresponding conditional velocity fieldvt(x|z){\displaystyle v_{t}(x\vert z)}on some conditional distributionq(z){\displaystyle q(z)}.
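Sampling from a flow-based model amounts to integrating the ODE above. A minimal sketch using Euler integration; the constant velocity field used here is an illustrative assumption (it transports N(0, I) to N(c, I) by pure translation).

```python
import numpy as np

def integrate_flow(x0, v, n_steps=100):
    # Euler integration of d/dt phi_t(x) = v_t(phi_t(x)) from t=0 to t=1.
    x = np.asarray(x0, dtype=float)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * v(k * dt, x)
    return x

# A constant velocity field v_t(x) = c transports pi_0 = N(0, I) to pi_1 = N(c, I).
c = np.array([2.0, -1.0])
```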
A natural choice is the Gaussian conditional probability path:pt(x|z)=N(mt(z),ζt2I){\displaystyle p_{t}(x\vert z)={\mathcal {N}}\left(m_{t}(z),\zeta _{t}^{2}I\right)}The conditional velocity field which corresponds to the geodesic path between conditional Gaussian path isvt(x|z)=ζt′ζt(x−mt(z))+mt′(z){\displaystyle v_{t}(x\vert z)={\frac {\zeta _{t}'}{\zeta _{t}}}(x-m_{t}(z))+m_{t}'(z)}The probability path and velocity field are then computed by marginalizing pt(x)=∫pt(x|z)q(z)dzandvt(x)=Eq(z)[vt(x|z)pt(x|z)pt(x)]{\displaystyle p_{t}(x)=\int p_{t}(x\vert z)q(z)dz\qquad {\text{ and }}\qquad v_{t}(x)=\mathbb {E} _{q(z)}\left[{\frac {v_{t}(x\vert z)p_{t}(x\vert z)}{p_{t}(x)}}\right]} The idea ofoptimal transport flow[45]is to construct a probability path minimizing theWasserstein metric. The distribution on which we condition is an approximation of the optimal transport plan betweenπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}:z=(x0,x1){\displaystyle z=(x_{0},x_{1})}andq(z)=Γ(π0,π1){\displaystyle q(z)=\Gamma (\pi _{0},\pi _{1})}, whereΓ{\displaystyle \Gamma }is the optimal transport plan, which can be approximated bymini-batch optimal transport.If the batch size is not large, then the transport it computes can be very far from the true optimal transport. The idea ofrectified flow[46][47]is to learn a flow model such that the velocity is nearly constant along each flow path. This is beneficial, because we can integrate along such a vector field with very few steps. For example, if an ODEϕt˙(x)=vt(ϕt(x)){\displaystyle {\dot {\phi _{t}}}(x)=v_{t}(\phi _{t}(x))}follows perfectly straight paths, it simplifies toϕt(x)=x0+t⋅v0(x0){\displaystyle \phi _{t}(x)=x_{0}+t\cdot v_{0}(x_{0})}, allowing for exact solutions in one step. In practice, we cannot reach such perfection, but when the flow field is nearly so, we can take a few large steps instead of many little steps. 
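The conditional velocity formula can be evaluated directly once the mean and scale schedules and their time derivatives are supplied. The helper below is a generic sketch; the concrete schedules m_t(z) = t·z and ζ_t = e^(−t) in the usage example are illustrative choices, not ones prescribed by the text.

```python
import numpy as np

def cond_velocity(x, m_t, dm_t, zeta_t, dzeta_t):
    # Conditional velocity for the Gaussian path N(m_t(z), zeta_t^2 I):
    # v_t(x|z) = (zeta_t'/zeta_t) * (x - m_t(z)) + m_t'(z)
    return (dzeta_t / zeta_t) * (x - m_t) + dm_t

# Illustrative schedules: m_t(z) = t*z (so m_t' = z) and zeta_t = exp(-t)
# (so zeta_t' = -exp(-t) and zeta_t'/zeta_t = -1).
z, t = np.array([2.0]), 0.5
v_example = cond_velocity(np.array([1.5]), t * z, z, np.exp(-t), -np.exp(-t))
```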
The general idea is to start with two distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, then construct a flow fieldϕ0={ϕt:t∈[0,1]}{\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}from it, then repeatedly apply a "reflow" operation to obtain successive flow fieldsϕ1,ϕ2,…{\displaystyle \phi ^{1},\phi ^{2},\dots }, each straighter than the previous one. When the flow field is straight enough for the application, we stop. Generally, for any time-differentiable processϕt{\displaystyle \phi _{t}},vt{\displaystyle v_{t}}can be estimated by solving:minθ∫01Ex∼pt[‖vt(x,θ)−vt(x)‖2]dt.{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{x\sim p_{t}}\left[\lVert {v_{t}(x,\theta )-v_{t}(x)}\rVert ^{2}\right]\,\mathrm {d} t.} In rectified flow, by injecting strong priors that intermediate trajectories are straight, it can achieve both theoretical relevance for optimal transport and computational efficiency, as ODEs with straight paths can be simulated precisely without time discretization. Specifically, rectified flow seeks to match an ODE with the marginal distributions of thelinear interpolationbetween points from distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}. Given observationsx0∼π0{\displaystyle x_{0}\sim \pi _{0}}andx1∼π1{\displaystyle x_{1}\sim \pi _{1}}, the canonical linear interpolationxt=tx1+(1−t)x0,t∈[0,1]{\displaystyle x_{t}=tx_{1}+(1-t)x_{0},t\in [0,1]}yields a trivial casex˙t=x1−x0{\displaystyle {\dot {x}}_{t}=x_{1}-x_{0}}, which cannot be causally simulated withoutx1{\displaystyle x_{1}}. 
To address this,xt{\displaystyle x_{t}}is "projected" into a space of causally simulatable ODEs, by minimizing the least squares loss with respect to the directionx1−x0{\displaystyle x_{1}-x_{0}}:minθ∫01Eπ0,π1,pt[‖(x1−x0)−vt(xt)‖2]dt.{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{\pi _{0},\pi _{1},p_{t}}\left[\lVert {(x_{1}-x_{0})-v_{t}(x_{t})}\rVert ^{2}\right]\,\mathrm {d} t.} The data pair(x0,x1){\displaystyle (x_{0},x_{1})}can be any coupling ofπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, typically independent (i.e.,(x0,x1)∼π0×π1{\displaystyle (x_{0},x_{1})\sim \pi _{0}\times \pi _{1}}) obtained by randomly combining observations fromπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}. This process ensures that the trajectories closely mirror the density map ofxt{\displaystyle x_{t}}trajectories butrerouteat intersections to ensure causality. This rectifying process is also known as Flow Matching,[48]Stochastic Interpolation,[49]and alpha-(de)blending.[50] A distinctive aspect of rectified flow is its capability for "reflow", which straightens the trajectory of ODE paths. Denote the rectified flowϕ0={ϕt:t∈[0,1]}{\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}induced from(x0,x1){\displaystyle (x_{0},x_{1})}asϕ0=Rectflow((x0,x1)){\displaystyle \phi ^{0}={\mathsf {Rectflow}}((x_{0},x_{1}))}. Recursively applying thisRectflow(⋅){\displaystyle {\mathsf {Rectflow}}(\cdot )}operator generates a series of rectified flowsϕk+1=Rectflow((ϕ0k(x0),ϕ1k(x1))){\displaystyle \phi ^{k+1}={\mathsf {Rectflow}}((\phi _{0}^{k}(x_{0}),\phi _{1}^{k}(x_{1})))}. This "reflow" process not only reduces transport costs but also straightens the paths of rectified flows, makingϕk{\displaystyle \phi ^{k}}paths straighter with increasingk{\displaystyle k}. 
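A toy one-dimensional illustration of this least-squares projection. For simplicity it uses a translation coupling x1 = x0 + 3, so the optimal velocity is the constant 3 and the interpolation paths are exactly straight (one Euler step suffices); the feature set and the closed-form least-squares fit are assumptions of the sketch, standing in for a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 4096)        # samples from pi_0
x1 = x0 + 3.0                          # a coupled pi_1: pure translation
t = rng.uniform(0.0, 1.0, 4096)
xt = t * x1 + (1 - t) * x0             # canonical linear interpolation
target = x1 - x0                       # flow-matching regression target

# Tiny linear velocity model v(x, t) = w . [x, t, 1], fit by least squares
# in place of gradient descent on a neural network.
feats = np.stack([xt, t, np.ones_like(t)], axis=1)
w, *_ = np.linalg.lstsq(feats, target, rcond=None)
loss = float(np.mean((feats @ w - target) ** 2))
```

Because the coupling is a pure translation, the fitted velocity is the constant 3 and the loss is essentially zero; with an independent coupling the learned marginal velocity would instead reroute trajectories at intersections, as described above.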
Rectified flow includes a nonlinear extension where linear interpolationxt{\displaystyle x_{t}}is replaced with any time-differentiable curve that connectsx0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, given byxt=αtx1+βtx0{\displaystyle x_{t}=\alpha _{t}x_{1}+\beta _{t}x_{0}}. This framework encompasses DDIM and probability flow ODEs as special cases, with particular choices ofαt{\displaystyle \alpha _{t}}andβt{\displaystyle \beta _{t}}. However, in the case where the path ofxt{\displaystyle x_{t}}is not straight, the reflow process no longer ensures a reduction in convex transport costs, and no longer straightens the paths ofϕt{\displaystyle \phi _{t}}.[46] See[51]for a tutorial on flow matching, with animations. For generating images by DDPM, we need a neural network that takes a timet{\displaystyle t}and a noisy imagext{\displaystyle x_{t}}, and predicts a noiseϵθ(xt,t){\displaystyle \epsilon _{\theta }(x_{t},t)}from it. Since predicting the noise is the same as predicting the denoised image, then subtracting it fromxt{\displaystyle x_{t}}, denoising architectures tend to work well. For example, theU-Net, which was found to be good for denoising images, is often used for denoising diffusion models that generate images.[52] For DDPM, the underlying architecture ("backbone") does not have to be a U-Net. It just has to predict the noise somehow. For example, the diffusion transformer (DiT) uses aTransformerto predict the mean and diagonal covariance of the noise, given the textual conditioning and the partially denoised image. It is the same as standard U-Net-based denoising diffusion model, with a Transformer replacing the U-Net.[53]Mixture of experts-Transformer can also be applied.[54] DDPM can be used to model general data distributions, not just natural-looking images. For example, Human Motion Diffusion[55]models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions.
It uses aTransformernetwork to generate a less noisy trajectory out of a noisy one. The base diffusion model can only generate unconditionally from the whole distribution. For example, a diffusion model learned onImageNetwould generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition, and then sample from the conditional distribution. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector. Stable Diffusion, for example, imposes conditioning in the form ofcross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet.[56] As a particularly simple example, considerimage inpainting. The conditions arex~{\displaystyle {\tilde {x}}}, the reference image, andm{\displaystyle m}, the inpaintingmask. The conditioning is imposed at each step of the backward diffusion process, by first samplingx~t∼N(α¯tx~,σt2I){\displaystyle {\tilde {x}}_{t}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}{\tilde {x}},\sigma _{t}^{2}I\right)}, a noisy version ofx~{\displaystyle {\tilde {x}}}, then replacingxt{\displaystyle x_{t}}with(1−m)⊙xt+m⊙x~t{\displaystyle (1-m)\odot x_{t}+m\odot {\tilde {x}}_{t}}, where⊙{\displaystyle \odot }meanselementwise multiplication.[57]Another application of cross-attention mechanism is prompt-to-prompt image editing.[58] Conditioning is not limited to just generating images from a specific category, or according to a specific caption (as in text-to-image). 
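The inpainting conditioning step described above can be sketched as follows. The helper name is illustrative; the mask is 1 where the reference pixels are kept.

```python
import numpy as np

def inpaint_condition(x_t, x_ref, mask, alpha_bar_t, rng):
    # Noise the reference image to level t, then paste it into the known
    # region: (1 - m) * x_t + m * x_ref_t, with elementwise multiplication.
    sigma_t = np.sqrt(1.0 - alpha_bar_t)
    x_ref_t = np.sqrt(alpha_bar_t) * x_ref + sigma_t * rng.standard_normal(x_ref.shape)
    return (1 - mask) * x_t + mask * x_ref_t
```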
For example,[55]demonstrated generating human motion, conditioned on an audio clip of human walking (allowing syncing motion to a soundtrack), or video of human running, or a text description of human motion, etc. For a methodological summary of how conditional diffusion models are mathematically formulated, see [59]. As generating an image takes a long time, one can try to generate a small image by a base diffusion model, then upscale it by other models. Upscaling can be done byGAN,[60]Transformer,[61]or signal processing methods likeLanczos resampling. Diffusion models themselves can be used to perform upscaling. A cascading diffusion model stacks multiple diffusion models one after another, in the style ofProgressive GAN. The lowest level is a standard diffusion model that generates a 32×32 image, which is then upscaled by a diffusion model specifically trained for upscaling, and the process repeats.[52] The training procedure for the diffusion upscaler is detailed in [52]. This section collects some notable diffusion models, and briefly describes their architecture. The DALL-E series by OpenAI are text-conditional diffusion models of images. The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that autoregressively generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how closely the image fits the text. GLIDE (2022-03)[62]is a 3.5-billion-parameter diffusion model, and a small version was released publicly.[5]Soon after, DALL-E 2 was released (2022-04).[63]DALL-E 2 is a 3.5-billion-parameter cascaded diffusion model that generates images from text by "inverting the CLIP image encoder", a technique they termed "unCLIP".
The unCLIP method contains 4 models: a CLIP image encoder, a CLIP text encoder, an image decoder, and a "prior" model (which can be a diffusion model, or an autoregressive model). During training, the prior model is trained to convert CLIP image encodings to CLIP text encodings. The image decoder is trained to convert CLIP image encodings back to images. During inference, a text is converted by the CLIP text encoder to a vector, then it is converted by the prior model to an image encoding, then it is converted by the image decoder to an image. Sora(2024-02) is a diffusion Transformer model (DiT). Stable Diffusion(2022-08), released by Stability AI, consists of a denoising latent diffusion model (860 million parameters), a VAE, and a text encoder. The denoising network is a U-Net, with cross-attention blocks to allow for conditional image generation.[64][25] Stable Diffusion 3 (2024-03)[65]changed the latent diffusion model from the UNet to a Transformer model, and so it is a DiT. It uses rectified flow. Stable Video 4D (2024-07)[66]is a latent diffusion model for videos of 3D objects. Imagen (2022)[67][68]uses aT5-XXL language modelto encode the input text into an embedding vector. It is a cascaded diffusion model with three sub-models. The first step denoises a white noise to a 64×64 image, conditional on the embedding vector of the text. This model has 2B parameters. The second step upscales the image by 64×64→256×256, conditional on embedding. This model has 650M parameters. The third step is similar, upscaling by 256×256→1024×1024. This model has 400M parameters. The three denoising networks are all U-Nets. Muse (2023-01)[69]is not a diffusion model, but an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. Imagen 2 (2023-12) is also diffusion-based. It can generate images based on a prompt that mixes images and text. No further information available.[70]Imagen 3 (2024-05) is too. 
No further information available.[71] Veo (2024) generates videos by latent diffusion. The diffusion is conditioned on a vector that encodes both a text prompt and an image prompt.[72] Make-A-Video (2022) is a text-to-video diffusion model.[73][74] CM3leon (2023) is not a diffusion model, but an autoregressive causally masked Transformer, with mostly the same architecture asLLaMa-2.[75][76] Transfusion (2024) is a Transformer that combines autoregressive text generation and denoising diffusion. Specifically, it generates text autoregressively (with causal masking), and generates images by denoising multiple times over image tokens (with all-to-all attention).[77] Movie Gen (2024) is a series of Diffusion Transformers operating on latent space and by flow matching.[78]
https://en.wikipedia.org/wiki/Diffusion_model
Generative artificial intelligence(Generative AI,GenAI,[1]orGAI) is a subfield ofartificial intelligencethat uses generative models to produce text, images, videos, or other forms of data.[2][3][4]These modelslearnthe underlying patterns and structures of theirtraining dataand use them to produce new data[5][6]based on the input, which often comes in the form of natural languageprompts.[7][8] Generative AI tools have become more common since an "AI boom" in the 2020s. This boom was made possible by improvements intransformer-baseddeepneural networks, particularlylarge language models(LLMs). Major tools includechatbotssuch asChatGPT,DeepSeek,Copilot,Gemini,Llama, andGrok;text-to-imageartificial intelligence image generationsystems such asStable Diffusion,Midjourney, andDALL-E; andtext-to-videoAI generators such asSora.[9][10][11][12]Technology companies developing generative AI includeOpenAI,Anthropic,Microsoft,Google,DeepSeek, andBaidu.[13][14][15] Generative AI has raised many ethical questions. It can be used forcybercrime, or to deceive or manipulate people throughfake newsordeepfakes.[16]Even if used ethically, it may lead tomass replacement of human jobs.[17]The tools themselves have been criticized as violating intellectual property laws, since they are trained on and emulate copyrighted works of art.[18] Generative AI is used across many industries. Examples include software development,[19]healthcare,[20]finance,[21]entertainment,[22]customer service,[23]sales and marketing,[24]art, writing,[25]fashion,[26]and product design.[27] The first example of algorithmically generated media is likely theMarkov chain. Markov chains have been used to model natural languages since their development by Russian mathematicianAndrey Markovin the early 20th century. Markov published his first paper on the topic in 1906,[28][29]and analyzed the pattern of vowels and consonants in the novelEugene Oneginusing Markov chains.
Once a Markov chain is learned on atext corpus, it can then be used as a probabilistic text generator.[30][31] Computers were needed to go beyond Markov chains. By the early 1970s,Harold Cohenwas creating and exhibiting generative AI works created byAARON, the computer program Cohen created to generate paintings.[32] The terms generative AI planning or generative planning were used in the 1980s and 1990s to refer toAI planningsystems, especiallycomputer-aided process planning, used to generate sequences of actions to reach a specified goal.[33][34]Generative AI planning systems usedsymbolic AImethods such asstate space searchandconstraint satisfactionand were a "relatively mature" technology by the early 1990s. They were used to generate crisis action plans for military use,[35]process plans for manufacturing[33]and decision plans such as in prototype autonomous spacecraft.[36] Since its inception, the field ofmachine learninghas used bothdiscriminative modelsandgenerative modelsto model and predict data. Beginning in the late 2000s, the emergence ofdeep learningdrove progress, and research inimage classification,speech recognition,natural language processingand other tasks.Neural networksin this era were typically trained asdiscriminativemodels due to the difficulty of generative modeling.[37] In 2014, advancements such as thevariational autoencoderandgenerative adversarial networkproduced the first practical deep neural networks capable of learning generative models, as opposed to discriminative ones, for complex data such as images. These deep generative models were the first to output not only class labels for images but also entire images. 
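Such a probabilistic text generator can be sketched in a few lines: learn word-bigram transitions from a corpus, then repeatedly sample the next word. This is a minimal modern illustration of the idea, not a historical reconstruction.

```python
import random
from collections import defaultdict

def train_markov(text):
    # Learn word-bigram transition counts from a corpus.
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n_words, seed=0):
    # Walk the chain, sampling each next word in proportion to its count.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: the last word never had a successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```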
In 2017, theTransformernetwork enabled advancements in generative models compared to olderLong-Short Term Memorymodels,[38]leading to the firstgenerative pre-trained transformer(GPT), known asGPT-1, in 2018.[39]This was followed in 2019 byGPT-2, which demonstrated the ability to generalize unsupervised to many different tasks as aFoundation model.[40] The new generative models introduced during this period allowed for large neural networks to be trained usingunsupervised learningorsemi-supervised learning, rather than thesupervised learningtypical of discriminative models. Unsupervised learning removed the need for humans tomanually label data, allowing for larger networks to be trained.[41] In March 2020, the release of15.ai, a freeweb applicationcreated by an anonymousMITresearcher that could generate convincing character voices using minimal training data, marked one of the earliest popular use cases of generative AI.[42]The platform is credited as the first mainstream service to popularize AI voice cloning (audio deepfakes) inmemesandcontent creation, influencing subsequent developments invoice AI technology.[43][44] In 2021, the emergence ofDALL-E, atransformer-based pixel generative model, marked an advance in AI-generated imagery.[45]This was followed by the releases ofMidjourneyandStable Diffusionin 2022, which further democratized access to high-qualityartificial intelligence artcreation fromnatural language prompts.[46]These systems demonstrated unprecedented capabilities in generating photorealistic images, artwork, and designs based on text descriptions, leading to widespread adoption among artists, designers, and the general public. 
In late 2022, the public release ofChatGPTrevolutionized the accessibility andapplication of generative AIfor general-purpose text-based tasks.[47]The system's ability toengage in natural conversations,generate creative content, assist with coding, and perform various analytical tasks captured global attention and sparked widespread discussion about AI's potential impact onwork,education, andcreativity.[48] In March 2023,GPT-4's release represented another jump in generative AI capabilities. A team fromMicrosoft Researchcontroversially argued that it "could reasonably be viewed as an early (yet still incomplete) version of anartificial general intelligence(AGI) system."[49]However, this assessment was contested by other scholars who maintained that generative AI remained "still far from reaching the benchmark of 'general human intelligence'" as of 2023.[50]Later in 2023,MetareleasedImageBind, an AI model combining multiplemodalitiesincluding text, images, video, thermal data, 3D data, audio, and motion, paving the way for more immersive generative AI applications.[51] In December 2023,GoogleunveiledGemini, a multimodal AI model available in four versions: Ultra, Pro, Flash, and Nano.[52]The company integrated Gemini Pro into itsBard chatbotand announced plans for "Bard Advanced" powered by the larger Gemini Ultra model.[53]In February 2024, Google unified Bard and Duet AI under the Gemini brand, launching a mobile app onAndroidand integrating the service into the Google app oniOS.[54] In March 2024,Anthropicreleased theClaude3 family of large language models, including Claude 3 Haiku, Sonnet, and Opus.[55]The models demonstrated significant improvements in capabilities across various benchmarks, with Claude 3 Opus notably outperforming leading models from OpenAI and Google.[56]In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated improved performance compared to the larger Claude 3 Opus, particularly in areas such as coding, multistep workflows, 
and image analysis.[57] According to a survey by SAS and Coleman Parkes Research, China has emerged as a global leader in generative AI adoption, with 83% of Chinese respondents using the technology, exceeding both the global average of 54% and the U.S. rate of 65%. This leadership is further evidenced by China's intellectual property developments in the field, with a UN report revealing that Chinese entities filed over 38,000 generative AI patents from 2014 to 2023, substantially surpassing the United States in patent applications.[58] A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a dataset, typically invoking neural network architectures such as generative adversarial networks (GANs), variational autoencoders (VAEs), or transformers. The capabilities of a generative AI system depend on the modality of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input.[59] For example, one version of OpenAI's GPT-4 accepts both text and image inputs.[60] Generative AI has appeared in a wide variety of industries, changing the dynamics of content creation, analysis, and delivery. In healthcare,[61] generative AI is instrumental in accelerating drug discovery by creating molecular structures with target characteristics[62] and generating radiology images for training diagnostic models. This enables faster and cheaper development and can enhance medical decision-making. In finance, generative AI generates datasets to train models and automates report generation with natural language summarization capabilities. It automates content creation, produces synthetic financial data, and tailors customer communications. It also powers chatbots and virtual agents.
Collectively, these technologies enhance efficiency, reduce operational costs, and support data-driven decision-making in financial institutions.[63] The media industry uses generative AI for numerous creative activities such as music composition, scriptwriting, video editing, and digital art. The educational sector is affected as well, since these tools can personalize learning by creating quizzes and study aids and assisting with essay composition. Both teachers and learners benefit from AI-based platforms that suit various learning patterns.[64] Generative AI systems trained on words or word tokens include GPT-3, GPT-4, GPT-4o, LaMDA, LLaMA, BLOOM, Gemini and others (see List of large language models). They are capable of natural language processing, machine translation, and natural language generation and can be used as foundation models for other tasks.[66] Data sets include BookCorpus, Wikipedia, and others (see List of text corpora). In addition to natural language text, large language models can be trained on programming language text, allowing them to generate source code for new computer programs.[67] Examples include OpenAI Codex, Tabnine, GitHub Copilot, Microsoft Copilot, and the VS Code fork Cursor.[68] Some AI assistants help candidates cheat during online coding interviews by providing code, improvements, and explanations. Their clandestine interfaces minimize the need for eye movements that would expose cheating to the interviewer.[69] Producing high-quality visual art is a prominent application of generative AI.[70] Generative AI systems trained on sets of images with text captions include Imagen, DALL-E, Midjourney, Adobe Firefly, FLUX.1, Stable Diffusion and others (see Artificial intelligence art, Generative art, and Synthetic media). They are commonly used for text-to-image generation and neural style transfer.[71] Datasets include LAION-5B and others (see List of datasets in computer vision and image processing).
Generative AI can also be trained extensively on audio clips to produce natural-sounding speech synthesis and text-to-speech capabilities. An early pioneer in this field was 15.ai, launched in March 2020, which demonstrated the ability to clone character voices using as little as 15 seconds of training data.[72] The website gained widespread attention for its ability to generate emotionally expressive speech for various fictional characters, though it was later taken offline in 2022 due to copyright concerns.[73][74][75] Commercial alternatives subsequently emerged, including ElevenLabs' context-aware synthesis tools and Meta Platforms' Voicebox.[76] Generative AI systems such as MusicLM[77] and MusicGen[78] can also be trained on the audio waveforms of recorded music along with text annotations, in order to generate new musical samples based on text descriptions such as a calming violin melody backed by a distorted guitar riff. Audio deepfakes of music lyrics have been generated, like the song Savages, which used AI to mimic rapper Jay-Z's vocals. Music artists' instrumentals and lyrics are copyrighted, but their voices are not yet protected from generative AI, raising a debate about whether artists should receive royalties from audio deepfakes.[79] Many AI music generators have been created that generate music from a text phrase, genre options, and looped libraries of bars and riffs.[80] Generative AI trained on annotated video can generate temporally-coherent, detailed and photorealistic video clips. Examples include Sora by OpenAI,[12] Runway,[81] and Make-A-Video by Meta Platforms.[82] Generative AI can also be trained on the motions of a robotic system to generate new trajectories for motion planning or navigation.
For example, UniPi from Google Research uses prompts like"pick up blue bowl"or"wipe plate with yellow sponge"to control movements of a robot arm.[83]Multimodal "vision-language-action" models such as Google's RT-2 can perform rudimentary reasoning in response to user prompts and visual input, such as picking up a toydinosaurwhen given the promptpick up the extinct animalat a table filled with toy animals and other objects.[84] Artificially intelligentcomputer-aided design(CAD) can use text-to-3D, image-to-3D, and video-to-3D toautomate3D modeling.[85]AI-basedCAD librariescould also be developed usinglinkedopen dataofschematicsanddiagrams.[86]AI CADassistantsare used as tools to help streamline workflow.[87] Generative AI models are used to powerchatbotproducts such asChatGPT,programming toolssuch asGitHub Copilot,[88]text-to-imageproducts such as Midjourney, and text-to-video products such asRunwayGen-2.[89]Generative AI features have been integrated into a variety of existing commercially available products such asMicrosoft Office(Microsoft Copilot),[90]Google Photos,[91]and theAdobe Suite(Adobe Firefly).[92]Many generative AI models are also available asopen-source software, including Stable Diffusion and the LLaMA[93]language model. Smaller generative AI models with up to a few billion parameters can run onsmartphones, embedded devices, andpersonal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on aRaspberry Pi 4[94]and one version of Stable Diffusion can run on aniPhone 11.[95] Larger models with tens of billions of parameters can run onlaptopordesktop computers. To achieve an acceptable speed, models of this size may requireacceleratorssuch as theGPUchips produced byNVIDIAandAMDor the Neural Engine included inApple siliconproducts. 
For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC.[96] The advantages of running generative AI locally include protection ofprivacyandintellectual property, and avoidance ofrate limitingandcensorship. Thesubredditr/LocalLLaMA in particular focuses on usingconsumer-grade gaminggraphics cards[97]through such techniques ascompression. That forum is one of only two sourcesAndrej Karpathytrusts forlanguage model benchmarks.[98]Yann LeCunhas advocated open-source models for their value tovertical applications[99]and for improvingAI safety.[100] Language models with hundreds of billions of parameters, such as GPT-4 orPaLM, typically run ondatacentercomputers equipped with arrays ofGPUs(such as NVIDIA'sH100) orAI acceleratorchips (such as Google'sTPU). These very large models are typically accessed ascloudservices over the Internet. In 2022, theUnited States New Export Controls on Advanced Computing and Semiconductors to Chinaimposed restrictions on exports to China ofGPUand AI accelerator chips used for generative AI.[101]Chips such as the NVIDIA A800[102]and theBiren TechnologyBR104[103]were developed to meet the requirements of the sanctions. There is free software on the market capable of recognizing text generated by generative artificial intelligence (such asGPTZero), as well as images, audio or video coming from it.[104]Potential mitigation strategies fordetecting generative AI contentincludedigital watermarking,content authentication,information retrieval, andmachine learning classifier models.[105]Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.[106][107] Generative adversarial networks(GANs) are an influential generative modeling technique. GANs consist of two neural networks—the generator and the discriminator—trained simultaneously in a competitive setting. 
The generator createssynthetic databy transforming random noise into samples that resemble the training dataset. The discriminator is trained to distinguish the authentic data from synthetic data produced by the generator.[108]The two models engage in aminimaxgame: the generator aims to create increasingly realistic data to "fool" the discriminator, while the discriminator improves its ability to distinguish real from fake data. This continuous training setup enables the generator to produce high-quality and realistic outputs.[109] Variational autoencoders(VAEs) are deep learning models that probabilistically encode data. They are typically used for tasks such asnoise reductionfrom images,data compression, identifying unusual patterns, andfacial recognition. Unlikestandard autoencoders, which compress input data into a fixed latent representation, VAEs model thelatent spaceas a probability distribution,[110]allowing for smooth sampling and interpolation between data points. The encoder ("recognition model") maps input data to a latent space, producing means and variances that define a probability distribution. The decoder ("generative model") samples from this latent distribution and attempts to reconstruct the original input. VAEs optimize a loss function that includes both the reconstruction error and aKullback–Leibler divergenceterm, which ensures the latent space follows a known prior distribution. VAEs are particularly suitable for tasks that require structured but smooth latent spaces, although they may create blurrier images than GANs. They are used for applications like image generation, data interpolation andanomaly detection. Transformers became the foundation for many powerful generative models, most notably thegenerative pre-trained transformer(GPT) series developed by OpenAI. 
They marked a major shift in natural language processing by replacing traditionalrecurrentandconvolutionalmodels.[111]This architecture allows models to process entire sequences simultaneously and capture long-range dependencies more efficiently. Theself-attention mechanismenables the model to capture the significance of every word in a sequence when predicting the subsequent word, thus improving its contextual understanding. Unlike recurrent neural networks, transformers process all the tokens in parallel, which improves the training efficiency and scalability. Transformers are typically pre-trained on enormous corpora in aself-supervisedmanner, prior to beingfine-tuned. In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with theBiden administrationin July 2023 to watermark AI-generated content.[112]In October 2023,Executive Order 14110applied theDefense Production Actto require all US companies to report information to the federal government when training certain high-impact AI models.[113][114] In the European Union, the proposedArtificial Intelligence Actincludes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such.[115][116] In China, theInterim Measures for the Management of Generative AI Servicesintroduced by theCyberspace Administration of Chinaregulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values".[117][118] Generative AI systems such asChatGPTandMidjourneyare trained on large, publicly available datasets that include copyrighted works. 
AI developers have argued that such training is protected underfair use, while copyright holders have argued that it infringes their rights.[119] Proponents of fair use training have argued that it is atransformative useand does not involve making copies of copyrighted works available to the public.[119]Critics have argued that image generators such asMidjourneycan create nearly-identical copies of some copyrighted images,[120]and that generative AI programs compete with the content they are trained on.[121] As of 2024, several lawsuits related to the use of copyrighted material in training are ongoing.Getty Imageshas suedStability AIover the use of its images to trainStable Diffusion.[122]Both theAuthors GuildandThe New York Timeshave suedMicrosoftandOpenAIover the use of their works to trainChatGPT.[123][124] A separate question is whether AI-generated works can qualify for copyright protection. TheUnited States Copyright Officehas ruled that works created by artificial intelligence without any human input cannot be copyrighted, because they lack human authorship.[125]Some legal professionals have suggested thatNaruto v. Slater(2018), in which theU.S. 9th Circuit Court of Appealsheld thatnon-humanscannot be copyright holders ofartistic works, could be a potential precedent in copyright litigation over works created by generative AI.[126]However, the office has also begun taking public input to determine if these rules need to be refined for generative AI.[127] In January 2025, theUnited States Copyright Office(USCO) released extensive guidance regarding the use of AI tools in the creative process, and established that "...generative AI systems also offer tools that similarly allow users to exert control. [These] can enable the user to control the selection and placement of individual creative elements. Whether such modifications rise to the minimum standard of originality required underFeistwill depend on a case-by-case determination. 
In those cases where they do, the output should be copyrightable"[128]Subsequently, the USCO registered the first visual artwork to be composed of entirely AI-generated materials, titled "A Single Piece of American Cheese".[129] The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls topause AI experiments, and actions by multiple governments. In a July 2023 briefing of theUnited Nations Security Council,Secretary-GeneralAntónio Guterresstated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale".[130]In addition, generative AI has a significantcarbon footprint.[131][132] From the early days of the development of AI, there have been arguments put forward byELIZAcreatorJoseph Weizenbaumand others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements.[134]In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost.[135][136]In July 2023, developments in generative AI contributed to the2023 Hollywood labor disputes.Fran Drescher, president of theScreen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the2023 SAG-AFTRA strike.[137]Voice generation AI has been seen as a potential challenge to thevoice actingsector.[138][139] The intersection of AI and employment concerns among underrepresented groups globally remains a critical facet. 
While AI promises efficiency enhancements and skill acquisition, concerns about job displacement and biased recruiting processes persist among these groups, as outlined in surveys by Fast Company. Proposed steps toward more equitable outcomes include mitigating biases, advocating transparency, respecting privacy and consent, and embracing diverse teams and ethical considerations; suggested strategies emphasize regulation, inclusive design, and the use of education for personalized teaching, to maximize benefits while minimizing harms.[140] Generative AI models can reflect and amplify any cultural bias present in the underlying data. For example, a language model might assume that doctors and judges are male, and that secretaries or nurses are female, if those biases are common in the training data.[141] Similarly, an image model prompted with the text "a photo of a CEO" might disproportionately generate images of white male CEOs,[142] if trained on a racially biased data set.
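This data-driven mechanism can be illustrated with a deliberately simplified sketch (the tiny corpus and helper function below are invented for illustration and do not come from any real system): a model that merely mirrors co-occurrence counts in its training data reproduces whatever imbalance that data contains.

```python
from collections import Counter

# Invented toy corpus of (profession, pronoun) pairs with a built-in skew,
# standing in for the statistical patterns a large model absorbs at scale.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def most_likely_pronoun(profession, pairs):
    """Predict the pronoun most often seen with a profession in the corpus."""
    counts = Counter(pron for prof, pron in pairs if prof == profession)
    return counts.most_common(1)[0][0]

# The "model" faithfully reproduces the skew of its training data.
print(most_likely_pronoun("doctor", corpus))  # he
print(most_likely_pronoun("nurse", corpus))   # she
```

Mitigations such as reweighting the training data amount, in this picture, to changing the counts before the model learns from them.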
A number of methods for mitigating bias have been attempted, such as altering input prompts[143]and reweighting training data.[144] Deepfakes (aportmanteauof "deep learning" and "fake"[145]) are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness usingartificial neural networks.[146]Deepfakes have garnered widespread attention and concerns for their uses indeepfake celebrity pornographic videos,revenge porn,fake news,hoaxes, healthdisinformation,financial fraud, and covertforeign election interference.[147][148][149][150][151][152][153]This has elicited responses from both industry and government to detect and limit their use.[154][155] In July 2023, the fact-checking companyLogicallyfound that the popular generative AI modelsMidjourney,DALL-E 2andStable Diffusionwould produce plausible disinformation images when prompted to do so, such as images ofelectoral fraudin the United States and Muslim women supporting India'sHindu nationalistBharatiya Janata Party.[156][157] In April 2024, a paper proposed to useblockchain(distributed ledgertechnology) to promote "transparency, verifiability, and decentralization in AI development and usage".[158] Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI.[159][160][161][162][163][164]In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards andidentity verification.[165] Concerns and fandoms have spawned fromAI-generated music. 
The same software used to clone voices has been used on famous musicians' voices to create songs that mimic their voices, gaining both tremendous popularity and criticism.[166][167][168] Similar techniques have also been used to create improved-quality or full-length versions of songs that have been leaked or have yet to be released.[169] Generative AI has also been used to create new digital artist personalities, with some of these receiving enough attention to receive record deals at major labels.[170] The developers of these virtual artists have also faced criticism for their personified programs, including backlash for "dehumanizing" an art form, and for creating artists that make unrealistic or immoral appeals to their audiences.[171] Many websites that allow explicit AI-generated images or videos have been created,[172] and these have been used to create illegal content, such as rape, child sexual abuse material,[173][174] necrophilia, and zoophilia. Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including phishing scams.[175] Deepfake video and audio have been used to create disinformation and fraud.
In 2020, former Googleclick fraudczarShuman Ghosemajumderargued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information.[176]Additionally,large language modelsand other forms of text-generation AI have been used to create fake reviews ofe-commercewebsites to boost ratings.[177]Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT.[178] A 2023 study showed that generative AI can be vulnerable to jailbreaks,reverse psychologyandprompt injectionattacks, enabling attackers to obtain help with harmful requests, such as for craftingsocial engineeringandphishing attacks.[179]Additionally, other researchers have demonstrated that open-source models can befine-tunedto remove their safety restrictions at low cost.[180] Trainingfrontier AI modelsrequires an enormous amount of computing power. Usually onlyBig Techcompanies have the financial resources to make such investments. 
Smaller start-ups such asCohereandOpenAIend up buying access todata centersfromGoogleandMicrosoftrespectively.[181] AI has a significant carbon footprint due to growing energy consumption from both training and usage.[131][132]Scientists and journalists have expressed concerns about the environmental impact that the development and deployment of generative models are having: high CO2emissions,[182][183][184]large amounts of freshwater used for data centers,[185][186]and high amounts of electricity usage.[183][187][188]There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing,[187]aschatbotsand other applications become more popular,[186][187]and as models need to be retrained.[187] The carbon footprint of generative AI globally is estimated to be growing steadily, with potential annual emissions ranging from 18.21 to 245.94 million tons of CO2 by 2035,[189]with the highest estimates for 2035 nearing the impact of the United Statesbeef industryon emissions (currently estimated to emit 257.5 million tons annually as of 2024).[190] Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection,[182]increasing efficiency of data centers to reduce electricity/energy usage,[184][187][188]building more efficientmachine learning models,[183][185][186]minimizing the number of times that models need to be retrained,[184]developing a government-directed framework for auditing the environmental impact of these models,[184][185]regulating for transparency of these models,[184]regulating their energy and water usage,[185]encouraging researchers to publish data on their models' carbon footprint,[184][187]and increasing the number of subject matter experts who understand both machine learning and climate science.[184] The New York Timesdefinesslopas analogous tospam: "shoddy or unwanted A.I. 
content in social media, art, books and ... in search results."[191] Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation,[192] the monetary incentives from social media companies to spread such content,[192][193] false political messaging,[193] spamming of scientific research paper submissions,[194] increased time and effort to find higher quality or desired content on the Internet,[195] the indexing of generated content by search engines,[196] and on journalism itself.[197] A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences from a sample of over 6 billion sentences from Common Crawl, a snapshot of web pages, were machine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated across at least three languages. Many lower-resource languages (e.g., Wolof, Xhosa) were translated across more languages than higher-resource languages (e.g., English, French).[198][199] In September 2024, Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data from Reddit and Twitter, excessive focus on generative AI compared to other methods in the natural language processing community, and that "generative AI has polluted the data".[200] The adoption of generative AI tools led to an explosion of AI-generated content across multiple domains.
A study from University College London estimated that in 2023, more than 60,000 scholarly articles (over 1% of all publications) were likely written with LLM assistance.[201] According to Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs.[202] Many academic disciplines have concerns about the factual reliability of academic content generated by AI.[203] Visual content follows a similar trend. Since the launch of DALL-E 2 in 2022, it is estimated that an average of 34 million images have been created daily. As of August 2023, more than 15 billion images had been generated using text-to-image algorithms, with 80% of these created by models based on Stable Diffusion.[204] If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur.[205] Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in a "model collapse" after multiple iterations.[206] Tests have been conducted with pattern recognition of handwritten letters and with pictures of human faces.[207] As a consequence, data collected from genuine human interactions with systems may become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet. On the other side, synthetic data is often used as an alternative to data produced by real-world events.
Such data can be deployed to validate mathematical models and to train machine learning models while preserving user privacy,[208]including for structured data.[209]The approach is not limited to text generation; image generation has been employed to train computer vision models.[210] In January 2023,Futurism.combroke the story thatCNEThad been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories.[211] In April 2023, the German tabloidDie Aktuellepublished a fake AI-generated interview with former racing driverMichael Schumacher, who had not made any public appearances since 2013 after sustaining a brain injury in a skiing accident. The story included two possible disclosures: the cover included the line "deceptively real", and the interview included an acknowledgment at the end that it was AI-generated. The editor-in-chief was fired shortly thereafter amid the controversy.[212] Other outlets that have published articles whose content or byline have been confirmed or suspected to be created by generative AI models – often with false content, errors, or non-disclosure of generative AI use – include: In May 2024, Futurism noted that a content management system video by AdVon Commerce, who had used generative AI to produce articles for many of the aforementioned outlets, appeared to show that they "had produced tens of thousands of articles for more than 150 publishers."[221] News broadcasters in Kuwait, Greece, South Korea, India, China and Taiwan have presented news with anchors based on Generative AI models, prompting concerns about job losses for human anchors and audience trust in news that has historically been influenced byparasocial relationshipswith broadcasters, content creators or social media influencers.[242][243][244]Algorithmically generated anchors have also been used by allies ofISISfor their broadcasts.[245] In 2023, Google reportedly pitched a tool to news 
outlets that claimed to "produce news stories" based on input data provided, such as "details of current events". Some news company executives who viewed the pitch described it as "[taking] for granted the effort that went into producing accurate and artful news stories."[246] In February 2024, Google launched a program to pay small publishers to write three articles per day using a beta generative AI model. The program does not require the knowledge or consent of the websites that the publishers are using as sources, nor does it require the published articles to be labeled as being created or assisted by these models.[247] Many defunct news sites (The Hairpin,The Frisky,Apple Daily,Ashland Daily Tidings,Clayton County Register,Southwest Journal) and blogs (The Unofficial Apple Weblog,iLounge) have undergonecybersquatting, with articles created by generative AI.[248][249][250][251][252][253][254][255] United States SenatorsRichard BlumenthalandAmy Klobucharhave expressed concern that generative AI could have a harmful impact on local news.[256]In July 2023, OpenAI partnered with the American Journalism Project to fund local news outlets for experimenting with generative AI, with Axios noting the possibility of generative AI companies creating a dependency for these news outlets.[257] Meta AI, a chatbot based onLlama 3which summarizes news stories, was noted byThe Washington Postto copy sentences from those stories without direct attribution and to potentially further decrease the traffic of online news outlets.[258] In response to potential pitfalls around the use and misuse of generative AI in journalism and worries about declining audience trust, outlets around the world, including publications such asWired,Associated Press,The Quint,RapplerorThe Guardianhave published guidelines around how they plan to use and not use AI and generative AI in their work.[259][260][261][262] In June 2024,Reuters Institutepublished theirDigital News Report for 2024. 
In a survey of people in America and Europe, Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". The results of global surveys reported that people were more uncomfortable with news topics including politics (46%), crime (43%), and local news (37%) produced by AI than other news topics.[263]
https://en.wikipedia.org/wiki/Generative_artificial_intelligence
Synthetic media (also known as AI-generated media,[1][2] media produced by generative AI,[3] personalized media, personalized content,[4] and colloquially as deepfakes[5]) is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, for example to automatically produce content or cultural works (e.g. text, image, sound or video) within a set of human-prompted parameters.[6][7][8][9] Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more.[8] Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology, often with "deepfakes" used as a shorthand (e.g. "deepfakes for text" for natural-language generation; "deepfakes for voices" for neural voice cloning).[10][11] Significant attention arose towards the field of synthetic media starting in 2017 when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses.[12][13] Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government,[12] the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds.[14] Synthetic media is an applied form of artificial imagination.[12] The idea of automated art dates back to the automata of ancient Greek civilization. Nearly 2000 years ago, the engineer Hero of Alexandria described statues that could move and mechanical theatrical devices.[15] Over the centuries, mechanical artworks drew crowds throughout Europe,[16] China,[17] India,[18] and so on.
Other automated novelties such as Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) of 1757 also amused audiences.[19] Despite the technical capabilities of these machines, however, none were capable of generating original content; they were entirely dependent upon their mechanical designs. The field of AI research was born at a workshop at Dartmouth College in 1956,[20] begetting the rise of digital computing used as a medium of art as well as the rise of generative art. Initial experiments in AI-generated art included the Illiac Suite, a 1957 composition for string quartet which is generally agreed to be the first score composed by an electronic computer.[21] Lejaren Hiller, in collaboration with Leonard Isaacson, programmed the ILLIAC I computer at the University of Illinois at Urbana–Champaign (where both composers were professors) to generate compositional material for his String Quartet No. 4. In 1960, the Russian researcher R.Kh. Zaripov published the world's first paper on algorithmic music composition, using the "Ural-1" computer.[22] In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer that was capable of pattern recognition in various compositions. The computer was then able to analyze and use these patterns to create novel melodies. The computer made its debut on Steve Allen's I've Got a Secret program, and stumped the hosts until film star Harry Morgan guessed Ray's secret.[23] As early as 1989, artificial neural networks were used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters.
The network was able to randomly generate new music in a highly uncontrolled manner.[24][25] In 2014, Ian Goodfellow and his colleagues developed a new class of machine learning systems: generative adversarial networks (GAN).[26] Two neural networks contest with each other in a game (in the sense of game theory, often but not always in the form of a zero-sum game). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning,[27] fully supervised learning,[28] and reinforcement learning.[29] In a 2016 seminar, Yann LeCun described GANs as "the coolest idea in machine learning in the last twenty years".[30] In 2017, Google unveiled transformers,[31] a new type of neural network architecture specialized for language modeling that enabled rapid advancements in natural language processing.
Transformers proved capable of high levels of generalization, allowing networks such as GPT-3 and Jukebox from OpenAI to synthesize text and music respectively at a level approaching humanlike ability.[32][33] There have been some attempts to use GPT-3 and GPT-2 for screenplay writing, resulting in both dramatic (the Italian short film Frammenti di Anime Meccaniche,[34] written by GPT-2) and comedic narratives (the short film Solicitors by YouTube creator Calamity AI, written by GPT-3).[35] Deepfakes (a portmanteau of "deep learning" and "fake"[36]) are the most prominent form of synthetic media.[37][38] Deepfakes are media productions that take an existing image or video and replace the subject with someone else's likeness using artificial neural networks.[39] They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks (GANs).[40] Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud.[41][42][43][44] This has elicited responses from both industry and government to detect and limit their use.[45][46] The term deepfakes originated around the end of 2017 from a Reddit user named "deepfakes".[39] He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actresses in pornographic videos,[39] while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies.[47] In December 2017, Samantha Cole published an article about r/deepfakes in Vice that drew the first mainstream attention to deepfakes being shared in online communities.[48] Six weeks later, Cole wrote in a follow-up article about the large increase in AI-assisted fake pornography.[39] In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography.[49] Other websites have also banned the use of
deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub.[50] However, some websites have not yet banned deepfake content, including 4chan and 8chan.[51] Non-pornographic deepfake content continues to grow in popularity with videos from YouTube creators such as Ctrl Shift Face and Shamook.[52][53] A mobile application, Impressions, was launched for iOS in March 2020. The app provides a platform for users to deepfake celebrity faces into videos in a matter of minutes.[54] Image synthesis is the artificial production of visual media, especially through algorithmic means. In the emerging world of synthetic media, the work of digital-image creation, once the domain of highly skilled programmers and Hollywood special-effects artists, could be automated by expert systems capable of producing realism on a vast scale.[55] One subfield of this includes human image synthesis, which is the use of neural networks to make believable and even photorealistic renditions[56][57] of human likenesses, moving or still. It has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence was applied to synthesize images and video that look like humans, without need for human assistance once the training phase has been completed, whereas the old-school 7D-route required massive amounts of human work.
The websiteThis Person Does Not Existshowcases fully automated human image synthesis by endlessly generating images that look like facial portraits of human faces.[58] Beyond deepfakes and image synthesis, audio is another area where AI is used to create synthetic media.[59]Synthesized audio will be capable of generating any conceivable sound that can be achieved through audio waveform manipulation, which might conceivably be used to generate stock audio of sound effects or simulate audio of currently imaginary things.[60] Artificial intelligence artusually meansvisual artworkgenerated (or enhanced) through the use ofartificial intelligence(AI) programs. Artists began to create AI art in the mid to late 20th century, when the discipline was founded. Throughoutits history, AI has raised manyphilosophical concernsrelated to thehuman mind,artificial beings, and also what can be consideredartin human–AI collaboration. Since the 20th century, people have used AI to create art, some of which has been exhibited in museums and won awards.[61] There are many tools available to the artist when working with diffusion models. They can define both positive and negative prompts, but they are also afforded a choice in using (or omitting the use of)VAEs,LoRAs, hypernetworks, IP-adapter, and embedding/textual inversions. Artists can tweak settings like guidance scale (which balances creativity and accuracy), seed (to control randomness), and upscalers (to enhance image resolution), among others. Additional influence can be exerted during pre-inference by means of noise manipulation, while traditional post-processing techniques are frequently used post-inference. People can also train their own models. In addition, procedural "rule-based" generation of images using mathematical patterns, algorithms that simulate brush strokes and other painted effects, and deep learning algorithms such as generative adversarial networks (GANs) and transformers have been developed. 
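The procedural "rule-based" generation of images from mathematical patterns mentioned above predates neural networks entirely and needs no training data. A minimal sketch in pure Python; the `interference` rule here is made up for illustration and does not correspond to any particular system:

```python
import math

def procedural_image(width, height, rule):
    """Render a grayscale image (integer values 0-255) by evaluating a
    mathematical rule at every pixel coordinate -- no model, no data."""
    return [[rule(x, y) for x in range(width)] for y in range(height)]

def interference(x, y):
    """Two overlapping sine waves, a classic procedural pattern."""
    v = math.sin(x * 0.3) + math.sin(math.hypot(x, y) * 0.2)
    return int((v + 2) / 4 * 255)  # map the range [-2, 2] onto [0, 255]

img = procedural_image(64, 64, interference)
```

Deep generative models replace the hand-written `rule` with a learned function, but the output contract (a grid of pixel values) is the same.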
Several companies have released apps and websites that allow one to forego all the options mentioned entirely while solely focusing on the positive prompt. There also exist programs which transform photos into art-like images in the style of well-known sets of paintings.[64][65] The capacity to generate music through autonomous, non-programmable means has long been sought after since the days of Antiquity, and with developments in artificial intelligence, two particular domains have arisen: Speech synthesis has been identified as a popular branch of synthetic media[72]and is defined as the artificial production of humanspeech. A computer system used for this purpose is called aspeech computerorspeech synthesizer, and can be implemented insoftwareorhardwareproducts. Atext-to-speech(TTS) system converts normal language text into speech; other systems rendersymbolic linguistic representationslikephonetic transcriptionsinto speech.[73] Synthesized speech can be created by concatenating pieces of recorded speech that are stored in adatabase. Systems differ in the size of the stored speech units; a system that storesphonesordiphonesprovides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. 
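The concatenative approach described above can be sketched in a few lines. This toy stores short sine tones in place of recorded phones or diphones, so the unit names and the "database" are purely illustrative, not a real TTS system:

```python
import math

SAMPLE_RATE = 8000  # samples per second

def tone(freq, dur):
    """Stand-in for a stored speech unit: a short sine tone."""
    n = int(SAMPLE_RATE * dur)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

# Toy "unit database"; real systems store recorded phones, diphones,
# or whole words, trading coverage against output quality.
UNITS = {"he": tone(220, 0.1), "lo": tone(330, 0.1), "wo": tone(440, 0.1)}

def synthesize(unit_names):
    """Concatenate stored units into one waveform -- the core idea of
    concatenative speech synthesis."""
    out = []
    for name in unit_names:
        out.extend(UNITS[name])
    return out

wave = synthesize(["he", "lo", "wo"])
```

Real concatenative systems also smooth the joins between units; this sketch only shows the selection-and-concatenation step.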
Alternatively, a synthesizer can incorporate a model of thevocal tractand other human voice characteristics to create a completely "synthetic" voice output.[74] Virtual assistants such as Siri and Alexa have the ability to turn text into audio and synthesize speech.[75] In 2016,Google DeepMindunveiled WaveNet, a deep generative model of raw audio waveforms that could learn to understand which waveforms best resembled human speech as well as musical instrumentation.[76]Some projects offer real-time generations of synthetic speech using deep learning, such as15.ai, aweb applicationtext-to-speech tool developed by anMITresearch scientist.[77][78][79][80] Natural-language generation(NLG, sometimes synonymous withtext synthesis) is a software process that transforms structured data into natural language. It can be used to produce long form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (achatbot) which might even be read out by atext-to-speechsystem. 
Interest in natural-language generation increased in 2019 after OpenAI unveiled GPT-2, an AI system that generates text matching its input in subject and tone.[81] GPT-2 is a transformer, a deep machine learning model introduced in 2017 used primarily in the field of natural language processing (NLP).[82] AI-generated media can be used to develop a hybrid graphics system that could be used in video games, movies, and virtual reality,[83] as well as text-based games such as AI Dungeon 2, which uses either GPT-2 or GPT-3 to allow for near-infinite possibilities that are otherwise impossible to create through traditional game development methods.[84][85][86] Computer hardware company Nvidia has also worked on developing AI-generated video game demos, such as a model that can generate an interactive game based on non-interactive videos.[87] Beyond attacks on organizations, political organizations and leaders have also suffered from such deepfake videos. In 2022, a deepfake was released in which the Ukrainian president appeared to call for surrender in the fight against Russia. The video shows the Ukrainian president telling his soldiers to lay down their arms and surrender.[88] Deepfakes have been used to misrepresent well-known politicians in videos. In separate videos, the face of the Argentine President Mauricio Macri has been replaced by the face of Adolf Hitler, and Angela Merkel's face has been replaced with Donald Trump's.[89][90] In June 2019, a downloadable Windows and Linux application called DeepNude was released which used neural networks, specifically generative adversarial networks, to remove clothing from images of women.
The app had both a paid and unpaid version, the paid version costing $50.[91][92] On June 27 the creators removed the application and refunded consumers.[93] The US Congress held a Senate meeting discussing the widespread impacts of synthetic media, including deepfakes, describing them as having the "potential to be used to undermine national security, erode public trust in our democracy and other nefarious reasons."[94] In 2019, voice cloning technology was used to successfully impersonate a chief executive's voice and demand a fraudulent transfer of €220,000.[95] The case raised concerns about the lack of encryption methods over telephones as well as the unconditional trust often given to voice and to media in general.[96] Starting in November 2019, multiple social media networks began banning synthetic media used for purposes of manipulation in the lead-up to the 2020 United States presidential election.[97] In 2024, Elon Musk shared a parody video without clarifying that it was satire, even as he spoke out against AI in politics.[98] The shared video depicts Kamala Harris saying things she never said in real life. A few lines from the video's transcription include: "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate." The voice then says that Kamala is a "diversity hire" and that she does not know "the first thing about running the country".[99] These are some examples of synthetic media potentially affecting the public's reaction to celebrities, political parties, organizations, and businesses. The potential to harm their image and reputation is concerning.
It may also erode social trust in public and private institutions, and it will be harder to maintain a belief in their ability to verify or authenticate "true" over "fake" content.[100][9] Citron (2019) lists the public officials who may be most affected: "elected officials, appointed officials, judges, juries, legislators, staffers, and agencies." Even private institutions will have to develop an awareness of, and policy responses to, this new media form, particularly if they have a wider impact on society.[101] Citron (2019) further states, "religious institutions are an obvious target, as are politically engaged entities ranging from Planned Parenthood to the NRA."[102] Indeed, researchers are concerned that synthetic media may deepen and extend the social hierarchies and class differences which gave rise to it in the first place.[103][9] A major concern surrounding synthetic media is that it is not only a matter of proving that something is false; it is also a matter of proving that something is authentic.[104] For example, a recent study shows that two out of three cybersecurity professionals noticed deepfakes used as part of disinformation against businesses in 2022, a 13% increase from the previous year.[105] Synthetic media techniques involve generating, manipulating, and altering data to emulate creative processes on a much faster and more accurate scale.[106] As a result, the potential uses are as wide as human creativity itself, ranging from revolutionizing the entertainment industry to accelerating the research and production of academia.
The initial application has been to synchronize lip-movements to increase the engagement of normal dubbing[107]that is growing fast with the rise ofOTTs.[108]News organizations have explored ways to use video synthesis and other synthetic media technologies to become more efficient and engaging.[109][110]Potential future hazards include the use of a combination of different subfields to generatefake news,[111]natural-language bot swarms generating trends andmemes, false evidence being generated, and potentially addiction to personalized content and a retreat into AI-generated fantasy worlds within virtual reality.[14] Advanced text-generatingbotscould potentially be used to manipulate social media platforms through tactics such asastroturfing.[112][113] Deep reinforcement learning-based natural-language generators could potentially be used to create advanced chatbots that could imitate natural human speech.[114] One use case for natural-language generation is to generate or assist with writing novels and short stories,[115]while other potential developments are that of stylistic editors to emulate professional writers.[116] Image synthesis tools may be able to streamline or even completely automate the creation of certain aspects of visual illustrations, such asanimated cartoons,comic books, andpolitical cartoons.[117]Because the automation process takes away the need for teams of designers, artists, and others involved in the making of entertainment, costs could plunge to virtually nothing and allow for the creation of "bedroom multimedia franchises" where singular people can generate results indistinguishable from the highest budget productions for little more than the cost of running their computer.[118]Character and scene creation tools will no longer be based on premade assets, thematic limitations, or personal skill but instead based on tweaking certain parameters and giving enough input.[119] A combination of speech synthesis and deepfakes has been used to 
automatically redub an actor's speech into multiple languages without the need for reshoots or language classes.[118]It can also be used by companies for employee onboarding, eLearning, explainer and how-to videos.[120] An increase in cyberattacks has also been feared due to methods ofphishing,catfishing, andsocial hackingbeing more easily automated by new technological methods.[96] Natural-language generation bots mixed with image synthesis networks may theoretically be used to clog search results, fillingsearch engineswith trillions of otherwise useless but legitimate-seeming blogs, websites, and marketing spam.[121] There has been speculation about deepfakes being used for creating digital actors for future films. Digitally constructed/altered humans have already been used infilmsbefore, and deepfakes could contribute new developments in the near future.[122]Amateur deepfake technology has already been used to insert faces into existing films, such as the insertion ofHarrison Ford's young face onto Han Solo's face inSolo: A Star Wars Story,[123]and techniques similar to those used by deepfakes were used for the acting of Princess Leia inRogue One.[124] GANs can be used to create photos of imaginary fashion models, with no need to hire a model, photographer, makeup artist, or pay for a studio and transportation.[125]GANs can be used to create fashion advertising campaigns including more diverse groups of models, which may increase intent to buy among people resembling the models[126]or family members.[127]GANs can also be used to create portraits, landscapes and album covers. 
The ability for GANs to generate photorealistic human bodies presents a challenge to industries such asfashion modeling, which may be at heightened risk of being automated.[128][129] In 2019, Dadabots unveiled an AI-generated stream of death metal which remains ongoing with no pauses.[130] Musical artists and their respective brands may also conceivably be generated from scratch, including AI-generated music, videos, interviews, and promotional material. Conversely, existing music can be completely altered at will, such as changing lyrics, singers, instrumentation, and composition.[131]In 2018, using a process by WaveNet for timbre musical transfer, researchers were able to shift entire genres from one to another.[132]Through the use of artificial intelligence, old bands and artists may be "revived" to release new material without pause, which may even include "live" concerts and promotional images. Neural network-poweredphoto manipulationalso has the potential to support problematic behavior of various state actors, not justtotalitarianandabsolutistregimes.[133] A sufficiently technically competent government or community may use synthetic media to engage in a rewrite of history using various synthetic technologies, fabricating history and personalities as well as changing ways of thinking – a form of potentialepistemicide. Even in otherwise rational and democratic societies, certain social and political groups may use synthetic media to craft cultural, political, and scientificfilter-bubblesthat greatly reduce or even altogether undermine the ability of the public to agree on basic objective facts. Conversely, the existence of synthetic media may be used to discredit factual news sources and scientific facts as "potentially fabricated."[55][9]
https://en.wikipedia.org/wiki/Synthetic_media
In probability and statistics, the class of exponential dispersion models (EDM), also called exponential dispersion family (EDF), is a set of probability distributions that represents a generalisation of the natural exponential family.[1][2][3] Exponential dispersion models play an important role in statistical theory, in particular in generalized linear models, because they have a special structure which enables deductions to be made about appropriate statistical inference. There are two versions to formulate an exponential dispersion model. In the univariate case, a real-valued random variable X{\displaystyle X} belongs to the additive exponential dispersion model with canonical parameter θ{\displaystyle \theta } and index parameter λ{\displaystyle \lambda }, X∼ED∗(θ,λ){\displaystyle X\sim \mathrm {ED} ^{*}(\theta ,\lambda )}, if its probability density function can be written as f(x∣θ,λ)=h∗(λ,x)exp⁡(θx−λA(θ)).{\displaystyle f(x\mid \theta ,\lambda )=h^{*}(\lambda ,x)\exp \left(\theta x-\lambda A(\theta )\right).} The distribution of the transformed random variable Y=Xλ{\displaystyle Y={\frac {X}{\lambda }}} is called the reproductive exponential dispersion model, Y∼ED(μ,σ2){\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})}, and is given by f(y∣μ,σ2)=h(σ2,y)exp⁡(θy−A(θ)σ2){\displaystyle f(y\mid \mu ,\sigma ^{2})=h(\sigma ^{2},y)\exp \left({\frac {\theta y-A(\theta )}{\sigma ^{2}}}\right)} with σ2=1λ{\displaystyle \sigma ^{2}={\frac {1}{\lambda }}} and μ=A′(θ){\displaystyle \mu =A'(\theta )}, implying θ=(A′)−1(μ){\displaystyle \theta =(A')^{-1}(\mu )}. The terminology dispersion model stems from interpreting σ2{\displaystyle \sigma ^{2}} as a dispersion parameter. For fixed parameter σ2{\displaystyle \sigma ^{2}}, the ED(μ,σ2){\displaystyle \mathrm {ED} (\mu ,\sigma ^{2})} is a natural exponential family. In the multivariate case, the n-dimensional random variable X{\displaystyle \mathbf {X} } has a probability density function of the following form:[1] f(x∣θ,λ)=h(λ,x)exp⁡(λ(θ⊤x−A(θ))),{\displaystyle f(\mathbf {x} \mid {\boldsymbol {\theta }},\lambda )=h(\lambda ,\mathbf {x} )\exp \left(\lambda ({\boldsymbol {\theta }}^{\top }\mathbf {x} -A({\boldsymbol {\theta }}))\right),} where the parameter θ{\displaystyle {\boldsymbol {\theta }}} has the same dimension as X{\displaystyle \mathbf {X} }.
The cumulant-generating function of Y∼ED(μ,σ2){\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})} is given by K(t;μ,σ2)=A(θ+σ2t)−A(θ)σ2{\displaystyle K(t;\mu ,\sigma ^{2})={\frac {A(\theta +\sigma ^{2}t)-A(\theta )}{\sigma ^{2}}}} with θ=(A′)−1(μ).{\displaystyle \theta =(A')^{-1}(\mu ).} Mean and variance of Y∼ED(μ,σ2){\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})} are given by E⁡(Y)=μ=A′(θ),Var⁡(Y)=σ2A″(θ)=σ2V(μ),{\displaystyle \operatorname {E} (Y)=\mu =A'(\theta ),\quad \operatorname {Var} (Y)=\sigma ^{2}A''(\theta )=\sigma ^{2}V(\mu ),} with unit variance function V(μ)=A″((A′)−1(μ)){\displaystyle V(\mu )=A''((A')^{-1}(\mu ))}. If Y1,…,Yn{\displaystyle Y_{1},\ldots ,Y_{n}} are i.i.d. with Yi∼ED(μ,σ2wi){\displaystyle Y_{i}\sim \mathrm {ED} \left(\mu ,{\frac {\sigma ^{2}}{w_{i}}}\right)}, i.e. same mean μ{\displaystyle \mu } and different weights wi{\displaystyle w_{i}}, the weighted mean is again an ED{\displaystyle \mathrm {ED} }, with 1w∙∑i=1nwiYi∼ED(μ,σ2w∙),{\displaystyle {\frac {1}{w_{\bullet }}}\sum _{i=1}^{n}w_{i}Y_{i}\sim \mathrm {ED} \left(\mu ,{\frac {\sigma ^{2}}{w_{\bullet }}}\right),} with w∙=∑i=1nwi{\displaystyle w_{\bullet }=\sum _{i=1}^{n}w_{i}}. Therefore the Yi{\displaystyle Y_{i}} are called reproductive. The probability density function of an ED(μ,σ2){\displaystyle \mathrm {ED} (\mu ,\sigma ^{2})} can also be expressed in terms of the unit deviance d(y,μ){\displaystyle d(y,\mu )} as f(y∣μ,σ2)=c(y,σ2)exp⁡(−d(y,μ)2σ2),{\displaystyle f(y\mid \mu ,\sigma ^{2})=c(y,\sigma ^{2})\exp \left(-{\frac {d(y,\mu )}{2\sigma ^{2}}}\right),} where the unit deviance takes the special form d(y,μ)=yf(μ)+g(μ)+h(y){\displaystyle d(y,\mu )=yf(\mu )+g(\mu )+h(y)} or, in terms of the unit variance function, d(y,μ)=2∫μyy−tV(t)dt{\displaystyle d(y,\mu )=2\int _{\mu }^{y}\!{\frac {y-t}{V(t)}}\,dt}. Many very common probability distributions belong to the class of EDMs, among them: the normal distribution, binomial distribution, Poisson distribution, negative binomial distribution, gamma distribution, inverse Gaussian distribution, and Tweedie distribution.
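The integral form of the unit deviance can be checked numerically for a concrete EDM. The Poisson distribution has unit variance function V(μ) = μ and the known closed-form unit deviance 2(y log(y/μ) − (y − μ)); a small sketch (midpoint-rule integration, with arbitrarily chosen y and μ):

```python
import math

def unit_deviance(y, mu, V, steps=100000):
    """d(y, mu) = 2 * integral from mu to y of (y - t) / V(t) dt,
    evaluated with the midpoint rule."""
    h = (y - mu) / steps
    total = 0.0
    for i in range(steps):
        t = mu + (i + 0.5) * h
        total += (y - t) / V(t)
    return 2 * total * h

# Poisson as an EDM: V(mu) = mu, closed-form deviance 2*(y*log(y/mu) - (y - mu)).
y, mu = 3.0, 1.5
numeric = unit_deviance(y, mu, lambda t: t)
closed = 2 * (y * math.log(y / mu) - (y - mu))
```

The same `unit_deviance` function recovers, for example, the squared-error deviance (y − μ)² when V(t) = 1, the normal-distribution case.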
https://en.wikipedia.org/wiki/Exponential_dispersion_model
In physics and mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems. The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as P(X=x)=exp⁡(−βE(x))Z(β).{\displaystyle P(X=x)={\frac {\exp(-\beta E(x))}{Z(\beta )}}.} Here, E is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter β is a free parameter; in physics, it is the inverse temperature. The normalizing constant Z(β) is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems. A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom. The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function.
Therefore, the Gibbs measure applies to widespread problems outside ofphysics, such asHopfield networks,Markov networks,Markov logic networks, andboundedly rational potential gamesin game theory and economics. A Gibbs measure in a system with local (finite-range) interactions maximizes theentropydensity for a given expectedenergy density; or, equivalently, it minimizes thefree energydensity. The Gibbs measure of an infinite system is not necessarily unique, in contrast to the canonical ensemble of a finite system, which is unique. The existence of more than one Gibbs measure is associated with statistical phenomena such assymmetry breakingandphase coexistence. The set of Gibbs measures on a system is always convex,[1]so there is either a unique Gibbs measure (in which case the system is said to be "ergodic"), or there are infinitely many (and the system is called "nonergodic"). In the nonergodic case, the Gibbs measures can be expressed as the set ofconvex combinationsof a much smaller number of special Gibbs measures known as "pure states" (not to be confused with the related but distinct notion ofpure states in quantum mechanics). In physical applications, theHamiltonian(the energy function) usually has some sense oflocality, and the pure states have thecluster decompositionproperty that "far-separated subsystems" are independent. In practice, physically realistic systems are found in one of these pure states. If the Hamiltonian possesses a symmetry, then a unique (i.e. ergodic) Gibbs measure will necessarily be invariant under the symmetry. But in the case of multiple (i.e. nonergodic) Gibbs measures, the pure states are typicallynotinvariant under the Hamiltonian's symmetry. For example, in the infinite ferromagneticIsing modelbelow the critical temperature, there are two pure states, the "mostly-up" and "mostly-down" states, which are interchanged under the model'sZ2{\displaystyle \mathbb {Z} _{2}}symmetry. 
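For a finite state space, the canonical ensemble defined at the start of this article can be computed directly; a minimal sketch (the example energies are arbitrary):

```python
import math

def gibbs_distribution(energies, beta):
    """Canonical ensemble: P(x) = exp(-beta * E(x)) / Z(beta)
    over a finite list of states with given energies."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function Z(beta)
    return [w / z for w in weights]

# Three states with energies 0, 1, 2 at inverse temperature beta = 1:
# lower-energy states receive higher probability.
probs = gibbs_distribution([0.0, 1.0, 2.0], beta=1.0)
```

Sending beta to 0 flattens the distribution toward uniform, while large beta concentrates all mass on the minimum-energy state, mirroring the temperature dependence of the physical ensemble.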
An example of the Markov property can be seen in the Gibbs measure of the Ising model. The probability for a given spin σk to be in state s could, in principle, depend on the states of all other spins in the system. Thus, we may write the probability as P(σk=s∣σj,j≠k).{\displaystyle P(\sigma _{k}=s\mid \sigma _{j},\,j\neq k).} However, in an Ising model with only finite-range interactions (for example, nearest-neighbor interactions), we actually have P(σk=s∣σj,j≠k)=P(σk=s∣σj,j∈Nk),{\displaystyle P(\sigma _{k}=s\mid \sigma _{j},\,j\neq k)=P(\sigma _{k}=s\mid \sigma _{j},\,j\in N_{k}),} where Nk is a neighborhood of the site k. That is, the probability at site k depends only on the spins in a finite neighborhood. This last equation is in the form of a local Markov property. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any positive probability distribution (nonzero density everywhere) having the Markov property can be represented as a Gibbs measure for an appropriate energy function.[2] This is the Hammersley–Clifford theorem. What follows is a formal definition for the special case of a random field on a lattice. The idea of a Gibbs measure is, however, much more general than this. The definition of a Gibbs random field on a lattice requires some terminology: We interpret ΦA as the contribution to the total energy (the Hamiltonian) associated to the interaction among all the points of the finite set A. Then HΛΦ(ω){\displaystyle H_{\Lambda }^{\Phi }(\omega )} is the contribution to the total energy of all the finite sets A that meet Λ{\displaystyle \Lambda }. Note that the total energy is typically infinite, but when we "localize" to each Λ{\displaystyle \Lambda } it may be finite. To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearest-neighbor interactions (coupling constant J) and a magnetic field (h), on Zd{\displaystyle \mathbb {Z} ^{d}}: the potential is ΦA(ω)=−Jωiωj{\displaystyle \Phi _{A}(\omega )=-J\omega _{i}\omega _{j}} if A={i,j} for nearest neighbors i and j, ΦA(ω)=−hωi{\displaystyle \Phi _{A}(\omega )=-h\omega _{i}} if A={i}, and ΦA=0{\displaystyle \Phi _{A}=0} otherwise.
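The local Markov property can be made concrete for a one-dimensional nearest-neighbor Ising chain, where the conditional distribution of one spin given everything else reduces to a function of its two neighbors. A sketch using the standard heat-bath conditional for the Hamiltonian −J Σ σσ − h Σ σ (parameter values are illustrative):

```python
import math

def conditional_spin_prob(s, left, right, J=1.0, h=0.0, beta=1.0):
    """P(sigma_k = s | all other spins) for a 1D nearest-neighbor Ising model.
    The Markov property is visible in the signature: only the two neighboring
    spins (left, right) enter the formula, not the rest of the chain."""
    field = J * (left + right) + h  # local field acting on site k
    return math.exp(beta * s * field) / (2 * math.cosh(beta * field))

# With both neighbors up, the spin strongly prefers to align with them.
p_up = conditional_spin_prob(+1, left=+1, right=+1)
p_down = conditional_spin_prob(-1, left=+1, right=+1)
```

Sweeping this conditional over all sites is exactly Gibbs sampling, which is one reason Markov random fields and Gibbs measures are so closely tied in computational practice.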
https://en.wikipedia.org/wiki/Gibbs_measure
In probability theory and statistics, the modified half-normal distribution (MHN)[1][2][3][4][5][6][7][8] is a three-parameter family of continuous probability distributions supported on the positive part of the real line. It can be viewed as a generalization of multiple families, including the half-normal distribution, truncated normal distribution, gamma distribution, and square root of the gamma distribution, all of which are special cases of the MHN distribution. Therefore, it is a flexible probability model for analyzing real-valued positive data. The name of the distribution is motivated by the similarities of its density function to that of the half-normal distribution. In addition to being used as a probability model, the MHN distribution also appears in Markov chain Monte Carlo (MCMC)-based Bayesian procedures, including Bayesian modeling of directional data,[4] Bayesian binary regression, and Bayesian graphical modeling. In Bayesian analysis, new distributions often appear as a conditional posterior distribution; the usage of many such probability distributions is too contextual, and they may not carry significance in a broader perspective. Additionally, many such distributions lack a tractable representation of their distributional aspects, such as a known functional form of the normalizing constant. The MHN distribution, however, occurs in diverse areas of research, signifying its relevance to contemporary Bayesian statistical modeling and the associated computation.[clarification needed] The moments (including variance and skewness) of the MHN distribution can be represented via the Fox–Wright Psi functions. There exists a recursive relation between three consecutive moments of the distribution; this is helpful in developing an efficient approximation for the mean of the distribution, as well as in constructing a moment-based estimation of its parameters.
The probability density function of the modified half-normal distribution is f(x)=2βα/2xα−1exp⁡(−βx2+γx)Ψ(α2,γβ)forx>0{\displaystyle f(x)={\frac {2\beta ^{\alpha /2}x^{\alpha -1}\exp(-\beta x^{2}+\gamma x)}{\Psi \left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)}}{\text{ for }}x>0} where Ψ(α2,γβ)=1Ψ1[(α2,12)(1,0);γβ]{\displaystyle \Psi \left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)={}_{1}\Psi _{1}\left[{\begin{matrix}({\frac {\alpha }{2}},{\frac {1}{2}})\\(1,0)\end{matrix}};{\frac {\gamma }{\sqrt {\beta }}}\right]} denotes the Fox–Wright Psi function.[9][10][11] The connection between the normalizing constant of the distribution and the Fox–Wright function is provided in Sun, Kong, and Pal.[1] The cumulative distribution function (CDF) is FMHN(x∣α,β,γ)=2βα/2Ψ(α2,γβ)∑i=0∞γi2i!β−(α+i)/2γ(α+i2,βx2)forx≥0,{\displaystyle F_{_{\text{MHN}}}(x\mid \alpha ,\beta ,\gamma )={\frac {2\beta ^{\alpha /2}}{\Psi \left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)}}\sum _{i=0}^{\infty }{\frac {\gamma ^{i}}{2i!}}\beta ^{-(\alpha +i)/2}\gamma \left({\frac {\alpha +i}{2}},\beta x^{2}\right){\text{ for }}x\geq 0,} where γ(s,y)=∫0yts−1e−tdt{\displaystyle \gamma (s,y)=\int _{0}^{y}t^{s-1}e^{-t}\,dt} denotes the lower incomplete gamma function. The modified half-normal distribution is an exponential family of distributions, and thus inherits the properties of exponential families. Let X∼MHN(α,β,γ){\displaystyle X\sim {\text{MHN}}(\alpha ,\beta ,\gamma )}. Choose a real value k≥0{\displaystyle k\geq 0} such that α+k>0{\displaystyle \alpha +k>0}.
Then the kth moment is

E(X^k) = Ψ((α+k)/2, γ/√β) / (β^(k/2) Ψ(α/2, γ/√β)).

Additionally, the moments satisfy the recursion

E(X^(k+2)) = ((α+k)/(2β)) E(X^k) + (γ/(2β)) E(X^(k+1)).

The variance of the distribution is

Var(X) = α/(2β) + E(X) (γ/(2β) − E(X)).

The moment generating function of the MHN distribution is

M_X(t) = Ψ(α/2, (γ+t)/√β) / Ψ(α/2, γ/√β).

Let X ~ MHN(α, β, γ) with α ≥ 1, β > 0, and γ ∈ ℝ, and denote the mode of the distribution by

X_mode = (γ + √(γ² + 8β(α−1))) / (4β).

If α > 1, then

X_mode ≤ E(X) ≤ (γ + √(γ² + 8αβ)) / (4β)

for all γ ∈ ℝ. As α gets larger, the difference between the upper and lower bounds approaches zero; therefore, these bounds also provide a high-precision approximation of E(X) when α is large.
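The moment recursion above lends itself to a quick numerical sanity check. The sketch below (parameter values are our own choices) integrates the unnormalized MHN density with a plain trapezoidal rule, so the Fox–Wright normalizing constant cancels in the moment ratios, and then verifies E(X^(k+2)) = ((α+k)/(2β)) E(X^k) + (γ/(2β)) E(X^(k+1)):

```python
import math

# Numerical check of the MHN moment recursion using trapezoidal integration of
# the unnormalized density g(x) = x^(alpha-1) exp(-beta x^2 + gamma x);
# the Fox-Wright normalizing constant cancels when moments are taken as ratios.

def mhn_moments(kmax, alpha, beta, gamma, upper=30.0, steps=100_000):
    """Return [E(X^0), ..., E(X^kmax)] for MHN(alpha, beta, gamma)."""
    h = upper / steps
    sums = [0.0] * (kmax + 1)
    for i in range(1, steps + 1):              # g(0) = 0 for alpha > 1
        x = i * h
        g = x ** (alpha - 1) * math.exp(-beta * x * x + gamma * x)
        w = 0.5 if i == steps else 1.0         # trapezoid endpoint weight
        xp = 1.0
        for j in range(kmax + 1):
            sums[j] += w * xp * g
            xp *= x
    return [s / sums[0] for s in sums]

alpha, beta, gamma, k = 3.0, 1.5, 0.7, 1
m = mhn_moments(k + 2, alpha, beta, gamma)
rhs = (alpha + k) / (2 * beta) * m[k] + gamma / (2 * beta) * m[k + 1]
print(abs(m[k + 2] - rhs))   # close to 0
```

The same routine can be reused to check the variance identity Var(X) = α/(2β) + E(X)(γ/(2β) − E(X)) from m[1] and m[2].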
On the other hand, if γ > 0 and α ≥ 4, then

log(X_mode) ≤ E(log X) ≤ log((γ + √(γ² + 8αβ)) / (4β)),

and the condition α ≥ 4 is a sufficient condition for the validity of this bound. For all α > 0, β > 0, and γ ∈ ℝ, Var(X) ≤ 1/(2β). The fact that X_mode ≤ E(X) implies that the distribution is positively skewed.

Let X ~ MHN(α, β, γ). If γ > 0, then there exists a random variable V such that V | X ~ Poisson(γX) and X² | V ~ Gamma((α+V)/2, β). On the contrary, if γ < 0, then there exists a random variable U such that U | X ~ GIG(1/2, 1, γ²X²) and X² | U ~ Gamma(α/2, β + γ²/U), where GIG denotes the generalized inverse Gaussian distribution.
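For γ > 0, the two conditional distributions above suggest a simple two-block Gibbs sampler whose stationary law is MHN(α, β, γ). A minimal sketch (helper names and parameter values are our own; the Gamma uses the shape–rate convention):

```python
import math, random

# Two-block Gibbs sampler suggested by the mixture representation for
# gamma > 0: alternate V | X ~ Poisson(gamma * X) with
# X^2 | V ~ Gamma((alpha + V)/2, beta) (shape-rate convention).
# Helper names and parameter values are our own.

def sample_poisson(mu, rng):
    """Knuth's multiplication method; fine for the moderate means used here."""
    limit, k, prod = math.exp(-mu), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def mhn_gibbs(alpha, beta, gamma, n_iter, rng):
    assert gamma > 0
    x = 1.0                                   # arbitrary positive start
    draws = []
    for _ in range(n_iter):
        v = sample_poisson(gamma * x, rng)
        # random.gammavariate takes (shape, scale); scale = 1/rate
        x = math.sqrt(rng.gammavariate((alpha + v) / 2.0, 1.0 / beta))
        draws.append(x)
    return draws

rng = random.Random(1)
draws = mhn_gibbs(alpha=3.0, beta=1.5, gamma=0.7, n_iter=20_000, rng=rng)
mean = sum(draws) / len(draws)
print(round(mean, 3))   # should fall between the bounds for E(X) given above
```

With α = 3 > 1, the sample mean should land between X_mode and (γ + √(γ² + 8αβ))/(4β), the bounds stated earlier.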
https://en.wikipedia.org/wiki/Modified_half-normal_distribution
In probability and statistics, a natural exponential family (NEF) is a class of probability distributions that is a special case of an exponential family (EF): a NEF is an exponential family in which the natural parameter η and the natural statistic T(x) are both the identity. A distribution in an exponential family with parameter θ can be written with probability density function (PDF)

f_X(x | θ) = h(x) exp(η(θ) T(x) − A(θ)),

where h(x) and A(θ) are known functions. A distribution in a natural exponential family with parameter θ can thus be written with PDF

f_X(x | θ) = h(x) exp(θx − A(θ)).

[Note that slightly different notation is used by the originator of the NEF, Carl Morris.[1] Morris uses ω instead of η and ψ instead of A.]
Suppose that x ∈ 𝒳 ⊆ ℝ^p. Then a natural exponential family of order p has density or mass function of the form

f_X(x | θ) = h(x) exp(θᵀx − A(θ)),

where in this case the parameter θ ∈ ℝ^p.

A member of a natural exponential family has moment generating function (MGF) of the form

M_X(t) = exp(A(θ + t) − A(θ)).

The cumulant generating function is by definition the logarithm of the MGF, so it is

K_X(t) = A(θ + t) − A(θ).

The Kullback–Leibler divergence between two members of the same natural exponential family with parameters θ and λ is the Bregman divergence generated by A:

D_KL(P_θ ‖ P_λ) = A(λ) − A(θ) − (λ − θ)ᵀ ∇A(θ).

The five most important univariate cases are the Poisson, binomial, negative binomial, normal, and gamma distributions. These five are a special subset of NEF, called NEF with quadratic variance function (NEF-QVF), because the variance can be written as a quadratic function of the mean; NEF-QVF are discussed below. Distributions such as the exponential, Bernoulli, and geometric distributions are special cases of the above five distributions. For example, the Bernoulli distribution is a binomial distribution with n = 1 trial, the exponential distribution is a gamma distribution with shape parameter α = 1 (or k = 1), and the geometric distribution is a special case of the negative binomial distribution. Some exponential family distributions are not NEF: the lognormal and beta distributions are in the exponential family, but not in the natural exponential family.
The gamma distribution with two parameters is an exponential family but not a NEF, and the chi-squared distribution is a special case of the gamma distribution with fixed scale parameter, and thus is also an exponential family but not a NEF (note that only a gamma distribution with fixed shape parameter is a NEF). The inverse Gaussian distribution is a NEF with a cubic variance function.

The parameterization of most of the above distributions has been written differently from the parameterization commonly used in textbooks and the above linked pages. For example, the above parameterization differs from the parameterization in the linked article in the Poisson case. The two parameterizations are related by θ = log(λ), where λ is the mean parameter, so that the density may be written as

f(k; θ) = (1/k!) exp(θ k − exp(θ))   for θ ∈ ℝ,

so that

h(k) = 1/k!,  and  A(θ) = exp(θ).

This alternative parameterization can greatly simplify calculations in mathematical statistics. For example, in Bayesian inference, a posterior probability distribution is calculated as the product of two distributions. Normally this calculation requires writing out the probability density functions (PDFs) and integrating; with the above parameterization, however, that calculation can be avoided. Instead, relationships between distributions can be abstracted due to the properties of the NEF described below. An example of the multivariate case is the multinomial distribution with a known number of trials. The properties of the natural exponential family can be used to simplify calculations involving these distributions.
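As a quick check of this parameterization (the example value λ = 2.5 is ours), the snippet below confirms that h(k) exp(θk − A(θ)) with θ = log λ and A(θ) = exp(θ) reproduces the textbook Poisson pmf, and that finite differences of A recover the Poisson mean and variance, both equal to λ:

```python
import math

# Checks for the Poisson NEF parameterization theta = log(lambda),
# h(k) = 1/k!, A(theta) = exp(theta):
#   (1) f(k; theta) reproduces the textbook pmf lambda^k e^(-lambda)/k!;
#   (2) numerically, A'(theta) and A''(theta) recover the Poisson mean
#       and variance (both equal lambda).

lam = 2.5
theta = math.log(lam)

def poisson_nef(k, theta):
    return math.exp(theta * k - math.exp(theta)) / math.factorial(k)

def poisson_pmf(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

for k in range(12):
    assert abs(poisson_nef(k, theta) - poisson_pmf(k, lam)) < 1e-12

A = math.exp                       # A(theta) = exp(theta)
h = 1e-5
mean = (A(theta + h) - A(theta - h)) / (2 * h)      # central difference
var = (A(theta + h) - 2 * A(theta) + A(theta - h)) / h**2
print(round(mean, 4), round(var, 4))   # both approximately lambda = 2.5
```

The derivative identities used here are exactly the mean and covariance formulas stated for the multivariate case below.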
In the multivariate case, the mean vector and covariance matrix are[citation needed]

E[X] = ∇A(θ)  and  Cov[X] = ∇∇ᵀA(θ),

where ∇ is the gradient and ∇∇ᵀ is the Hessian matrix.

A special case of the natural exponential families are those with quadratic variance functions. Six NEFs have quadratic variance functions (QVF), in which the variance of the distribution can be written as a quadratic function of the mean:

Var(X) = V(μ) = ν₀ + ν₁μ + ν₂μ².

These are called NEF-QVF; the properties of these distributions were first described by Carl Morris.[3] The six NEF-QVF are written here in increasing complexity of the relationship between variance and mean.

The properties of NEF-QVF can simplify calculations that use these distributions. A convolution of a linear transformation of an NEF-QVF is also an NEF-QVF. Given independent identically distributed (iid) X₁, …, X_n with distribution from a NEF-QVF, let

Y = Σ_{i=1}^{n} (X_i − b)/c

be the convolution of a linear transformation of X. The mean of Y is μ* = n(μ − b)/c. The variance of Y can be written in terms of the variance function of the original NEF-QVF: if the original NEF-QVF had variance function Var(X) = V(μ) = ν₀ + ν₁μ + ν₂μ², then the new NEF-QVF has variance function

Var(Y) = V*(μ*) = ν₀* + ν₁*μ* + ν₂*(μ*)²,

where

ν₀* = nV(b)/c²,  ν₁* = V′(b)/c,  ν₂* = ν₂/n.
https://en.wikipedia.org/wiki/Natural_exponential_family
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance.[1]

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent, and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and it is widely used.

If the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0, 1], we write X ~ B(n, p). The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function

f(k, n, p) = Pr(X = k) = C(n, k) p^k q^(n−k)   for k = 0, 1, 2, …, n,

where C(n, k) = n! / (k!(n−k)!) is the binomial coefficient. The formula can be understood as follows: p^k q^(n−k) is the probability of obtaining one particular sequence of n independent Bernoulli trials in which k trials are "successes" and the remaining n − k trials result in "failure". Since the trials are independent, with probabilities remaining constant between them, any sequence of n trials with k successes (and n − k failures) has the same probability of being achieved (regardless of the positions of the successes within the sequence).
There are C(n, k) such sequences, since the binomial coefficient C(n, k) counts the number of ways to choose the positions of the k successes among the n trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them (p^k q^(n−k)) must be added C(n, k) times, hence Pr(X = k) = C(n, k) p^k (1 − p)^(n−k).

In creating reference tables for binomial distribution probabilities, usually the table is filled in up to n/2 values. This is because, for k > n/2, the probability can be calculated by its complement as

f(k, n, p) = f(n − k, n, 1 − p).

Looking at the expression f(k, n, p) as a function of k, there is a k value that maximizes it. This k value can be found by calculating the ratio

f(k + 1, n, p) / f(k, n, p) = (n − k)p / ((k + 1)(1 − p))

and comparing it to 1. There is always an integer M that satisfies[2]

(n + 1)p − 1 ≤ M < (n + 1)p.

f(k, n, p) is monotone increasing for k < M and monotone decreasing for k > M, with the exception of the case where (n + 1)p is an integer. In this case, there are two values for which f is maximal: (n + 1)p and (n + 1)p − 1. M is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, M − p < np ≤ M + 1 − p. Taking the floor function, we obtain M = floor(np).[note 1]

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is

f(4, 6, 0.3) = C(6, 4) (0.3)⁴ (0.7)² ≈ 0.0595.

The cumulative distribution function can be expressed as

F(k; n, p) = Pr(X ≤ k) = Σ_{i=0}^{⌊k⌋} C(n, i) p^i (1 − p)^(n−i),

where ⌊k⌋ is the "floor" under k, i.e. the greatest integer less than or equal to k. It can also be represented in terms of the regularized incomplete beta function, as follows:[3]

F(k; n, p) = I_{1−p}(n − k, k + 1),

which is equivalent to the cumulative distribution functions of the beta distribution and of the F-distribution.[4] Some closed-form bounds for the cumulative distribution function are given below.
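The coin example above can be reproduced directly from the pmf; the short sketch below also checks the complement symmetry f(k, n, p) = f(n − k, n, 1 − p) used when building reference tables:

```python
from math import comb

# The coin example above: probability of exactly 4 heads in 6 tosses with
# p = 0.3, computed from the pmf, plus the complement symmetry
# f(k, n, p) = f(n - k, n, 1 - p) used when building reference tables.

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(4, 6, 0.3))   # about 0.0595
assert abs(binom_pmf(4, 6, 0.3) - binom_pmf(2, 6, 0.7)) < 1e-12
```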
If X ~ B(n, p), that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is[5]

E[X] = np.

This follows from the linearity of the expected value along with the fact that X is the sum of n identical Bernoulli random variables, each with expected value p. In other words, if X₁, …, X_n are identical (and independent) Bernoulli random variables with parameter p, then X = X₁ + … + X_n and

E[X] = E[X₁] + … + E[X_n] = np.

The variance is

Var(X) = np(1 − p) = npq.

This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.

The first six central moments, defined as μ_c = E[(X − E[X])^c], are polynomials in n, p, and q; in particular, μ₂ = npq and μ₃ = npq(1 − 2p). The non-central moments satisfy, in general,[6][7]

E[X^c] = Σ_{k=0}^{c} S(c, k) n^(k) p^k,

where S(c, k) are the Stirling numbers of the second kind, and n^(k) = n(n−1)⋯(n−k+1) is the kth falling power of n. A simple bound[8] follows by bounding the binomial moments via the higher Poisson moments; it shows that if c = O(√(np)), then E[X^c] is at most a constant factor away from E[X]^c.

Usually the mode of a binomial B(n, p) distribution is equal to ⌊(n + 1)p⌋, where ⌊·⌋ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode is 0 or n, correspondingly.

Proof: Let f(k) = C(n, k) p^k q^(n−k). For p = 0, only f(0) has a nonzero value, with f(0) = 1.
For p = 1 we find f(n) = 1 and f(k) = 0 for k ≠ n. This proves that the mode is 0 for p = 0 and n for p = 1.

Let 0 < p < 1. We find

f(k + 1) / f(k) = (n − k)p / ((k + 1)(1 − p)).

From this follows

k > (n + 1)p − 1 ⇒ f(k + 1) < f(k),
k = (n + 1)p − 1 ⇒ f(k + 1) = f(k),
k < (n + 1)p − 1 ⇒ f(k + 1) > f(k).

So when (n + 1)p − 1 is an integer, both (n + 1)p − 1 and (n + 1)p are modes. In the case that (n + 1)p − 1 ∉ ℤ, only ⌊(n + 1)p − 1⌋ + 1 = ⌊(n + 1)p⌋ is a mode.[9]

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique; however, several special results have been established.

For k ≤ np, upper bounds can be derived for the lower tail of the cumulative distribution function F(k; n, p) = Pr(X ≤ k), the probability that there are at most k successes. Since Pr(X ≥ k) = F(n − k; n, 1 − p), these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for k ≥ np.

Hoeffding's inequality yields the simple bound

F(k; n, p) ≤ exp(−2n (p − k/n)²),

which is however not very tight. In particular, for p = 1 we have that F(k; n, p) = 0 (for fixed k, n with k < n), but Hoeffding's bound evaluates to a positive constant.

A sharper bound can be obtained from the Chernoff bound:[15]

F(k; n, p) ≤ exp(−n D(k/n ‖ p)),

where D(a ‖ p) is the relative entropy (or Kullback–Leibler divergence) between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distributions):

D(a ‖ p) = a log(a/p) + (1 − a) log((1 − a)/(1 − p)).

Asymptotically, this bound is reasonably tight; see[15] for details. One can also obtain lower bounds on the tail F(k; n, p), known as anti-concentration bounds.
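Two of the results above are easy to confirm numerically (all parameter values are our own choices): the mean and variance formulas, by brute-force summation over the pmf, and the two tail bounds, which by Pinsker's inequality D(a ‖ p) ≥ 2(a − p)² always order as exact ≤ Chernoff ≤ Hoeffding:

```python
from math import comb, exp, log

# Two numerical checks: (i) E[X] = np and Var(X) = np(1-p) by summing over the
# pmf; (ii) the lower-tail bounds: exact F(k; n, p) vs. the Chernoff bound
# exp(-n D(k/n || p)) and the (weaker) Hoeffding bound exp(-2n (p - k/n)^2).

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 12, 0.3
mean = sum(k * pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean)**2 * pmf(k, n, p) for k in range(n + 1))
assert abs(mean - n * p) < 1e-9 and abs(var - n * p * (1 - p)) < 1e-9

def kl(a, p):       # relative entropy between an a-coin and a p-coin
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 36                       # a point with k <= np
exact = sum(pmf(i, n, p) for i in range(k + 1))
chernoff = exp(-n * kl(k / n, p))
hoeffding = exp(-2 * n * (p - k / n)**2)
print(exact, chernoff, hoeffding)
# Pinsker's inequality D(a||p) >= 2(a-p)^2 makes Chernoff the sharper bound:
assert exact <= chernoff <= hoeffding
```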
By approximating the binomial coefficient with Stirling's formula, it can be shown that[16]

F(k; n, p) ≥ (1 / √(8n (k/n)(1 − k/n))) exp(−n D(k/n ‖ p)),

which implies the simpler but looser bound

F(k; n, p) ≥ (1 / √(2n)) exp(−n D(k/n ‖ p)).

For p = 1/2 and k ≥ 3n/8 for even n, it is possible to make the denominator of the first bound constant.[17]

When n is known, the parameter p can be estimated using the proportion of successes:

p̂ = x/n.

This estimator is found using maximum likelihood estimation and also the method of moments. This estimator is unbiased and uniformly of minimum variance, as proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: x). It is also consistent both in probability and in MSE. This statistic is asymptotically normal, thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of var(p̂) = p(1 − p)/n, a property which is used in various ways, such as in Wald's confidence intervals.

A closed-form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general Beta(α, β) prior, the posterior mean estimator is

p̂_b = (x + α) / (n + α + β).

The Bayes estimator is asymptotically efficient, and as the sample size approaches infinity (n → ∞) it approaches the MLE solution.[18] The Bayes estimator is biased (how much depends on the prior), admissible, and consistent in probability. The Bayesian estimator with the Beta distribution can be used with Thompson sampling.

For the special case of using the standard uniform distribution as a non-informative prior, Beta(α = 1, β = 1) = U(0, 1), the posterior mean estimator becomes

p̂_b = (x + 1) / (n + 2).

(A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.
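The estimators just described fit in a few lines (the example counts x = 7, n = 10 are ours): the MLE x/n and the Beta-prior posterior mean (x + a)/(n + a + b), with a = b = 1 giving Laplace's rule of succession:

```python
# Point estimates of p from x successes in n trials: the MLE x/n and the
# posterior-mean Bayes estimator (x + a)/(n + a + b) under a Beta(a, b)
# prior; a = b = 1 (the uniform prior) gives Laplace's rule of succession.
# The example counts are our own.

def p_mle(x, n):
    return x / n

def p_bayes(x, n, a=1.0, b=1.0):
    return (x + a) / (n + a + b)

x, n = 7, 10
print(p_mle(x, n))                 # 0.7
print(p_bayes(x, n))               # 8/12, about 0.667
# rare-event case x = 0: the MLE collapses to 0, the Bayes estimate does not
print(p_mle(0, n), p_bayes(0, n))  # 0.0 and 1/12
```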
When relying on the Jeffreys prior, the prior is Beta(α = 1/2, β = 1/2),[19] which leads to the estimator

p̂ = (x + 1/2) / (n + 1).

When estimating p with very rare events and a small n (e.g., if x = 0), using the standard estimator leads to p̂ = 0, which is sometimes unrealistic and undesirable. In such cases there are various alternative estimators.[20] One way is to use the Bayes estimator p̂_b, which with a uniform prior and x = 0 leads to

p̂_b = 1 / (n + 2).

Another method is to use the upper bound of the confidence interval obtained using the rule of three:

p̂ = 3/n.

Even for quite large values of n, the actual distribution of the mean is significantly nonnormal.[21] Because of this problem, several methods to estimate confidence intervals have been proposed. A continuity correction of 0.5/n may be added.[22] One adjusted method modifies the estimate of p before forming the interval; it works well for n > 10 and n₁ ≠ 0, n,[23][24] while for n₁ = 0, n the Wilson (score) method below should be used.[25][26] The so-called "exact" (Clopper–Pearson) method is the most conservative.[21] (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased.

If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z = X + Y ~ B(n + m, p).[28] A binomially distributed random variable X ~ B(n, p) can be considered as the sum of n Bernoulli distributed random variables.
So the sum of two binomially distributed random variables X ~ B(n, p) and Y ~ B(m, p) is equivalent to the sum of n + m Bernoulli distributed random variables, which means Z = X + Y ~ B(n + m, p). This can also be proven directly using the addition rule. However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as B(n + m, p).

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of n independent non-identical Bernoulli trials B(p_i).[29]

The following result was first derived by Katz and coauthors in 1978.[30] Let X ~ B(n, p₁) and Y ~ B(m, p₂) be independent, and let T = (X/n) / (Y/m). Then log(T) is approximately normally distributed with mean log(p₁/p₂) and variance ((1/p₁) − 1)/n + ((1/p₂) − 1)/m.

If X ~ B(n, p) and Y | X ~ B(X, q) (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution Y ~ B(n, pq). For example, imagine throwing n balls at a basket U_X and taking the balls that hit and throwing them at another basket U_Y. If p is the probability of hitting U_X, then X ~ B(n, p) is the number of balls that hit U_X. If q is the probability of hitting U_Y, then the number of balls that hit U_Y is Y ~ B(X, q), and therefore Y ~ B(n, pq).

Since X ~ B(n, p) and Y ~ B(X, q), by the law of total probability,

Pr(Y = m) = Σ_{k=m}^{n} Pr(Y = m | X = k) Pr(X = k) = Σ_{k=m}^{n} C(n, k) C(k, m) p^k (1 − p)^(n−k) q^m (1 − q)^(k−m).

Since C(n, k) C(k, m) = C(n, m) C(n−m, k−m), the equation above can be expressed as

Pr(Y = m) = Σ_{k=m}^{n} C(n, m) C(n−m, k−m) p^k (1 − p)^(n−k) q^m (1 − q)^(k−m).

Factoring p^k = p^m p^(k−m) and pulling all the terms that don't depend on k out of the sum now yields

Pr(Y = m) = C(n, m) p^m q^m Σ_{k=m}^{n} C(n−m, k−m) (p(1 − q))^(k−m) (1 − p)^(n−k).

After substituting i = k − m in the expression above, we get

Pr(Y = m) = C(n, m) (pq)^m Σ_{i=0}^{n−m} C(n−m, i) (p − pq)^i (1 − p)^(n−m−i).

Notice that the sum (in the parentheses) above equals (p − pq + 1 − p)^(n−m) by the binomial theorem. Substituting this in finally yields

Pr(Y = m) = C(n, m) (pq)^m (1 − pq)^(n−m),

and thus Y ~ B(n, pq), as desired.
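The identity Y ~ B(n, pq) can also be verified exactly for small n by carrying out the law-of-total-probability sum numerically (the parameter values are our own):

```python
from math import comb

# Exact check that X ~ B(n, p) with Y | X ~ B(X, q) gives Y ~ B(n, pq):
# compute the marginal pmf of Y by the law of total probability and compare
# it with the pmf of B(n, pq). Parameter values are our own.

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p, q = 9, 0.6, 0.35
for m in range(n + 1):
    marginal = sum(pmf(k, n, p) * pmf(m, k, q) for k in range(m, n + 1))
    assert abs(marginal - pmf(m, n, p * q)) < 1e-12
print("marginal of Y matches B(n, pq) for n = 9")
```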
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ Bernoulli(p). Conversely, any binomial distribution B(n, p) is the distribution of the sum of n independent Bernoulli trials, Bernoulli(p), each with the same probability p.[31]

If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution

N(np, np(1 − p)),

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as n increases (at least 20) and is better when p is not near 0 or 1.[32] Various rules of thumb may be used to decide whether n is large enough and p is far enough from the extremes of zero or one; this can be made precise using the Berry–Esseen theorem.

The rule np ± 3√(np(1 − p)) ∈ (0, n) is equivalent to requiring

np − 3√(np(1 − p)) > 0  and  np + 3√(np(1 − p)) < n.

Moving terms around yields

3√(np(1 − p)) < np  and  3√(np(1 − p)) < n(1 − p).

Since 0 < p < 1, we can square both inequalities and divide by the respective factors np² and n(1 − p)², to obtain the desired conditions

n > 9 (1 − p)/p  and  n > 9 p/(1 − p).

Notice that these conditions automatically imply that n > 9, and that each step above is reversible for 0 < p < 1, so the two conditions together are equivalent to the original rule.

For an alternative form of this 3-standard-deviation rule, assume that both np and n(1 − p) are greater than 9. Since 0 < p < 1, we have 9 > 9(1 − p) and 9 > 9p, so np > 9(1 − p) and n(1 − p) > 9p. Dividing by the respective factors p and 1 − p recovers the conditions n > 9(1 − p)/p and n > 9p/(1 − p) above.

The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5).
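This example can be made concrete (the choice n = 20, p = 0.5 is ours), using math.erf for the normal cdf; it shows how much the extra 0.5 improves the approximation:

```python
from math import comb, erf, sqrt

# The worked example above: Pr(X <= 8) for X ~ B(n, p), approximated by
# Pr(Y <= 8.5) under Y ~ N(np, np(1-p)). The choice n = 20, p = 0.5 is ours.

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

n, p = 20, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))
exact = binom_cdf(8, n, p)
corrected = normal_cdf(8.5, mu, sigma)     # with continuity correction
uncorrected = normal_cdf(8.0, mu, sigma)   # without
print(exact, corrected, uncorrected)
# the corrected value is the markedly better approximation
assert abs(corrected - exact) < abs(uncorrected - exact)
```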
The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results. This approximation, known as the de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem, since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using x/n, the sample proportion and estimator of p, in a common test statistic.[35]

For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation

√(p(1 − p)/n).

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np converges to a finite limit. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to B(n, p) if n is sufficiently large and p is sufficiently small. According to rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05,[36] such that np ≤ 1; or if n > 50 and p < 0.1, such that np < 5;[37] or if n ≥ 100 and np ≤ 10.[38][39] Concerning the accuracy of the Poisson approximation, see Novak,[40] ch. 4, and references therein.

The binomial distribution and the beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events, each with probability p of success.
Mathematically, when α = k + 1 and β = n − k + 1, the beta distribution and the binomial distribution are related by a factor of n + 1:

Beta(p; k + 1, n − k + 1) = (n + 1) B(k; n, p).

Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference:[41] given a uniform prior, the posterior distribution for the probability of success p, given n independent events with k observed successes, is a beta distribution.[42]

Methods for random number generation where the marginal distribution is a binomial distribution are well established.[43][44] One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability Pr(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then, by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.

This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s), where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.[45]
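The inversion algorithm just described can be sketched in a few lines (function names and parameter values are our own):

```python
import random
from math import comb

# Inversion sampling for B(n, p), as described above: tabulate the pmf once,
# then map each uniform draw through the discrete cdf. Names are our own.

def binomial_cdf_table(n, p):
    table, cum = [], 0.0
    for k in range(n + 1):
        cum += comb(n, k) * p**k * (1 - p)**(n - k)
        table.append(cum)
    return table    # table[-1] is 1.0 up to round-off

def binomial_inversion(cdf_table, rng):
    u = rng.random()
    for k, c in enumerate(cdf_table):
        if u <= c:
            return k
    return len(cdf_table) - 1   # guard against round-off in the last entry

rng = random.Random(42)
n, p = 10, 0.3
table = binomial_cdf_table(n, p)
draws = [binomial_inversion(table, rng) for _ in range(50_000)]
print(sum(draws) / len(draws))   # close to np = 3.0
```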
https://en.wikipedia.org/wiki/Binomial_distribution
In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically-distributed random variables, where the number of terms to be added is itself a Poisson-distributed variable. The result can be either a continuous or a discrete distribution.

Suppose that

N ~ Poisson(λ),

i.e., N is a random variable whose distribution is a Poisson distribution with expected value λ, and that

X₁, X₂, X₃, …

are identically distributed random variables that are mutually independent and also independent of N. Then the probability distribution of the sum of N i.i.d. random variables,

Y = Σ_{n=1}^{N} X_n,

is a compound Poisson distribution. In the case N = 0, this is a sum of 0 terms, so the value of Y is 0; hence the conditional distribution of Y given that N = 0 is a degenerate distribution. The compound Poisson distribution is obtained by marginalising the joint distribution of (Y, N) over N, and this joint distribution can be obtained by combining the conditional distribution Y | N with the marginal distribution of N.

The expected value and the variance of the compound distribution can be derived in a simple way from the law of total expectation and the law of total variance:

E(Y) = E(N) E(X)  and  Var(Y) = E(N) Var(X) + Var(N) (E(X))².

Then, since E(N) = Var(N) = λ if N is Poisson-distributed, these formulae can be reduced to

E(Y) = λ E(X)  and  Var(Y) = λ E(X²).

The probability distribution of Y can be determined in terms of characteristic functions:

φ_Y(t) = E(e^(itY)) = E((φ_X(t))^N),

and hence, using the probability-generating function of the Poisson distribution, we have

φ_Y(t) = exp(λ(φ_X(t) − 1)).

An alternative approach is via cumulant generating functions:

K_Y(t) = K_N(K_X(t)) = λ(e^(K_X(t)) − 1).

Via the law of total cumulance it can be shown that, if the mean of the Poisson distribution is λ = 1, the cumulants of Y are the same as the moments of X₁.[citation needed]

Every infinitely divisible probability distribution is a limit of compound Poisson distributions,[1] and every compound Poisson distribution is itself infinitely divisible by definition.
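The reduced formulas E(Y) = λE(X) and Var(Y) = λE(X²) can be checked by simulation; the sketch below uses discrete jumps in {1, 2, 3} (the jump probabilities and the seed are our own choices):

```python
import random
from math import exp

# Monte Carlo check of E(Y) = lambda E(X) and Var(Y) = lambda E(X^2) for a
# compound Poisson sum with discrete jumps in {1, 2, 3}; the jump
# probabilities and the seed are our own choices.

def sample_poisson(mu, rng):          # Knuth's method, fine for small mu
    limit, k, prod = exp(-mu), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(7)
lam = 3.0
vals, probs = [1, 2, 3], [0.5, 0.3, 0.2]
ex = sum(v * w for v, w in zip(vals, probs))          # E(X)   = 1.7
ex2 = sum(v * v * w for v, w in zip(vals, probs))     # E(X^2) = 3.5

samples = []
for _ in range(100_000):
    n = sample_poisson(lam, rng)
    samples.append(sum(rng.choices(vals, probs, k=n)))

m = sum(samples) / len(samples)
v = sum((s - m)**2 for s in samples) / len(samples)
print(m, v)   # approximately lam*ex = 5.1 and lam*ex2 = 10.5
```

With integer-valued jumps like these, Y is exactly the discrete compound Poisson (stuttering-Poisson) case discussed next.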
When X₁, X₂, X₃, … are positive integer-valued i.i.d. random variables with P(X₁ = k) = α_k (k = 1, 2, …), this compound Poisson distribution is named the discrete compound Poisson distribution[2][3][4] (or stuttering-Poisson distribution[5]). We say that the discrete random variable Y satisfying the probability generating function characterization

P_Y(z) = E(z^Y) = exp(Σ_{i=1}^{∞} α_i λ (z^i − 1))

has a discrete compound Poisson (DCP) distribution with parameters (α₁λ, α₂λ, …) ∈ ℝ^∞ (where Σ_{i=1}^{∞} α_i = 1, with α_i ≥ 0 and λ > 0), which is denoted by

Y ~ DCP(λα₁, λα₂, …).

Moreover, if X ~ DCP(λα₁, …, λα_r), we say X has a discrete compound Poisson distribution of order r. When r = 1, 2, DCP becomes the Poisson distribution and the Hermite distribution, respectively. When r = 3, 4, DCP becomes the triple stuttering-Poisson distribution and the quadruple stuttering-Poisson distribution, respectively.[6] Other special cases include the shift geometric distribution, the negative binomial distribution, the geometric Poisson distribution, the Neyman type A distribution, and the Luria–Delbrück distribution in the Luria–Delbrück experiment. For more special cases of DCP, see the review paper[7] and references therein.

Feller's characterization of the compound Poisson distribution states that a non-negative integer-valued random variable X is infinitely divisible if and only if its distribution is a discrete compound Poisson distribution.[8] The negative binomial distribution is discrete infinitely divisible; i.e., if X has a negative binomial distribution, then for any positive integer n there exist discrete i.i.d. random variables X₁, …, X_n whose sum has the same distribution that X has.
The shifted geometric distribution is a discrete compound Poisson distribution, since it is a trivial case of the negative binomial distribution. This distribution can model batch arrivals (such as in a bulk queue[5][9]). The discrete compound Poisson distribution is also widely used in actuarial science for modelling the distribution of the total claim amount.[3] When some αk{\displaystyle \alpha _{k}} are negative, it is the discrete pseudo compound Poisson distribution.[3] We define that any discrete random variable Y{\displaystyle Y} satisfying the probability generating function characterization has a discrete pseudo compound Poisson distribution with parameters (λ1,λ2,…)=:(α1λ,α2λ,…)∈R∞{\displaystyle (\lambda _{1},\lambda _{2},\ldots )=:(\alpha _{1}\lambda ,\alpha _{2}\lambda ,\ldots )\in \mathbb {R} ^{\infty }} where ∑i=1∞αi=1{\textstyle \sum _{i=1}^{\infty }{\alpha _{i}}=1} and ∑i=1∞|αi|<∞{\textstyle \sum _{i=1}^{\infty }{\left|{\alpha _{i}}\right|}<\infty }, with αi∈R,λ>0{\displaystyle {\alpha _{i}}\in \mathbb {R} ,\lambda >0}. If X has a gamma distribution, of which the exponential distribution is a special case, then the conditional distribution of Y | N is again a gamma distribution. The marginal distribution of Y is a Tweedie distribution with variance power 1 < p < 2 (proof via comparison of characteristic functions).[10] To be more explicit, if N ∼ Poisson(λ) and the Xi are i.i.d. Gamma(α, β), then the distribution of Y = X1 + ⋯ + XN is a reproductive exponential dispersion model ED(μ,σ2){\displaystyle ED(\mu ,\sigma ^{2})}. The mapping of the Tweedie parameters μ,σ2,p{\displaystyle \mu ,\sigma ^{2},p} to the Poisson and gamma parameters λ,α,β{\displaystyle \lambda ,\alpha ,\beta } is the following: A compound Poisson process with rate λ>0{\displaystyle \lambda >0} and jump size distribution G is a continuous-time stochastic process {Y(t):t≥0}{\displaystyle \{\,Y(t):t\geq 0\,\}} given by Y(t) = D1 + ⋯ + DN(t), where the sum is by convention equal to zero as long as N(t) = 0. 
Here, {N(t):t≥0}{\displaystyle \{\,N(t):t\geq 0\,\}} is a Poisson process with rate λ{\displaystyle \lambda }, and {Di:i≥1}{\displaystyle \{\,D_{i}:i\geq 1\,\}} are independent and identically distributed random variables, with distribution function G, which are also independent of {N(t):t≥0}.{\displaystyle \{\,N(t):t\geq 0\,\}.\,}[11] The discrete version of the compound Poisson process can be used in survival analysis for frailty models.[12] A compound Poisson distribution, in which the summands have an exponential distribution, was used by Revfeim to model the distribution of the total rainfall in a day, where each day contains a Poisson-distributed number of events, each of which provides an amount of rainfall which has an exponential distribution.[13] Thompson applied the same model to monthly total rainfalls.[14] There have been applications to insurance claims[15][16] and x-ray computed tomography.[17][18][19]
https://en.wikipedia.org/wiki/Compound_Poisson_distribution
In probability theory and statistics, the Conway–Maxwell–Poisson (CMP or COM–Poisson) distribution is a discrete probability distribution named after Richard W. Conway, William L. Maxwell, and Siméon Denis Poisson that generalizes the Poisson distribution by adding a parameter to model overdispersion and underdispersion. It is a member of the exponential family,[1] has the Poisson distribution and geometric distribution as special cases, and the Bernoulli distribution as a limiting case.[2] The CMP distribution was originally proposed by Conway and Maxwell in 1962[3] as a solution to handling queueing systems with state-dependent service rates. The CMP distribution was introduced into the statistics literature by Boatwright et al. (2003)[4] and Shmueli et al. (2005).[2] The first detailed investigation into the probabilistic and statistical properties of the distribution was published by Shmueli et al. (2005).[2] Some theoretical probability results of the COM-Poisson distribution are studied and reviewed by Li et al. (2019),[5] especially the characterizations of the COM-Poisson distribution. The CMP distribution is defined to be the distribution with probability mass function P(X = x) = λ^x / ((x!)^ν Z(λ, ν)), for x = 0, 1, 2, …. The function Z(λ,ν){\displaystyle Z(\lambda ,\nu )} serves as a normalization constant so the probability mass function sums to one. Note that Z(λ,ν){\displaystyle Z(\lambda ,\nu )} does not have a closed form. The domain of admissible parameters is λ,ν>0{\displaystyle \lambda ,\nu >0}, or 0<λ<1{\displaystyle 0<\lambda <1} with ν=0{\displaystyle \nu =0}. The additional parameter ν{\displaystyle \nu }, which does not appear in the Poisson distribution, allows for adjustment of the rate of decay. This rate of decay is a non-linear decrease in ratios of successive probabilities, specifically Pr(X = x − 1)/Pr(X = x) = x^ν/λ. When ν=1{\displaystyle \nu =1}, the CMP distribution becomes the standard Poisson distribution, and as ν→∞{\displaystyle \nu \to \infty }, the distribution approaches a Bernoulli distribution with parameter λ/(1+λ){\displaystyle \lambda /(1+\lambda )}. 
Whenν=0{\displaystyle \nu =0}the CMP distribution reduces to ageometric distributionwith probability of success1−λ{\displaystyle 1-\lambda }providedλ<1{\displaystyle \lambda <1}.[2] For the CMP distribution, moments can be found through the recursive formula[2] For generalν{\displaystyle \nu }, there does not exist a closed form formula for thecumulative distribution functionofX∼CMP(λ,ν){\displaystyle X\sim \mathrm {CMP} (\lambda ,\nu )}. Ifν≥1{\displaystyle \nu \geq 1}is an integer, we can, however, obtain the following formula in terms of thegeneralized hypergeometric function:[6] Many important summary statistics, such as moments and cumulants, of the CMP distribution can be expressed in terms of the normalizing constantZ(λ,ν){\displaystyle Z(\lambda ,\nu )}.[2][7]Indeed, Theprobability generating functionisE⁡sX=Z(sλ,ν)/Z(λ,ν){\displaystyle \operatorname {E} s^{X}=Z(s\lambda ,\nu )/Z(\lambda ,\nu )}, and themeanandvarianceare given by Thecumulant generating functionis and thecumulantsare given by Whilst the normalizing constantZ(λ,ν)=∑i=0∞λi(i!)ν{\displaystyle Z(\lambda ,\nu )=\sum _{i=0}^{\infty }{\frac {\lambda ^{i}}{(i!)^{\nu }}}}does not in general have a closed form, there are some noteworthy special cases: Because the normalizing constant does not in general have a closed form, the followingasymptotic expansionis of interest. Fixν>0{\displaystyle \nu >0}. Then, asλ→∞{\displaystyle \lambda \rightarrow \infty },[8] where thecj{\displaystyle c_{j}}are uniquely determined by the expansion In particular,c0=1{\displaystyle c_{0}=1},c1=ν2−124{\displaystyle c_{1}={\frac {\nu ^{2}-1}{24}}},c2=ν2−11152(ν2+23){\displaystyle c_{2}={\frac {\nu ^{2}-1}{1152}}\left(\nu ^{2}+23\right)}. Furthercoefficientsare given in.[8] For general values ofν{\displaystyle \nu }, there does not exist closed form formulas for the mean, variance and moments of the CMP distribution. 
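Since Z(λ, ν) has no closed form, the pmf is computed in practice by truncating the series for Z. A minimal sketch, working in log space for numerical stability; the truncation length and parameter values are illustrative choices:

```python
import math

def cmp_pmf(x, lam, nu, terms=100):
    """P(X = x) for CMP(lam, nu), with Z(lam, nu) truncated at `terms` terms."""
    # log of the unnormalized weights lam^i / (i!)^nu
    logw = [i * math.log(lam) - nu * math.lgamma(i + 1) for i in range(terms)]
    m = max(logw)
    log_z = m + math.log(sum(math.exp(w - m) for w in logw))  # log Z via log-sum-exp
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)

# nu = 1 recovers the Poisson pmf; nu = 0 with lam < 1 recovers the geometric pmf.
p_poisson = cmp_pmf(3, 2.5, 1.0)  # Poisson(2.5) pmf at x = 3
p_geom = cmp_pmf(2, 0.5, 0.0)     # (1 - 0.5) * 0.5**2 = 0.125
```

The two special cases named in the text give quick correctness checks: at ν = 1 the result matches e^(−λ) λ^x / x!, and at ν = 0, λ < 1 it matches (1 − λ) λ^x.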
We do, however, have the following neat formula.[7]Let(j)r=j(j−1)⋯(j−r+1){\displaystyle (j)_{r}=j(j-1)\cdots (j-r+1)}denote thefalling factorial. LetX∼CMP(λ,ν){\displaystyle X\sim \mathrm {CMP} (\lambda ,\nu )},λ,ν>0{\displaystyle \lambda ,\nu >0}. Then forr∈N{\displaystyle r\in \mathbb {N} }. Since in general closed form formulas are not available for moments and cumulants of the CMP distribution, the following asymptotic formulas are of interest. LetX∼CMP(λ,ν){\displaystyle X\sim \mathrm {CMP} (\lambda ,\nu )}, whereν>0{\displaystyle \nu >0}. Denote theskewnessγ1=κ3σ3{\displaystyle \gamma _{1}={\frac {\kappa _{3}}{\sigma ^{3}}}}andexcess kurtosisγ2=κ4σ4{\displaystyle \gamma _{2}={\frac {\kappa _{4}}{\sigma ^{4}}}}, whereσ2=Var(X){\displaystyle \sigma ^{2}=\mathrm {Var} (X)}. Then, asλ→∞{\displaystyle \lambda \rightarrow \infty },[8] where The asymptotic series forκn{\displaystyle \kappa _{n}}holds for alln≥2{\displaystyle n\geq 2}, andκ1=E⁡X{\displaystyle \kappa _{1}=\operatorname {E} X}. Whenν{\displaystyle \nu }is an integer explicit formulas formomentscan be obtained. The caseν=1{\displaystyle \nu =1}corresponds to the Poisson distribution. Suppose now thatν=2{\displaystyle \nu =2}. Form∈N{\displaystyle m\in \mathbb {N} },[7] whereIr(x){\displaystyle I_{r}(x)}is themodified Bessel functionof the first kind. Using the connecting formula for moments and factorial moments gives In particular, the mean ofX{\displaystyle X}is given by Also, sinceE⁡X2=λ{\displaystyle \operatorname {E} X^{2}=\lambda }, the variance is given by Suppose now thatν≥1{\displaystyle \nu \geq 1}is an integer. Then[6] In particular, and Var(X)=λ22ν−10Fν−1(;3,…,3;λ)0Fν−1(;1,…,1;λ)+E⁡[X]−(E⁡[X])2.{\displaystyle \mathrm {Var} (X)={\frac {\lambda ^{2}}{2^{\nu -1}}}{\frac {_{0}F_{\nu -1}(;3,\ldots ,3;\lambda )}{_{0}F_{\nu -1}(;1,\ldots ,1;\lambda )}}+\operatorname {E} [X]-(\operatorname {E} [X])^{2}.} LetX∼CMP(λ,ν){\displaystyle X\sim \mathrm {CMP} (\lambda ,\nu )}. 
Then the mode of X{\displaystyle X} is ⌊λ1/ν⌋{\displaystyle \lfloor \lambda ^{1/\nu }\rfloor } if λ1/ν{\displaystyle \lambda ^{1/\nu }} is not an integer. Otherwise, the modes of X{\displaystyle X} are λ1/ν{\displaystyle \lambda ^{1/\nu }} and λ1/ν−1{\displaystyle \lambda ^{1/\nu }-1}.[7] The mean deviation of Xν{\displaystyle X^{\nu }} about its mean λ{\displaystyle \lambda } is given by[7] No explicit formula is known for the median of X{\displaystyle X}, but the following asymptotic result is available.[7] Let m{\displaystyle m} be the median of X∼CMP(λ,ν){\displaystyle X\sim {\mbox{CMP}}(\lambda ,\nu )}. Then m ∼ λ^(1/ν) as λ→∞{\displaystyle \lambda \rightarrow \infty }. Let X∼CMP(λ,ν){\displaystyle X\sim {\mbox{CMP}}(\lambda ,\nu )}, and suppose that f:Z+↦R{\displaystyle f:\mathbb {Z} ^{+}\mapsto \mathbb {R} } is such that E⁡|f(X+1)|<∞{\displaystyle \operatorname {E} |f(X+1)|<\infty } and E⁡|Xνf(X)|<∞{\displaystyle \operatorname {E} |X^{\nu }f(X)|<\infty }. Then E[λf(X+1) − X^ν f(X)] = 0. Conversely, suppose now that W{\displaystyle W} is a real-valued random variable supported on Z+{\displaystyle \mathbb {Z} ^{+}} such that E⁡[λf(W+1)−Wνf(W)]=0{\displaystyle \operatorname {E} [\lambda f(W+1)-W^{\nu }f(W)]=0} for all bounded f:Z+↦R{\displaystyle f:\mathbb {Z} ^{+}\mapsto \mathbb {R} }. Then W∼CMP(λ,ν){\displaystyle W\sim {\mbox{CMP}}(\lambda ,\nu )}.[7] Let Yn{\displaystyle Y_{n}} have the Conway–Maxwell–binomial distribution with parameters n{\displaystyle n}, p=λ/nν{\displaystyle p=\lambda /n^{\nu }} and ν{\displaystyle \nu }. Fix λ>0{\displaystyle \lambda >0} and ν>0{\displaystyle \nu >0}. Then Yn{\displaystyle Y_{n}} converges in distribution to the CMP(λ,ν){\displaystyle \mathrm {CMP} (\lambda ,\nu )} distribution as n→∞{\displaystyle n\rightarrow \infty }.[7] This result generalises the classical Poisson approximation of the binomial distribution. 
More generally, the CMP distribution arises as a limiting distribution of the Conway–Maxwell–Poisson binomial distribution.[7] Apart from the fact that the COM-binomial distribution approximates the COM-Poisson, Zhang et al. (2018)[9] illustrate that the COM-negative binomial distribution converges to a limiting distribution, which is the COM-Poisson, as r→+∞{\displaystyle {r\to +\infty }}. There are a few methods of estimating the parameters of the CMP distribution from the data. Two methods will be discussed: weighted least squares and maximum likelihood. The weighted least squares approach is simple and efficient but lacks precision. Maximum likelihood, on the other hand, is precise, but is more complex and computationally intensive. Weighted least squares provides a simple, efficient method to derive rough estimates of the parameters of the CMP distribution and to determine whether the distribution would be an appropriate model. Following the use of this method, an alternative method should be employed to compute more accurate estimates of the parameters if the model is deemed appropriate. This method uses the relationship of successive probabilities discussed above. By taking logarithms of both sides of this equation, the following linear relationship arises: log(p_{x−1}/p_x) = ν log x − log λ, where px{\displaystyle p_{x}} denotes Pr(X=x){\displaystyle \Pr(X=x)}. When estimating the parameters, the probabilities can be replaced by the relative frequencies of x{\displaystyle x} and x−1{\displaystyle x-1}. To determine if the CMP distribution is an appropriate model, these values should be plotted against log⁡x{\displaystyle \log x} for all ratios without zero counts. If the data appear to be linear, then the model is likely to be a good fit. Once the appropriateness of the model is determined, the parameters can be estimated by fitting a regression of log⁡(p^x−1/p^x){\displaystyle \log({\hat {p}}_{x-1}/{\hat {p}}_{x})} on log⁡x{\displaystyle \log x}. 
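A minimal sketch of this regression idea, using exact CMP probabilities rather than observed frequencies (so the relation log(p_{x−1}/p_x) = ν log x − log λ holds exactly and even the unweighted fit recovers the parameters; with real counts one would use the weighted variant described below). The parameter values are illustrative:

```python
import math

lam_true, nu_true = 3.0, 1.4  # illustrative "unknown" parameters

def log_weight(x):
    # log of lam^x / (x!)^nu; the normalizing constant Z cancels in ratios
    return x * math.log(lam_true) - nu_true * math.lgamma(x + 1)

xs = list(range(1, 11))
ys = [log_weight(x - 1) - log_weight(x) for x in xs]  # log(p_{x-1} / p_x)
ts = [math.log(x) for x in xs]

# ordinary least-squares line y = a*t + b: slope a = nu, intercept b = -log(lam)
n = len(xs)
tbar, ybar = sum(ts) / n, sum(ys) / n
a = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
    sum((t - tbar) ** 2 for t in ts)
b = ybar - a * tbar
nu_hat, lam_hat = a, math.exp(-b)
```

With noiseless probabilities the fit returns ν and λ to machine precision; the point of the exercise is that a near-straight plot of the log ratios against log x signals that the CMP model is plausible.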
However, the basic assumption ofhomoscedasticityis violated, so aweighted least squaresregression must be used. The inverse weight matrix will have the variances of each ratio on the diagonal with the one-step covariances on the first off-diagonal, both given below. The CMPlikelihood functionis whereS1=∑i=1nxi{\displaystyle S_{1}=\sum _{i=1}^{n}x_{i}}andS2=∑i=1nlog⁡xi!{\displaystyle S_{2}=\sum _{i=1}^{n}\log x_{i}!}. Maximizing the likelihood yields the following two equations which do not have an analytic solution. Instead, themaximum likelihoodestimates are approximated numerically by theNewton–Raphson method. In each iteration, the expectations, variances, and covariance ofX{\displaystyle X}andlog⁡X!{\displaystyle \log X!}are approximated by using the estimates forλ{\displaystyle \lambda }andν{\displaystyle \nu }from the previous iteration in the expression This is continued until convergence ofλ^{\displaystyle {\hat {\lambda }}}andν^{\displaystyle {\hat {\nu }}}. The basic CMP distribution discussed above has also been used as the basis for ageneralized linear model(GLM) using a Bayesian formulation. A dual-link GLM based on the CMP distribution has been developed,[10]and this model has been used to evaluate traffic accident data.[11][12]The CMP GLM developed by Guikema and Coffelt (2008) is based on a reformulation of the CMP distribution above, replacingλ{\displaystyle \lambda }withμ=λ1/ν{\displaystyle \mu =\lambda ^{1/\nu }}. The integral part ofμ{\displaystyle \mu }is then the mode of the distribution. A full Bayesian estimation approach has been used withMCMCsampling implemented inWinBugswithnon-informative priorsfor the regression parameters.[10][11]This approach is computationally expensive, but it yields the full posterior distributions for the regression parameters and allows expert knowledge to be incorporated through the use of informative priors. 
A classical GLM formulation for a CMP regression has been developed which generalizesPoisson regressionandlogistic regression.[13]This takes advantage of theexponential familyproperties of the CMP distribution to obtain elegant model estimation (viamaximum likelihood), inference, diagnostics, and interpretation. This approach requires substantially less computational time than the Bayesian approach, at the cost of not allowing expert knowledge to be incorporated into the model.[13]In addition it yields standard errors for the regression parameters (via the Fisher Information matrix) compared to the full posterior distributions obtainable via the Bayesian formulation. It also provides astatistical testfor the level of dispersion compared to a Poisson model. Code for fitting a CMP regression, testing for dispersion, and evaluating fit is available.[14] The two GLM frameworks developed for the CMP distribution significantly extend the usefulness of this distribution for data analysis problems.
https://en.wikipedia.org/wiki/Conway%E2%80%93Maxwell%E2%80%93Poisson_distribution
The Erlang distribution is a two-parameter family of continuous probability distributions with support x∈[0,∞){\displaystyle x\in [0,\infty )}. The two parameters are a positive integer k, the "shape", and a positive real number λ, the "rate". The Erlang distribution is the distribution of a sum of k{\displaystyle k} independent exponential variables with mean 1/λ{\displaystyle 1/\lambda } each. Equivalently, it is the distribution of the time until the kth event of a Poisson process with a rate of λ{\displaystyle \lambda }. The Erlang and Poisson distributions are complementary, in that while the Poisson distribution counts the events that occur in a fixed amount of time, the Erlang distribution counts the amount of time until the occurrence of a fixed number of events. When k=1{\displaystyle k=1}, the distribution simplifies to the exponential distribution. The Erlang distribution is the special case of the gamma distribution in which the shape parameter is a positive integer. The Erlang distribution was developed by A. K. Erlang to examine the number of telephone calls that might be made at the same time to the operators of the switching stations. This work on telephone traffic engineering has been expanded to consider waiting times in queueing systems in general. The distribution is also used in the field of stochastic processes. The probability density function of the Erlang distribution is f(x; k, λ) = λ^k x^(k−1) e^(−λx) / (k−1)!, for x, λ ≥ 0. The parameter k is called the shape parameter, and the parameter λ{\displaystyle \lambda } is called the rate parameter. An alternative, but equivalent, parametrization uses the scale parameter β{\displaystyle \beta }, which is the reciprocal of the rate parameter (i.e., β=1/λ{\displaystyle \beta =1/\lambda }). When the scale parameter β{\displaystyle \beta } equals 2, the distribution simplifies to the chi-squared distribution with 2k degrees of freedom. It can therefore be regarded as a generalized chi-squared distribution for even numbers of degrees of freedom. 
Thecumulative distribution functionof the Erlang distribution is whereγ{\displaystyle \gamma }is the lowerincomplete gamma functionandP{\displaystyle P}is thelower regularized gamma function. The CDF may also be expressed as The Erlang-kdistribution (wherekis a positive integer)Ek(λ){\displaystyle E_{k}(\lambda )}is defined by settingkin the PDF of the Erlang distribution.[1]For instance, the Erlang-2 distribution isE2(λ)=λ2xe−λxforx,λ≥0{\displaystyle E_{2}(\lambda )={\lambda ^{2}x}e^{-\lambda x}\quad {\mbox{for }}x,\lambda \geq 0}, which is the same asf(x;2,λ){\displaystyle f(x;2,\lambda )}. An asymptotic expansion is known for the median of an Erlang distribution,[2]for which coefficients can be computed and bounds are known.[3][4]An approximation iskλ(1−13k+0.2),{\displaystyle {\frac {k}{\lambda }}\left(1-{\dfrac {1}{3k+0.2}}\right),}i.e. below the meankλ.{\displaystyle {\frac {k}{\lambda }}.}[5] Erlang-distributed random variates can be generated from uniformly distributed random numbers (U∈[0,1]{\displaystyle U\in [0,1]}) using the following formula:[6] Events that occur independently with some average rate are modeled with aPoisson process. The waiting times betweenkoccurrences of the event are Erlang distributed. (The related question of the number of events in a given amount of time is described by thePoisson distribution.) The Erlang distribution, which measures the time between incoming calls, can be used in conjunction with the expected duration of incoming calls to produce information about the traffic load measured in erlangs. This can be used to determine the probability of packet loss or delay, according to various assumptions made about whether blocked calls are aborted (Erlang B formula) or queued until served (Erlang C formula). TheErlang-BandCformulae are still in everyday use for traffic modeling for applications such as the design ofcall centers. 
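The standard identity behind this generation scheme is E = −ln(U1 ⋯ Uk)/λ: each −ln(Ui)/λ is an Exp(λ) variate, and their sum is Erlang(k, λ). A minimal Monte-Carlo sketch with illustrative parameter values:

```python
import math
import random

random.seed(7)
k, lam, n = 5, 2.0, 100_000  # illustrative choices

def erlang_variate(k, lam):
    """Erlang(k, lam) draw as -ln(U_1 * ... * U_k) / lam."""
    u_prod = 1.0
    for _ in range(k):
        u_prod *= random.random()
    return -math.log(u_prod) / lam

samples = [erlang_variate(k, lam) for _ in range(n)]
mean_est = sum(samples) / n                               # theory: k/lam   = 2.5
var_est = sum((s - mean_est) ** 2 for s in samples) / n   # theory: k/lam^2 = 1.25
```

The sample mean and variance should land near k/λ = 2.5 and k/λ² = 1.25, matching the sum-of-exponentials characterization above.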
The age distribution of cancer incidence often follows the Erlang distribution, with the shape and scale parameters predicting, respectively, the number of driver events and the time interval between them.[7][8] More generally, the Erlang distribution has been suggested as a good approximation of the cell cycle time distribution, as a result of multi-stage models.[9][10] The kinesin is a molecular machine with two "feet" that "walks" along a filament. The waiting time between each step is exponentially distributed. When green fluorescent protein is attached to a foot of the kinesin, the green dot visibly moves with an Erlang distribution with k = 2.[11] The Erlang distribution has also been used in marketing for describing interpurchase times.[12]
https://en.wikipedia.org/wiki/Erlang_distribution
Inprobability theoryandstatistics, theexponential distributionornegative exponential distributionis theprobability distributionof the distance between events in aPoisson point process, i.e., a process in which events occur continuously and independently at a constant average rate; the distance parameter could be any meaningful mono-dimensional measure of the process, such as time between production errors, or length along a roll of fabric in the weaving manufacturing process.[1]It is a particular case of thegamma distribution. It is the continuous analogue of thegeometric distribution, and it has the key property of beingmemoryless.[2]In addition to being used for the analysis of Poisson point processes it is found in various other contexts.[3] The exponential distribution is not the same as the class ofexponential familiesof distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like thenormal,binomial,gamma, andPoissondistributions.[3] Theprobability density function(pdf) of an exponential distribution is Hereλ> 0 is the parameter of the distribution, often called therate parameter. The distribution is supported on the interval[0, ∞). If arandom variableXhas this distribution, we writeX~ Exp(λ). The exponential distribution exhibitsinfinite divisibility. 
Thecumulative distribution functionis given by The exponential distribution is sometimes parametrized in terms of thescale parameterβ= 1/λ, which is also the mean:f(x;β)={1βe−x/βx≥0,0x<0.F(x;β)={1−e−x/βx≥0,0x<0.{\displaystyle f(x;\beta )={\begin{cases}{\frac {1}{\beta }}e^{-x/\beta }&x\geq 0,\\0&x<0.\end{cases}}\qquad \qquad F(x;\beta )={\begin{cases}1-e^{-x/\beta }&x\geq 0,\\0&x<0.\end{cases}}} The mean orexpected valueof an exponentially distributed random variableXwith rate parameterλis given byE⁡[X]=1λ.{\displaystyle \operatorname {E} [X]={\frac {1}{\lambda }}.} In light of the examples givenbelow, this makes sense; a person who receives an average of two telephone calls per hour can expect that the time between consecutive calls will be 0.5 hour, or 30 minutes. ThevarianceofXis given byVar⁡[X]=1λ2,{\displaystyle \operatorname {Var} [X]={\frac {1}{\lambda ^{2}}},}so thestandard deviationis equal to the mean. ThemomentsofX, forn∈N{\displaystyle n\in \mathbb {N} }are given byE⁡[Xn]=n!λn.{\displaystyle \operatorname {E} \left[X^{n}\right]={\frac {n!}{\lambda ^{n}}}.} Thecentral momentsofX, forn∈N{\displaystyle n\in \mathbb {N} }are given byμn=!nλn=n!λn∑k=0n(−1)kk!.{\displaystyle \mu _{n}={\frac {!n}{\lambda ^{n}}}={\frac {n!}{\lambda ^{n}}}\sum _{k=0}^{n}{\frac {(-1)^{k}}{k!}}.}where !nis thesubfactorialofn ThemedianofXis given bym⁡[X]=ln⁡(2)λ<E⁡[X],{\displaystyle \operatorname {m} [X]={\frac {\ln(2)}{\lambda }}<\operatorname {E} [X],}wherelnrefers to thenatural logarithm. Thus theabsolute differencebetween the mean and median is|E⁡[X]−m⁡[X]|=1−ln⁡(2)λ<1λ=σ⁡[X],{\displaystyle \left|\operatorname {E} \left[X\right]-\operatorname {m} \left[X\right]\right|={\frac {1-\ln(2)}{\lambda }}<{\frac {1}{\lambda }}=\operatorname {\sigma } [X],} in accordance with themedian-mean inequality. 
An exponentially distributed random variableTobeys the relationPr(T>s+t∣T>s)=Pr(T>t),∀s,t≥0.{\displaystyle \Pr \left(T>s+t\mid T>s\right)=\Pr(T>t),\qquad \forall s,t\geq 0.} This can be seen by considering thecomplementary cumulative distribution function:Pr(T>s+t∣T>s)=Pr(T>s+t∩T>s)Pr(T>s)=Pr(T>s+t)Pr(T>s)=e−λ(s+t)e−λs=e−λt=Pr(T>t).{\displaystyle {\begin{aligned}\Pr \left(T>s+t\mid T>s\right)&={\frac {\Pr \left(T>s+t\cap T>s\right)}{\Pr \left(T>s\right)}}\\[4pt]&={\frac {\Pr \left(T>s+t\right)}{\Pr \left(T>s\right)}}\\[4pt]&={\frac {e^{-\lambda (s+t)}}{e^{-\lambda s}}}\\[4pt]&=e^{-\lambda t}\\[4pt]&=\Pr(T>t).\end{aligned}}} WhenTis interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, ifTis conditioned on a failure to observe the event over some initial period of times, the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, theconditional probabilitythat occurrence will take at least 10 more seconds is equal to the unconditional probability of observing the event more than 10 seconds after the initial time. The exponential distribution and thegeometric distributionarethe only memoryless probability distributions. The exponential distribution is consequently also necessarily the only continuous probability distribution that has a constantfailure rate. Thequantile function(inverse cumulative distribution function) for Exp(λ) isF−1(p;λ)=−ln⁡(1−p)λ,0≤p<1{\displaystyle F^{-1}(p;\lambda )={\frac {-\ln(1-p)}{\lambda }},\qquad 0\leq p<1} Thequartilesare therefore: And as a consequence theinterquartile rangeis ln(3)/λ. 
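The quantile formulas and the memoryless identity above can be verified directly on the closed forms (a deterministic check; the rate and time offsets are illustrative):

```python
import math

lam, s, t = 0.5, 2.0, 3.0  # illustrative rate and time offsets

def quantile(p):
    """F^{-1}(p) = -ln(1 - p) / lam for Exp(lam)."""
    return -math.log1p(-p) / lam

median = quantile(0.5)                 # theory: ln(2)/lam
iqr = quantile(0.75) - quantile(0.25)  # theory: ln(3)/lam

# memorylessness on the ccdf e^{-lam*t}: Pr(T > s+t)/Pr(T > s) = Pr(T > t)
memoryless_gap = math.exp(-lam * (s + t)) / math.exp(-lam * s) - math.exp(-lam * t)
```

Both quartile identities follow from the quantile function, and the conditional-survival ratio collapses to the unconditional tail probability exactly.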
The conditional value at risk (CVaR), also known as the expected shortfall or superquantile, for Exp(λ) is derived as follows:[4] q¯α(X)=11−α∫α1qp(X)dp=1(1−α)∫α1−ln⁡(1−p)λdp=−1λ(1−α)∫1−α0−ln⁡(y)dy=−1λ(1−α)∫01−αln⁡(y)dy=−1λ(1−α)[(1−α)ln⁡(1−α)−(1−α)]=−ln⁡(1−α)+1λ{\displaystyle {\begin{aligned}{\bar {q}}_{\alpha }(X)&={\frac {1}{1-\alpha }}\int _{\alpha }^{1}q_{p}(X)dp\\&={\frac {1}{(1-\alpha )}}\int _{\alpha }^{1}{\frac {-\ln(1-p)}{\lambda }}dp\\&={\frac {-1}{\lambda (1-\alpha )}}\int _{1-\alpha }^{0}-\ln(y)dy\\&={\frac {-1}{\lambda (1-\alpha )}}\int _{0}^{1-\alpha }\ln(y)dy\\&={\frac {-1}{\lambda (1-\alpha )}}[(1-\alpha )\ln(1-\alpha )-(1-\alpha )]\\&={\frac {-\ln(1-\alpha )+1}{\lambda }}\\\end{aligned}}} The buffered probability of exceedance is one minus the probability level at which the CVaR equals the threshold x{\displaystyle x}. It is derived as follows:[4] p¯x(X)={1−α|q¯α(X)=x}={1−α|−ln⁡(1−α)+1λ=x}={1−α|ln⁡(1−α)=1−λx}={1−α|eln⁡(1−α)=e1−λx}={1−α|1−α=e1−λx}=e1−λx{\displaystyle {\begin{aligned}{\bar {p}}_{x}(X)&=\{1-\alpha |{\bar {q}}_{\alpha }(X)=x\}\\&=\{1-\alpha |{\frac {-\ln(1-\alpha )+1}{\lambda }}=x\}\\&=\{1-\alpha |\ln(1-\alpha )=1-\lambda x\}\\&=\{1-\alpha |e^{\ln(1-\alpha )}=e^{1-\lambda x}\}=\{1-\alpha |1-\alpha =e^{1-\lambda x}\}=e^{1-\lambda x}\end{aligned}}} The directed Kullback–Leibler divergence in nats of Exp(λ) ("approximating" distribution) from Exp(λ0) ("true" distribution) is given by Δ(λ0∥λ)=Eλ0(log⁡pλ0(x)pλ(x))=Eλ0(log⁡λ0e−λ0xλe−λx)=log⁡(λ0)−log⁡(λ)−(λ0−λ)Eλ0(x)=log⁡(λ0)−log⁡(λ)+λλ0−1.{\displaystyle {\begin{aligned}\Delta (\lambda _{0}\parallel \lambda )&=\mathbb {E} _{\lambda _{0}}\left(\log {\frac {p_{\lambda _{0}}(x)}{p_{\lambda }(x)}}\right)\\&=\mathbb {E} _{\lambda _{0}}\left(\log {\frac {\lambda _{0}e^{-\lambda _{0}x}}{\lambda e^{-\lambda x}}}\right)\\&=\log(\lambda _{0})-\log(\lambda )-(\lambda _{0}-\lambda )E_{\lambda _{0}}(x)\\&=\log(\lambda _{0})-\log(\lambda )+{\frac {\lambda }{\lambda _{0}}}-1.\end{aligned}}} Among all continuous probability distributions with support [0, ∞) and mean μ, the exponential distribution with λ = 1/μ has the largest differential entropy. In other words, it is the maximum entropy probability distribution for a random variate X which is greater than or equal to zero and for which E[X] is fixed.[5] Let X1, ..., Xn be independent exponentially distributed random variables with rate parameters λ1, ..., λn. Then min{X1,…,Xn}{\displaystyle \min \left\{X_{1},\dotsc ,X_{n}\right\}} is also exponentially distributed, with parameter λ=λ1+⋯+λn.{\displaystyle \lambda =\lambda _{1}+\dotsb +\lambda _{n}.} This can be seen by considering the complementary cumulative distribution function: Pr(min{X1,…,Xn}>x)=Pr(X1>x,…,Xn>x)=∏i=1nPr(Xi>x)=∏i=1nexp⁡(−xλi)=exp⁡(−x∑i=1nλi).{\displaystyle {\begin{aligned}&\Pr \left(\min\{X_{1},\dotsc ,X_{n}\}>x\right)\\={}&\Pr \left(X_{1}>x,\dotsc ,X_{n}>x\right)\\={}&\prod _{i=1}^{n}\Pr \left(X_{i}>x\right)\\={}&\prod _{i=1}^{n}\exp \left(-x\lambda _{i}\right)=\exp \left(-x\sum _{i=1}^{n}\lambda _{i}\right).\end{aligned}}} The index of the variable which achieves the minimum is distributed according to the categorical distribution Pr(Xk=min{X1,…,Xn})=λkλ1+⋯+λn.{\displaystyle \Pr \left(X_{k}=\min\{X_{1},\dotsc ,X_{n}\}\right)={\frac {\lambda _{k}}{\lambda _{1}+\dotsb +\lambda _{n}}}.} A proof can be seen by letting I=argmini∈{1,⋯,n}⁡{X1,…,Xn}{\displaystyle I=\operatorname {argmin} _{i\in \{1,\dotsb ,n\}}\{X_{1},\dotsc ,X_{n}\}}. 
Then,Pr(I=k)=∫0∞Pr(Xk=x)Pr(∀i≠kXi>x)dx=∫0∞λke−λkx(∏i=1,i≠kne−λix)dx=λk∫0∞e−(λ1+⋯+λn)xdx=λkλ1+⋯+λn.{\displaystyle {\begin{aligned}\Pr(I=k)&=\int _{0}^{\infty }\Pr(X_{k}=x)\Pr(\forall _{i\neq k}X_{i}>x)\,dx\\&=\int _{0}^{\infty }\lambda _{k}e^{-\lambda _{k}x}\left(\prod _{i=1,i\neq k}^{n}e^{-\lambda _{i}x}\right)dx\\&=\lambda _{k}\int _{0}^{\infty }e^{-\left(\lambda _{1}+\dotsb +\lambda _{n}\right)x}dx\\&={\frac {\lambda _{k}}{\lambda _{1}+\dotsb +\lambda _{n}}}.\end{aligned}}} Note thatmax{X1,…,Xn}{\displaystyle \max\{X_{1},\dotsc ,X_{n}\}}is not exponentially distributed, ifX1, ...,Xndo not all have parameter 0.[6] LetX1,…,Xn{\displaystyle X_{1},\dotsc ,X_{n}}ben{\displaystyle n}independent and identically distributedexponential random variables with rate parameterλ. LetX(1),…,X(n){\displaystyle X_{(1)},\dotsc ,X_{(n)}}denote the correspondingorder statistics. Fori<j{\displaystyle i<j}, the joint momentE⁡[X(i)X(j)]{\displaystyle \operatorname {E} \left[X_{(i)}X_{(j)}\right]}of the order statisticsX(i){\displaystyle X_{(i)}}andX(j){\displaystyle X_{(j)}}is given byE⁡[X(i)X(j)]=∑k=0j−11(n−k)λE⁡[X(i)]+E⁡[X(i)2]=∑k=0j−11(n−k)λ∑k=0i−11(n−k)λ+∑k=0i−11((n−k)λ)2+(∑k=0i−11(n−k)λ)2.{\displaystyle {\begin{aligned}\operatorname {E} \left[X_{(i)}X_{(j)}\right]&=\sum _{k=0}^{j-1}{\frac {1}{(n-k)\lambda }}\operatorname {E} \left[X_{(i)}\right]+\operatorname {E} \left[X_{(i)}^{2}\right]\\&=\sum _{k=0}^{j-1}{\frac {1}{(n-k)\lambda }}\sum _{k=0}^{i-1}{\frac {1}{(n-k)\lambda }}+\sum _{k=0}^{i-1}{\frac {1}{((n-k)\lambda )^{2}}}+\left(\sum _{k=0}^{i-1}{\frac {1}{(n-k)\lambda }}\right)^{2}.\end{aligned}}} This can be seen by invoking thelaw of total expectationand the memoryless property:E⁡[X(i)X(j)]=∫0∞E⁡[X(i)X(j)∣X(i)=x]fX(i)(x)dx=∫x=0∞xE⁡[X(j)∣X(j)≥x]fX(i)(x)dx(sinceX(i)=x⟹X(j)≥x)=∫x=0∞x[E⁡[X(j)]+x]fX(i)(x)dx(by the memoryless property)=∑k=0j−11(n−k)λE⁡[X(i)]+E⁡[X(i)2].{\displaystyle {\begin{aligned}\operatorname {E} \left[X_{(i)}X_{(j)}\right]&=\int _{0}^{\infty }\operatorname {E} 
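The two facts above, that the minimum of independent exponentials is Exp(λ1 + ⋯ + λn) and that the argmin follows the categorical distribution with probabilities λk/(λ1 + ⋯ + λn), can be checked by simulation. A minimal Monte-Carlo sketch with illustrative rates:

```python
import random

random.seed(0)
rates = [1.0, 2.0, 3.0]  # illustrative rate parameters; sum = 6
n = 100_000

total = 0.0
first_wins = 0
for _ in range(n):
    draws = [random.expovariate(r) for r in rates]
    m = min(draws)
    total += m
    if draws.index(m) == 0:
        first_wins += 1

mean_min = total / n       # theory: 1/(1+2+3) = 1/6
p_first = first_wins / n   # theory: lam_1 / 6 = 1/6
```

With these rates the minimum should have mean 1/6, and the slowest component (rate 1) should achieve the minimum in about one sixth of the trials.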
\left[X_{(i)}X_{(j)}\mid X_{(i)}=x\right]f_{X_{(i)}}(x)\,dx\\&=\int _{x=0}^{\infty }x\operatorname {E} \left[X_{(j)}\mid X_{(j)}\geq x\right]f_{X_{(i)}}(x)\,dx&&\left({\textrm {since}}~X_{(i)}=x\implies X_{(j)}\geq x\right)\\&=\int _{x=0}^{\infty }x\left[\operatorname {E} \left[X_{(j)}\right]+x\right]f_{X_{(i)}}(x)\,dx&&\left({\text{by the memoryless property}}\right)\\&=\sum _{k=0}^{j-1}{\frac {1}{(n-k)\lambda }}\operatorname {E} \left[X_{(i)}\right]+\operatorname {E} \left[X_{(i)}^{2}\right].\end{aligned}}} The first equation follows from thelaw of total expectation. The second equation exploits the fact that once we condition onX(i)=x{\displaystyle X_{(i)}=x}, it must follow thatX(j)≥x{\displaystyle X_{(j)}\geq x}. The third equation relies on the memoryless property to replaceE⁡[X(j)∣X(j)≥x]{\displaystyle \operatorname {E} \left[X_{(j)}\mid X_{(j)}\geq x\right]}withE⁡[X(j)]+x{\displaystyle \operatorname {E} \left[X_{(j)}\right]+x}. The probability distribution function (PDF) of a sum of two independent random variables is theconvolution of their individual PDFs. 
IfX1{\displaystyle X_{1}}andX2{\displaystyle X_{2}}are independent exponential random variables with respective rate parametersλ1{\displaystyle \lambda _{1}}andλ2,{\displaystyle \lambda _{2},}then the probability density ofZ=X1+X2{\displaystyle Z=X_{1}+X_{2}}is given byfZ(z)=∫−∞∞fX1(x1)fX2(z−x1)dx1=∫0zλ1e−λ1x1λ2e−λ2(z−x1)dx1=λ1λ2e−λ2z∫0ze(λ2−λ1)x1dx1={λ1λ2λ2−λ1(e−λ1z−e−λ2z)ifλ1≠λ2λ2ze−λzifλ1=λ2=λ.{\displaystyle {\begin{aligned}f_{Z}(z)&=\int _{-\infty }^{\infty }f_{X_{1}}(x_{1})f_{X_{2}}(z-x_{1})\,dx_{1}\\&=\int _{0}^{z}\lambda _{1}e^{-\lambda _{1}x_{1}}\lambda _{2}e^{-\lambda _{2}(z-x_{1})}\,dx_{1}\\&=\lambda _{1}\lambda _{2}e^{-\lambda _{2}z}\int _{0}^{z}e^{(\lambda _{2}-\lambda _{1})x_{1}}\,dx_{1}\\&={\begin{cases}{\dfrac {\lambda _{1}\lambda _{2}}{\lambda _{2}-\lambda _{1}}}\left(e^{-\lambda _{1}z}-e^{-\lambda _{2}z}\right)&{\text{ if }}\lambda _{1}\neq \lambda _{2}\\[4pt]\lambda ^{2}ze^{-\lambda z}&{\text{ if }}\lambda _{1}=\lambda _{2}=\lambda .\end{cases}}\end{aligned}}}The entropy of this distribution is available in closed form: assumingλ1>λ2{\displaystyle \lambda _{1}>\lambda _{2}}(without loss of generality), thenH(Z)=1+γ+ln⁡(λ1−λ2λ1λ2)+ψ(λ1λ1−λ2),{\displaystyle {\begin{aligned}H(Z)&=1+\gamma +\ln \left({\frac {\lambda _{1}-\lambda _{2}}{\lambda _{1}\lambda _{2}}}\right)+\psi \left({\frac {\lambda _{1}}{\lambda _{1}-\lambda _{2}}}\right),\end{aligned}}}whereγ{\displaystyle \gamma }is theEuler-Mascheroni constant, andψ(⋅){\displaystyle \psi (\cdot )}is thedigamma function.[7] In the case of equal rate parameters, the result is anErlang distributionwith shape 2 and parameterλ,{\displaystyle \lambda ,}which in turn is a special case ofgamma distribution. The sum of n independent Exp(λ)exponential random variables is Gamma(n,λ)distributed. 
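The closed-form density for λ1 ≠ λ2 can be sanity-checked numerically: it should integrate to 1 and have mean 1/λ1 + 1/λ2. A sketch using a left Riemann sum on a truncated grid (the rates, step size, and cutoff are illustrative; the e^(−λ1·40) tail is negligible here):

```python
import math

lam1, lam2 = 1.0, 3.0  # illustrative rates, lam1 != lam2

def f_z(z):
    """Density of Z = X1 + X2 from the convolution formula above."""
    return lam1 * lam2 / (lam2 - lam1) * (math.exp(-lam1 * z) - math.exp(-lam2 * z))

h = 0.001
grid = [i * h for i in range(40_000)]         # z in [0, 40)
total_mass = h * sum(f_z(z) for z in grid)    # should be close to 1
mean_z = h * sum(z * f_z(z) for z in grid)    # theory: 1/lam1 + 1/lam2 = 4/3
```

The mean matches linearity of expectation for the sum of an Exp(1) and an Exp(3) variable.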
Below, suppose random variableXis exponentially distributed with rate parameter λ, andx1,…,xn{\displaystyle x_{1},\dotsc ,x_{n}}arenindependent samples fromX, with sample meanx¯{\displaystyle {\bar {x}}}. Themaximum likelihoodestimator for λ is constructed as follows. Thelikelihood functionfor λ, given anindependent and identically distributedsamplex= (x1, ...,xn) drawn from the variable, is:L(λ)=∏i=1nλexp⁡(−λxi)=λnexp⁡(−λ∑i=1nxi)=λnexp⁡(−λnx¯),{\displaystyle L(\lambda )=\prod _{i=1}^{n}\lambda \exp(-\lambda x_{i})=\lambda ^{n}\exp \left(-\lambda \sum _{i=1}^{n}x_{i}\right)=\lambda ^{n}\exp \left(-\lambda n{\overline {x}}\right),} where:x¯=1n∑i=1nxi{\displaystyle {\overline {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}is the sample mean. The derivative of the likelihood function's logarithm is:ddλln⁡L(λ)=ddλ(nln⁡λ−λnx¯)=nλ−nx¯{>0,0<λ<1x¯,=0,λ=1x¯,<0,λ>1x¯.{\displaystyle {\frac {d}{d\lambda }}\ln L(\lambda )={\frac {d}{d\lambda }}\left(n\ln \lambda -\lambda n{\overline {x}}\right)={\frac {n}{\lambda }}-n{\overline {x}}\ {\begin{cases}>0,&0<\lambda <{\frac {1}{\overline {x}}},\\[8pt]=0,&\lambda ={\frac {1}{\overline {x}}},\\[8pt]<0,&\lambda >{\frac {1}{\overline {x}}}.\end{cases}}} Consequently, themaximum likelihoodestimate for the rate parameter is:λ^mle=1x¯=n∑ixi{\displaystyle {\widehat {\lambda }}_{\text{mle}}={\frac {1}{\overline {x}}}={\frac {n}{\sum _{i}x_{i}}}} This isnotanunbiased estimatorofλ,{\displaystyle \lambda ,}althoughx¯{\displaystyle {\overline {x}}}is an unbiased[10]MLE[11]estimator of1/λ{\displaystyle 1/\lambda }and the distribution mean.
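A minimal sketch of the estimator just derived, using only the standard library (seed and sample size are illustrative):

```python
# Sketch: the maximum likelihood estimate of the rate is the reciprocal of
# the sample mean, lambda_hat = n / sum(x_i).
import random
import statistics

random.seed(42)
lam_true = 2.5
x = [random.expovariate(lam_true) for _ in range(100_000)]
lam_mle = 1.0 / statistics.fmean(x)  # equals n / sum(x)
assert abs(lam_mle - lam_true) / lam_true < 0.02  # consistent for large n
```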
The bias ofλ^mle{\displaystyle {\widehat {\lambda }}_{\text{mle}}}is equal toB≡E⁡[(λ^mle−λ)]=λn−1{\displaystyle B\equiv \operatorname {E} \left[\left({\widehat {\lambda }}_{\text{mle}}-\lambda \right)\right]={\frac {\lambda }{n-1}}}which yields thebias-corrected maximum likelihood estimatorλ^mle∗=λ^mle−B.{\displaystyle {\widehat {\lambda }}_{\text{mle}}^{*}={\widehat {\lambda }}_{\text{mle}}-B.} An approximate minimizer ofmean squared error(see also:bias–variance tradeoff) can be found, assuming a sample size greater than two, with a correction factor to the MLE:λ^=(n−2n)(1x¯)=n−2∑ixi{\displaystyle {\widehat {\lambda }}=\left({\frac {n-2}{n}}\right)\left({\frac {1}{\bar {x}}}\right)={\frac {n-2}{\sum _{i}x_{i}}}}This is derived from the mean and variance of theinverse-gamma distribution,Inv-Gamma(n,λ){\textstyle {\mbox{Inv-Gamma}}(n,\lambda )}.[12] TheFisher information, denotedI(λ){\displaystyle {\mathcal {I}}(\lambda )}, for an estimator of the rate parameterλ{\displaystyle \lambda }is given as:I(λ)=E⁡[(∂∂λlog⁡f(x;λ))2|λ]=∫(∂∂λlog⁡f(x;λ))2f(x;λ)dx{\displaystyle {\mathcal {I}}(\lambda )=\operatorname {E} \left[\left.\left({\frac {\partial }{\partial \lambda }}\log f(x;\lambda )\right)^{2}\right|\lambda \right]=\int \left({\frac {\partial }{\partial \lambda }}\log f(x;\lambda )\right)^{2}f(x;\lambda )\,dx} Plugging in the distribution and solving gives:I(λ)=∫0∞(∂∂λlog⁡λe−λx)2λe−λxdx=∫0∞(1λ−x)2λe−λxdx=λ−2.{\displaystyle {\mathcal {I}}(\lambda )=\int _{0}^{\infty }\left({\frac {\partial }{\partial \lambda }}\log \lambda e^{-\lambda x}\right)^{2}\lambda e^{-\lambda x}\,dx=\int _{0}^{\infty }\left({\frac {1}{\lambda }}-x\right)^{2}\lambda e^{-\lambda x}\,dx=\lambda ^{-2}.} This determines the amount of information each independent sample of an exponential distribution carries about the unknown rate parameterλ{\displaystyle \lambda }. 
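The bias formula and the mean-squared-error correction above can be checked by simulation; this is a sketch with illustrative parameter values, not a definitive implementation.

```python
# Sketch: empirically reproduce the bias E[lambda_hat] - lambda = lambda/(n-1)
# for small n, and compare MSEs of the raw and (n-2)/n-corrected estimators.
import random

random.seed(0)
lam, n, reps = 2.0, 5, 200_000
estimates = []
for _ in range(reps):
    x = [random.expovariate(lam) for _ in range(n)]
    estimates.append(n / sum(x))                 # MLE: 1 / sample mean
bias_emp = sum(estimates) / reps - lam
bias_theory = lam / (n - 1)                      # = 0.5 here
assert abs(bias_emp - bias_theory) < 0.02
# The (n-2)/n correction trades a small bias for much lower MSE:
mse_mle = sum((e - lam) ** 2 for e in estimates) / reps
mse_cor = sum(((n - 2) / n * e - lam) ** 2 for e in estimates) / reps
assert mse_cor < mse_mle
```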
An exact 100(1 − α)% confidence interval for the rate parameter of an exponential distribution is given by:[13]2nλ^mleχ1−α2,2n2<1λ<2nλ^mleχα2,2n2,{\displaystyle {\frac {2n}{{\widehat {\lambda }}_{\textrm {mle}}\chi _{1-{\frac {\alpha }{2}},2n}^{2}}}<{\frac {1}{\lambda }}<{\frac {2n}{{\widehat {\lambda }}_{\textrm {mle}}\chi _{{\frac {\alpha }{2}},2n}^{2}}}\,,}which is also equal to2nx¯χ1−α2,2n2<1λ<2nx¯χα2,2n2,{\displaystyle {\frac {2n{\overline {x}}}{\chi _{1-{\frac {\alpha }{2}},2n}^{2}}}<{\frac {1}{\lambda }}<{\frac {2n{\overline {x}}}{\chi _{{\frac {\alpha }{2}},2n}^{2}}}\,,}whereχ2p,vis the100(p)percentileof thechi squared distributionwithvdegrees of freedom, n is the number of observations and x-bar is the sample average. A simple approximation to the exact interval endpoints can be derived using a normal approximation to theχ2p,vdistribution. This approximation gives the following values for a 95% confidence interval:λlower=λ^(1−1.96n)λupper=λ^(1+1.96n){\displaystyle {\begin{aligned}\lambda _{\text{lower}}&={\widehat {\lambda }}\left(1-{\frac {1.96}{\sqrt {n}}}\right)\\\lambda _{\text{upper}}&={\widehat {\lambda }}\left(1+{\frac {1.96}{\sqrt {n}}}\right)\end{aligned}}} This approximation may be acceptable for samples containing at least 15 to 20 elements.[14] Theconjugate priorfor the exponential distribution is thegamma distribution(of which the exponential distribution is a special case).
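A sketch of the exact interval, assuming SciPy is available; it uses the fact that 2λ·Σxᵢ is chi-squared with 2n degrees of freedom (seed and sample size are illustrative).

```python
# Sketch: exact chi-squared confidence interval for the exponential rate.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
lam_true, n, a = 3.0, 50, 0.05
x = rng.exponential(scale=1 / lam_true, size=n)
s = x.sum()
lam_lo = chi2.ppf(a / 2, 2 * n) / (2 * s)        # lower bound for lambda
lam_hi = chi2.ppf(1 - a / 2, 2 * n) / (2 * s)    # upper bound for lambda
lam_hat = n / s                                  # the MLE sits inside the interval
assert lam_lo < lam_hat < lam_hi
# The normal approximation lam_hat*(1 +/- 1.96/sqrt(n)) is already close at n = 50:
assert abs(lam_lo - lam_hat * (1 - 1.96 / np.sqrt(n))) / lam_hat < 0.05
```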
The following parameterization of the gamma probability density function is useful: Gamma⁡(λ;α,β)=βαΓ(α)λα−1exp⁡(−λβ).{\displaystyle \operatorname {Gamma} (\lambda ;\alpha ,\beta )={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}\lambda ^{\alpha -1}\exp(-\lambda \beta ).} Theposterior distributionpcan then be expressed in terms of the likelihood function defined above and a gamma prior: p(λ)∝L(λ)Γ(λ;α,β)=λnexp⁡(−λnx¯)βαΓ(α)λα−1exp⁡(−λβ)∝λ(α+n)−1exp⁡(−λ(β+nx¯)).{\displaystyle {\begin{aligned}p(\lambda )&\propto L(\lambda )\Gamma (\lambda ;\alpha ,\beta )\\&=\lambda ^{n}\exp \left(-\lambda n{\overline {x}}\right){\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}\lambda ^{\alpha -1}\exp(-\lambda \beta )\\&\propto \lambda ^{(\alpha +n)-1}\exp(-\lambda \left(\beta +n{\overline {x}}\right)).\end{aligned}}} Now the posterior densityphas been specified up to a missing normalizing constant. Since it has the form of a gamma pdf, this can easily be filled in, and one obtains: p(λ)=Gamma⁡(λ;α+n,β+nx¯).{\displaystyle p(\lambda )=\operatorname {Gamma} (\lambda ;\alpha +n,\beta +n{\overline {x}}).} Here thehyperparameterαcan be interpreted as the number of prior observations, andβas the sum of the prior observations. The posterior mean here is:α+nβ+nx¯.{\displaystyle {\frac {\alpha +n}{\beta +n{\overline {x}}}}.} The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneousPoisson process. The exponential distribution may be viewed as a continuous counterpart of thegeometric distribution, which describes the number ofBernoulli trialsnecessary for adiscreteprocess to change state. In contrast, the exponential distribution describes the time for a continuous process to change state. In real-world scenarios, the assumption of a constant rate (or probability per unit time) is rarely satisfied. For example, the rate of incoming phone calls differs according to the time of day. 
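The conjugate update above can be sketched as follows; the prior hyperparameters and seed are illustrative choices, not values from the article.

```python
# Sketch: a Gamma(alpha0, beta0) prior on the exponential rate lambda updates
# to Gamma(alpha0 + n, beta0 + n*xbar) after observing n samples.
import random

random.seed(7)
alpha0, beta0 = 2.0, 1.0     # prior: alpha0 "prior observations", beta0 their sum
lam_true = 4.0
x = [random.expovariate(lam_true) for _ in range(5_000)]
n, sx = len(x), sum(x)
alpha_post, beta_post = alpha0 + n, beta0 + sx
post_mean = alpha_post / beta_post     # posterior mean of lambda
assert abs(post_mean - lam_true) / lam_true < 0.1  # concentrates near the truth
```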
But if we focus on a time interval during which the rate is roughly constant, such as from 2 to 4 p.m. during work days, the exponential distribution can be used as a good approximate model for the time until the next phone call arrives. Similar caveats apply to the following examples which yield approximately exponentially distributed variables: Exponential variables can also be used to model situations where certain events occur with a constant probability per unit length, such as the distance betweenmutationson aDNAstrand, or betweenroadkillson a given road. Inqueuing theory, the service times of agents in a system (e.g. how long it takes for a bank teller etc. to serve a customer) are often modeled as exponentially distributed variables. (The arrival of customers for instance is also modeled by thePoisson distributionif the arrivals are independent and distributed identically.) The length of a process that can be thought of as a sequence of several independent tasks follows theErlang distribution(which is the distribution of the sum of several independent exponentially distributed variables).Reliability theoryandreliability engineeringalso make extensive use of the exponential distribution. Because of the memoryless property of this distribution, it is well-suited to model the constanthazard rateportion of thebathtub curveused in reliability theory. It is also very convenient because it is so easy to addfailure ratesin a reliability model. The exponential distribution is however not appropriate to model the overall lifetime of organisms or technical devices, because the "failure rates" here are not constant: more failures occur for very young and for very old systems. Inphysics, if you observe agasat a fixedtemperatureandpressurein a uniformgravitational field, the heights of the various molecules also follow an approximate exponential distribution, known as theBarometric formula. This is a consequence of the entropy property mentioned below. 
Inhydrology, the exponential distribution is used to analyze extreme values of such variables as monthly and annual maximum values of daily rainfall and river discharge volumes.[16] In operating-rooms management, the exponential distribution is used to model the distribution of surgery durations for a category of surgeries withno typical work-content(as in an emergency room, encompassing all types of surgeries). Having observed a sample ofndata points from an unknown exponential distribution, a common task is to use these samples to make predictions about future data from the same source. A common predictive distribution over future samples is the so-called plug-in distribution, formed by plugging a suitable estimate for the rate parameterλinto the exponential density function. A common choice of estimate is the one provided by the principle of maximum likelihood, and using this yields the predictive density over a future samplexn+1, conditioned on the observed samplesx= (x1, ...,xn) given bypML(xn+1∣x1,…,xn)=(1x¯)exp⁡(−xn+1x¯).{\displaystyle p_{\rm {ML}}(x_{n+1}\mid x_{1},\ldots ,x_{n})=\left({\frac {1}{\overline {x}}}\right)\exp \left(-{\frac {x_{n+1}}{\overline {x}}}\right).} The Bayesian approach provides a predictive distribution which takes into account the uncertainty of the estimated parameter, although this may depend crucially on the choice of prior. A predictive distribution free of the issues of choosing priors that arise under the subjective Bayesian approach is pCNML(xn+1∣x1,…,xn)=nn+1(x¯)n(nx¯+xn+1)n+1,{\displaystyle p_{\rm {CNML}}(x_{n+1}\mid x_{1},\ldots ,x_{n})={\frac {n^{n+1}\left({\overline {x}}\right)^{n}}{\left(n{\overline {x}}+x_{n+1}\right)^{n+1}}},} which is the Conditional Normalized Maximum Likelihood (CNML) predictive distribution. The accuracy of a predictive distribution may be measured using the distance or divergence between the true exponential distribution with rate parameter,λ0, and the predictive distribution based on the samplex.
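The two predictive densities can be written out and compared directly; this sketch (SciPy assumed) checks that both integrate to one and that the CNML density has a heavier tail.

```python
# Sketch: the ML plug-in and CNML predictive densities as explicit functions.
import math
from scipy.integrate import quad

def p_ml(x_new, xbar):
    """Plug-in predictive density using lambda_hat = 1/xbar."""
    return (1.0 / xbar) * math.exp(-x_new / xbar)

def p_cnml(x_new, xbar, n):
    """CNML predictive density, as given in the text."""
    return n ** (n + 1) * xbar ** n / (n * xbar + x_new) ** (n + 1)

xbar, n = 2.0, 10
total_ml, _ = quad(lambda t: p_ml(t, xbar), 0, math.inf)
total_cnml, _ = quad(lambda t: p_cnml(t, xbar, n), 0, math.inf)
assert abs(total_ml - 1.0) < 1e-6 and abs(total_cnml - 1.0) < 1e-6
# The CNML density spreads more mass into the tail than the plug-in:
assert p_cnml(10 * xbar, xbar, n) > p_ml(10 * xbar, xbar)
```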
TheKullback–Leibler divergenceis a commonly used, parameterisation free measure of the difference between two distributions. Letting Δ(λ0||p) denote the Kullback–Leibler divergence between an exponential with rate parameterλ0and a predictive distributionpit can be shown that Eλ0⁡[Δ(λ0∥pML)]=ψ(n)+1n−1−log⁡(n)Eλ0⁡[Δ(λ0∥pCNML)]=ψ(n)+1n−log⁡(n){\displaystyle {\begin{aligned}\operatorname {E} _{\lambda _{0}}\left[\Delta (\lambda _{0}\parallel p_{\rm {ML}})\right]&=\psi (n)+{\frac {1}{n-1}}-\log(n)\\\operatorname {E} _{\lambda _{0}}\left[\Delta (\lambda _{0}\parallel p_{\rm {CNML}})\right]&=\psi (n)+{\frac {1}{n}}-\log(n)\end{aligned}}} where the expectation is taken with respect to the exponential distribution with rate parameterλ0∈ (0, ∞), andψ( · )is the digamma function. It is clear that the CNML predictive distribution is strictly superior to the maximum likelihood plug-in distribution in terms of average Kullback–Leibler divergence for all sample sizesn> 0. A conceptually very simple method for generating exponentialvariatesis based oninverse transform sampling: Given a random variateUdrawn from theuniform distributionon the unit interval(0, 1), the variate T=F−1(U){\displaystyle T=F^{-1}(U)} has an exponential distribution, whereF−1is thequantile function, defined by F−1(p)=−ln⁡(1−p)λ.{\displaystyle F^{-1}(p)={\frac {-\ln(1-p)}{\lambda }}.} Moreover, ifUis uniform on (0, 1), then so is 1 −U. This means one can generate exponential variates as follows: T=−ln⁡(U)λ.{\displaystyle T={\frac {-\ln(U)}{\lambda }}.} Other methods for generating exponential variates are discussed by Knuth[20]and Devroye.[21] A fast method for generating a set of ready-ordered exponential variates without using a sorting routine is also available.[21]
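The inverse transform recipe just described can be sketched with the standard library alone (using 1 − U so the logarithm's argument stays in (0, 1]):

```python
# Sketch of inverse transform sampling: T = -ln(1 - U)/lambda is exponential
# with rate lambda when U is uniform on [0, 1).
import math
import random

random.seed(3)
lam = 0.5
sample = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]
mean = sum(sample) / len(sample)
assert abs(mean - 1 / lam) < 0.03      # E[T] = 1/lambda = 2
```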
https://en.wikipedia.org/wiki/Exponential_distribution
Inprobability theoryandstatistics, thegamma distributionis a versatile two-parameterfamily of continuousprobability distributions.[1]Theexponential distribution,Erlang distribution, andchi-squared distributionare special cases of the gamma distribution.[2]There are two equivalent parameterizations in common use: one with a shape parameterαand a scale parameterθ, and one with the shape parameterαand a rate parameterλ= 1/θ. In each of these forms, both parameters are positive real numbers. The distribution has important applications in various fields, includingeconometrics,Bayesian statistics, and life testing.[3]In econometrics, the (α,θ) parameterization is common for modeling waiting times, such as the time until death, where it often takes the form of anErlang distributionfor integerαvalues. Bayesian statisticians prefer the (α,λ) parameterization, utilizing the gamma distribution as aconjugate priorfor several inverse scale parameters, facilitating analytical tractability in posterior distribution computations. The probability density and cumulative distribution functions of the gamma distribution vary based on the chosen parameterization, both offering insights into the behavior of gamma-distributed random variables. The gamma distribution is integral to modeling a range of phenomena due to its flexible shape, which can capture various statistical distributions, including the exponential and chi-squared distributions under specific conditions. Its mathematical properties, such as mean, variance, skewness, and higher moments, provide a toolset for statistical analysis and inference.
Practical applications of the distribution span several disciplines, underscoring its importance in theoretical and applied statistics.[4] The gamma distribution is themaximum entropy probability distribution(both with respect to a uniform base measure and a1/x{\displaystyle 1/x}base measure) for a random variableXfor whichE[X] =αθ=α/λis fixed and greater than zero, andE[lnX] =ψ(α) + lnθ=ψ(α) − lnλis fixed (ψis thedigamma function).[5] The parameterization withαandθappears to be more common ineconometricsand other applied fields, where the gamma distribution is frequently used to model waiting times. For instance, inlife testing, the waiting time until death is arandom variablethat is frequently modeled with a gamma distribution. See Hogg and Craig[6]for an explicit motivation. The parameterization withαandλis more common inBayesian statistics, where the gamma distribution is used as aconjugate priordistribution for various types of inverse scale (rate) parameters, such as theλof anexponential distributionor aPoisson distribution[7]– or for that matter, theλof the gamma distribution itself. The closely relatedinverse-gamma distributionis used as a conjugate prior for scale parameters, such as thevarianceof anormal distribution. Ifαis a positiveinteger, then the distribution represents anErlang distribution; i.e., the sum ofαindependentexponentially distributedrandom variables, each of which has a mean ofθ. The gamma distribution can be parameterized in terms of ashape parameterαand an inverse scale parameterλ= 1/θ, called arate parameter. 
A random variableXthat is gamma-distributed with shapeαand rateλis denoted X∼Γ(α,λ)≡Gamma⁡(α,λ){\displaystyle X\sim \Gamma (\alpha ,\lambda )\equiv \operatorname {Gamma} (\alpha ,\lambda )} The corresponding probability density function in the shape-rate parameterization is f(x;α,λ)=xα−1e−λxλαΓ(α)forx>0α,λ>0,{\displaystyle {\begin{aligned}f(x;\alpha ,\lambda )&={\frac {x^{\alpha -1}e^{-\lambda x}\lambda ^{\alpha }}{\Gamma (\alpha )}}\quad {\text{ for }}x>0\quad \alpha ,\lambda >0,\\[6pt]\end{aligned}}} whereΓ(α){\displaystyle \Gamma (\alpha )}is thegamma function. For all positive integers,Γ(α)=(α−1)!{\displaystyle \Gamma (\alpha )=(\alpha -1)!}. Thecumulative distribution functionis the regularized gamma function: F(x;α,λ)=∫0xf(u;α,λ)du=γ(α,λx)Γ(α),{\displaystyle F(x;\alpha ,\lambda )=\int _{0}^{x}f(u;\alpha ,\lambda )\,du={\frac {\gamma (\alpha ,\lambda x)}{\Gamma (\alpha )}},} whereγ(α,λx){\displaystyle \gamma (\alpha ,\lambda x)}is the lowerincomplete gamma function. Ifαis a positiveinteger(i.e., the distribution is anErlang distribution), the cumulative distribution function has the following series expansion:[8] F(x;α,λ)=1−∑i=0α−1(λx)ii!e−λx=e−λx∑i=α∞(λx)ii!.{\displaystyle F(x;\alpha ,\lambda )=1-\sum _{i=0}^{\alpha -1}{\frac {(\lambda x)^{i}}{i!}}e^{-\lambda x}=e^{-\lambda x}\sum _{i=\alpha }^{\infty }{\frac {(\lambda x)^{i}}{i!}}.} A random variableXthat is gamma-distributed with shapeαand scaleθis denoted by X∼Γ(α,θ)≡Gamma⁡(α,θ){\displaystyle X\sim \Gamma (\alpha ,\theta )\equiv \operatorname {Gamma} (\alpha ,\theta )} Theprobability density functionusing the shape-scale parametrization is f(x;α,θ)=xα−1e−x/θθαΓ(α)forx>0andα,θ>0.{\displaystyle f(x;\alpha ,\theta )={\frac {x^{\alpha -1}e^{-x/\theta }}{\theta ^{\alpha }\Gamma (\alpha )}}\quad {\text{ for }}x>0{\text{ and }}\alpha ,\theta >0.} HereΓ(α)is thegamma functionevaluated atα. 
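A sketch of the shape-rate density, checked against `scipy.stats.gamma` (which takes the shape and a scale equal to the reciprocal rate); the log-space evaluation is a numerical-safety choice, not something the article prescribes.

```python
# Sketch: the shape-rate gamma density evaluated via logarithms, compared
# against scipy.stats.gamma with scale = 1/rate.
import math
from scipy.special import gammaln
from scipy.stats import gamma

def gamma_pdf(x, alpha, lam):
    """f(x; alpha, lambda) = x^(alpha-1) e^(-lam*x) lam^alpha / Gamma(alpha)."""
    return math.exp((alpha - 1) * math.log(x) - lam * x
                    + alpha * math.log(lam) - gammaln(alpha))

alpha, lam = 3.5, 2.0
for x in (0.1, 1.0, 4.0):
    assert abs(gamma_pdf(x, alpha, lam) - gamma.pdf(x, alpha, scale=1 / lam)) < 1e-12
```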
Thecumulative distribution functionis the regularized gamma function: F(x;α,θ)=∫0xf(u;α,θ)du=γ(α,xθ)Γ(α),{\displaystyle F(x;\alpha ,\theta )=\int _{0}^{x}f(u;\alpha ,\theta )\,du={\frac {\gamma \left(\alpha ,{\frac {x}{\theta }}\right)}{\Gamma (\alpha )}},} whereγ(α,xθ){\displaystyle \gamma \left(\alpha ,{\frac {x}{\theta }}\right)}is the lowerincomplete gamma function. It can also be expressed as follows, ifαis a positiveinteger(i.e., the distribution is anErlang distribution):[8] F(x;α,θ)=1−∑i=0α−11i!(xθ)ie−x/θ=e−x/θ∑i=α∞1i!(xθ)i.{\displaystyle F(x;\alpha ,\theta )=1-\sum _{i=0}^{\alpha -1}{\frac {1}{i!}}\left({\frac {x}{\theta }}\right)^{i}e^{-x/\theta }=e^{-x/\theta }\sum _{i=\alpha }^{\infty }{\frac {1}{i!}}\left({\frac {x}{\theta }}\right)^{i}.} Both parametrizations are common because either can be more convenient depending on the situation. The mean of gamma distribution is given by the product of its shape and scale parameters:μ=αθ=α/λ{\displaystyle \mu =\alpha \theta =\alpha /\lambda }The variance is:σ2=αθ2=α/λ2{\displaystyle \sigma ^{2}=\alpha \theta ^{2}=\alpha /\lambda ^{2}}The square root of the inverse shape parameter gives thecoefficient of variation:σ/μ=α−0.5=1/α{\displaystyle \sigma /\mu =\alpha ^{-0.5}=1/{\sqrt {\alpha }}} Theskewnessof the gamma distribution only depends on its shape parameter,α, and it is equal to2/α.{\displaystyle 2/{\sqrt {\alpha }}.} Then-thraw momentis given by:E[Xn]=θnΓ(α+n)Γ(α)=θn∏i=1n(α+i−1)forn=1,2,….{\displaystyle \mathrm {E} [X^{n}]=\theta ^{n}{\frac {\Gamma (\alpha +n)}{\Gamma (\alpha )}}=\theta ^{n}\prod _{i=1}^{n}(\alpha +i-1)\;{\text{ for }}n=1,2,\ldots .} Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. 
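The moment formulas above can be verified against SciPy's gamma implementation; parameter values are illustrative.

```python
# Sketch: shape-scale moment formulas vs scipy.stats.gamma:
# mean = alpha*theta, variance = alpha*theta^2, skewness = 2/sqrt(alpha).
import math
from scipy.stats import gamma

alpha, theta = 4.0, 1.5
dist = gamma(alpha, scale=theta)
mean, var, skew = dist.stats(moments="mvs")
assert abs(float(mean) - alpha * theta) < 1e-12          # 6.0
assert abs(float(var) - alpha * theta**2) < 1e-12        # 9.0
assert abs(float(skew) - 2 / math.sqrt(alpha)) < 1e-12   # 1.0
# n-th raw moment: theta^n * Gamma(alpha + n)/Gamma(alpha), here n = 2:
assert abs(dist.moment(2) - theta**2 * alpha * (alpha + 1)) < 1e-9
```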
The median for this distribution is the valueν{\displaystyle \nu }such that1Γ(α)θα∫0νxα−1e−x/θdx=12.{\displaystyle {\frac {1}{\Gamma (\alpha )\theta ^{\alpha }}}\int _{0}^{\nu }x^{\alpha -1}e^{-x/\theta }dx={\frac {1}{2}}.} A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (forθ=1{\displaystyle \theta =1})α−13<ν(α)<α,{\displaystyle \alpha -{\frac {1}{3}}<\nu (\alpha )<\alpha ,}whereμ(α)=α{\displaystyle \mu (\alpha )=\alpha }is the mean andν(α){\displaystyle \nu (\alpha )}is the median of theGamma(α,1){\displaystyle {\text{Gamma}}(\alpha ,1)}distribution.[9]For other values of the scale parameter, the mean scales toμ=αθ{\displaystyle \mu =\alpha \theta }, and the median bounds and approximations would be similarly scaled byθ. K. P. Choi found the first five terms in aLaurent seriesasymptotic approximation of the median by comparing the median toRamanujan'sθ{\displaystyle \theta }function.[10]Berg and Pedersen found more terms:[11]ν(α)=α−13+8405α+18425515α2+22483444525α3−1900640815345358875α4−O(1α5)+⋯{\displaystyle \nu (\alpha )=\alpha -{\frac {1}{3}}+{\frac {8}{405\alpha }}+{\frac {184}{25515\alpha ^{2}}}+{\frac {2248}{3444525\alpha ^{3}}}-{\frac {19006408}{15345358875\alpha ^{4}}}-O\left({\frac {1}{\alpha ^{5}}}\right)+\cdots } Partial sums of these series are good approximations for high enoughα; they are not plotted in the figure, which is focused on the low-αregion that is less well approximated. 
Berg and Pedersen also proved many properties of the median, showing that it is a convex function ofα,[12]and that the asymptotic behavior nearα=0{\displaystyle \alpha =0}isν(α)≈e−γ2−1/α{\displaystyle \nu (\alpha )\approx e^{-\gamma }2^{-1/\alpha }}(whereγis theEuler–Mascheroni constant), and that for allα>0{\displaystyle \alpha >0}the median is bounded byα2−1/α<ν(α)<αe−1/3α{\displaystyle \alpha 2^{-1/\alpha }<\nu (\alpha )<\alpha e^{-1/3\alpha }}.[11] A closer linear upper bound, forα≥1{\displaystyle \alpha \geq 1}only, was provided in 2021 by Gaunt and Merkle,[13]relying on the Berg and Pedersen result that the slope ofν(α){\displaystyle \nu (\alpha )}is everywhere less than 1:ν(α)≤α−1+log⁡2{\displaystyle \nu (\alpha )\leq \alpha -1+\log 2~~}forα≥1{\displaystyle \alpha \geq 1}(with equality atα=1{\displaystyle \alpha =1}) which can be extended to a bound for allα>0{\displaystyle \alpha >0}by taking the max with the chord shown in the figure, since the median was proved convex.[12] An approximation to the median that is asymptotically accurate at highαand reasonable down toα=0.5{\displaystyle \alpha =0.5}or a bit lower follows from theWilson–Hilferty transformation:ν(α)=α(1−19α)3{\displaystyle \nu (\alpha )=\alpha \left(1-{\frac {1}{9\alpha }}\right)^{3}}which goes negative forα<1/9{\displaystyle \alpha <1/9}. In 2021, Lyon proposed several approximations of the formν(α)≈2−1/α(A+Bα){\displaystyle \nu (\alpha )\approx 2^{-1/\alpha }(A+B\alpha )}.
He conjectured values ofAandBfor which this approximation is an asymptotically tight upper or lower bound for allα>0{\displaystyle \alpha >0}.[14]In particular, he proposed these closed-form bounds, which he proved in 2023:[15] νL∞(α)=2−1/α(log⁡2−13+α){\displaystyle \nu _{L\infty }(\alpha )=2^{-1/\alpha }(\log 2-{\frac {1}{3}}+\alpha )\quad }is a lower bound, asymptotically tight asα→∞{\displaystyle \alpha \to \infty }νU(α)=2−1/α(e−γ+α){\displaystyle \nu _{U}(\alpha )=2^{-1/\alpha }(e^{-\gamma }+\alpha )\quad }is an upper bound, asymptotically tight asα→0{\displaystyle \alpha \to 0} Lyon also showed (informally in 2021, rigorously in 2023) two other lower bounds that are notclosed-form expressions, including this one involving thegamma function, based on solving the integral expression substituting 1 fore−x{\displaystyle e^{-x}}:ν(α)>(2Γ(α+1))−1/α{\displaystyle \nu (\alpha )>\left({\frac {2}{\Gamma (\alpha +1)}}\right)^{-1/\alpha }\quad }(approaching equality asα→0{\displaystyle \alpha \to 0}) and the tangent line atα=1{\displaystyle \alpha =1}where the derivative was found to beν′(1)≈0.9680448{\displaystyle \nu ^{\prime }(1)\approx 0.9680448}:ν(α)≥ν(1)+(α−1)ν′(1){\displaystyle \nu (\alpha )\geq \nu (1)+(\alpha -1)\nu ^{\prime }(1)\quad }(with equality atα=1{\displaystyle \alpha =1})ν(α)≥log⁡2+(α−1)(γ−2Ei⁡(−log⁡2)−log⁡log⁡2){\displaystyle \nu (\alpha )\geq \log 2+(\alpha -1)(\gamma -2\operatorname {Ei} (-\log 2)-\log \log 2)}where Ei is theexponential integral.[14][15] Additionally, he showed that interpolations between bounds could provide excellent approximations or tighter bounds to the median, including an approximation that is exact atα=1{\displaystyle \alpha =1}(whereν(1)=log⁡2{\displaystyle \nu (1)=\log 2}) and has a maximum relative error less than 0.6%.
Interpolated approximations and bounds are all of the formν(α)≈g~(α)νL∞(α)+(1−g~(α))νU(α){\displaystyle \nu (\alpha )\approx {\tilde {g}}(\alpha )\nu _{L\infty }(\alpha )+(1-{\tilde {g}}(\alpha ))\nu _{U}(\alpha )}whereg~{\displaystyle {\tilde {g}}}is an interpolating function running monotonically from 0 at lowαto 1 at highα, approximating an ideal, or exact, interpolatorg(α){\displaystyle g(\alpha )}:g(α)=νU(α)−ν(α)νU(α)−νL∞(α){\displaystyle g(\alpha )={\frac {\nu _{U}(\alpha )-\nu (\alpha )}{\nu _{U}(\alpha )-\nu _{L\infty }(\alpha )}}}For the simplest interpolating function considered, a first-order rational functiong~1(α)=αb0+α{\displaystyle {\tilde {g}}_{1}(\alpha )={\frac {\alpha }{b_{0}+\alpha }}}the tightest lower bound hasb0=8405+e−γlog⁡2−log2⁡22e−γ−log⁡2+13−log⁡2≈0.143472{\displaystyle b_{0}={\frac {{\frac {8}{405}}+e^{-\gamma }\log 2-{\frac {\log ^{2}2}{2}}}{e^{-\gamma }-\log 2+{\frac {1}{3}}}}-\log 2\approx 0.143472}and the tightest upper bound hasb0=e−γ−log⁡2+131−e−γπ212≈0.374654{\displaystyle b_{0}={\frac {e^{-\gamma }-\log 2+{\frac {1}{3}}}{1-{\frac {e^{-\gamma }\pi ^{2}}{12}}}}\approx 0.374654}The interpolated bounds are plotted (mostly inside the yellow region) in thelog–log plotshown. Even tighter bounds are available using different interpolating functions, but not usually with closed-form parameters like these.[14] IfXihas aGamma(αi,θ)distribution fori= 1, 2, ...,N(i.e., all distributions have the same scale parameterθ), then ∑i=1NXi∼Gamma(∑i=1Nαi,θ){\displaystyle \sum _{i=1}^{N}X_{i}\sim \mathrm {Gamma} \left(\sum _{i=1}^{N}\alpha _{i},\theta \right)} provided allXiareindependent. For the cases where theXiareindependentbut have different scale parameters, see Mathai[16]or Moschopoulos.[17] The gamma distribution exhibitsinfinite divisibility.
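The Wilson–Hilferty approximation from the median discussion above can be checked against the exact median, obtained from the inverse regularized incomplete gamma function (SciPy assumed, θ = 1):

```python
# Sketch: Wilson-Hilferty median approximation vs the exact gamma median,
# together with the Chen-Rubin bounds alpha - 1/3 < median < alpha.
from scipy.special import gammaincinv

def median_wh(alpha):
    return alpha * (1 - 1 / (9 * alpha)) ** 3

for alpha in (1.0, 2.0, 5.0, 20.0):
    exact = float(gammaincinv(alpha, 0.5))   # nu solving gammainc(alpha, nu) = 1/2
    assert abs(median_wh(alpha) - exact) / exact < 0.02
    assert alpha - 1 / 3 < exact < alpha     # Chen-Rubin bounds
```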
IfX∼Gamma(α,θ),{\displaystyle X\sim \mathrm {Gamma} (\alpha ,\theta ),} then, for anyc> 0, cX∼Gamma(α,cθ),{\displaystyle cX\sim \mathrm {Gamma} (\alpha ,c\,\theta ),}by moment generating functions, or equivalently, if X∼Gamma(α,λ){\displaystyle X\sim \mathrm {Gamma} \left(\alpha ,\lambda \right)}(shape-rate parameterization) cX∼Gamma(α,λc),{\displaystyle cX\sim \mathrm {Gamma} \left(\alpha ,{\frac {\lambda }{c}}\right),} Indeed, we know that ifXis anexponential r.v.with rateλ, thencXis an exponential r.v. with rateλ/c; the same thing is valid with Gamma variates (and this can be checked using themoment-generating function, see, e.g.,these notes, 10.4-(ii)): multiplication by a positive constantcdivides the rate (or, equivalently, multiplies the scale). The gamma distribution is a two-parameterexponential familywithnatural parametersα− 1and−1/θ(equivalently,α− 1and−λ), andnatural statisticsXandlnX. If the shape parameterαis held fixed, the resulting one-parameter family of distributions is anatural exponential family. One can show that E⁡[ln⁡X]=ψ(α)−ln⁡λ{\displaystyle \operatorname {E} [\ln X]=\psi (\alpha )-\ln \lambda } or equivalently, E⁡[ln⁡X]=ψ(α)+ln⁡θ{\displaystyle \operatorname {E} [\ln X]=\psi (\alpha )+\ln \theta } whereψis thedigamma function. Likewise, var⁡[ln⁡X]=ψ(1)(α){\displaystyle \operatorname {var} [\ln X]=\psi ^{(1)}(\alpha )} whereψ(1){\displaystyle \psi ^{(1)}}is thetrigamma function. This can be derived using theexponential familyformula for themoment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution islnx. 
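The identity E[ln X] = ψ(α) + ln θ, and its interaction with the scaling property cX ~ Gamma(α, cθ), can be verified numerically (SciPy assumed; parameters illustrative):

```python
# Sketch: integrate ln(x) against the gamma density and compare with
# psi(alpha) + ln(theta).
import math
from scipy.integrate import quad
from scipy.special import digamma
from scipy.stats import gamma

alpha, theta = 2.5, 3.0
e_lnx, _ = quad(lambda x: math.log(x) * gamma.pdf(x, alpha, scale=theta), 0, math.inf)
assert abs(e_lnx - (digamma(alpha) + math.log(theta))) < 1e-6
# Scaling: cX ~ Gamma(alpha, c*theta), so E[ln(cX)] = psi(alpha) + ln(c*theta).
c = 2.0
assert abs((e_lnx + math.log(c)) - (digamma(alpha) + math.log(c * theta))) < 1e-6
```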
Theinformation entropyis H⁡(X)=E⁡[−ln⁡p(X)]=E⁡[−αln⁡λ+ln⁡Γ(α)−(α−1)ln⁡X+λX]=α−ln⁡λ+ln⁡Γ(α)+(1−α)ψ(α).{\displaystyle {\begin{aligned}\operatorname {H} (X)&=\operatorname {E} [-\ln p(X)]\\[4pt]&=\operatorname {E} [-\alpha \ln \lambda +\ln \Gamma (\alpha )-(\alpha -1)\ln X+\lambda X]\\[4pt]&=\alpha -\ln \lambda +\ln \Gamma (\alpha )+(1-\alpha )\psi (\alpha ).\end{aligned}}} In theα,θparameterization, theinformation entropyis given by H⁡(X)=α+ln⁡θ+ln⁡Γ(α)+(1−α)ψ(α).{\displaystyle \operatorname {H} (X)=\alpha +\ln \theta +\ln \Gamma (\alpha )+(1-\alpha )\psi (\alpha ).} TheKullback–Leibler divergence(KL-divergence), ofGamma(αp,λp)("true" distribution) fromGamma(αq,λq)("approximating" distribution) is given by[18] DKL(αp,λp;αq,λq)=(αp−αq)ψ(αp)−log⁡Γ(αp)+log⁡Γ(αq)+αq(log⁡λp−log⁡λq)+αpλq−λpλp.{\displaystyle {\begin{aligned}D_{\mathrm {KL} }(\alpha _{p},\lambda _{p};\alpha _{q},\lambda _{q})={}&(\alpha _{p}-\alpha _{q})\psi (\alpha _{p})-\log \Gamma (\alpha _{p})+\log \Gamma (\alpha _{q})\\&{}+\alpha _{q}(\log \lambda _{p}-\log \lambda _{q})+\alpha _{p}{\frac {\lambda _{q}-\lambda _{p}}{\lambda _{p}}}.\end{aligned}}} Written using theα,θparameterization, the KL-divergence ofGamma(αp,θp)fromGamma(αq,θq)is given by DKL(αp,θp;αq,θq)=(αp−αq)ψ(αp)−log⁡Γ(αp)+log⁡Γ(αq)+αq(log⁡θq−log⁡θp)+αpθp−θqθq.{\displaystyle {\begin{aligned}D_{\mathrm {KL} }(\alpha _{p},\theta _{p};\alpha _{q},\theta _{q})={}&(\alpha _{p}-\alpha _{q})\psi (\alpha _{p})-\log \Gamma (\alpha _{p})+\log \Gamma (\alpha _{q})\\&{}+\alpha _{q}(\log \theta _{q}-\log \theta _{p})+\alpha _{p}{\frac {\theta _{p}-\theta _{q}}{\theta _{q}}}.\end{aligned}}} TheLaplace transformof the gamma PDF, which is themoment-generating functionof the gamma distribution, is F(s)=E⁡(esX)=1(1−θs)α=(λλ−s)α{\displaystyle F(s)=\operatorname {E} \left(e^{sX}\right)={\frac {1}{(1-\theta s)^{\alpha }}}=\left({\frac {\lambda }{\lambda -s}}\right)^{\alpha }} (whereX{\textstyle X}is a random variable with that distribution). 
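The shape-rate KL-divergence formula above can be checked against direct numerical integration; this is a sketch with illustrative parameters, using log-densities to avoid underflow in the tails.

```python
# Sketch: closed-form gamma-to-gamma KL divergence (shape-rate) vs quadrature.
import math
from scipy.integrate import quad
from scipy.special import digamma, gammaln
from scipy.stats import gamma

def kl_gamma(ap, lp, aq, lq):
    return ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
            + aq * (math.log(lp) - math.log(lq)) + ap * (lq - lp) / lp)

ap, lp, aq, lq = 3.0, 2.0, 4.0, 1.0
logp = lambda x: gamma.logpdf(x, ap, scale=1 / lp)
logq = lambda x: gamma.logpdf(x, aq, scale=1 / lq)
# Integrand p(x) * log(p(x)/q(x)), written with log-densities:
kl_num, _ = quad(lambda x: math.exp(logp(x)) * (logp(x) - logq(x)), 0, math.inf)
assert abs(kl_gamma(ap, lp, aq, lq) - kl_num) < 1e-5
assert abs(kl_gamma(3.0, 2.0, 3.0, 2.0)) < 1e-12   # zero for identical distributions
```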
If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse scale forms a conjugate prior. Thecompound distribution, which results from integrating out the inverse scale, has a closed-form solution known as thecompound gamma distribution.[22] If, instead, the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then it results inK-distribution. The gamma distributionf(x;α)(α>1){\displaystyle f(x;\alpha )\,(\alpha >1)}can be expressed as the product distribution of aWeibull distributionand a variant form of thestable count distribution. Its shape parameterα{\displaystyle \alpha }can be regarded as the inverse of Lévy's stability parameter in the stable count distribution:f(x;α)=∫0∞1uWk(xu)[kuα−1N1α(uα)]du,{\displaystyle f(x;\alpha )=\int _{0}^{\infty }{\frac {1}{u}}\,W_{k}\left({\frac {x}{u}}\right)\left[ku^{\alpha -1}\,{\mathfrak {N}}_{\frac {1}{\alpha }}\left(u^{\alpha }\right)\right]\,du,}whereNα(ν){\displaystyle {\mathfrak {N}}_{\alpha }(\nu )}is a standard stable count distribution of shapeα{\displaystyle \alpha }, andWα(x){\displaystyle W_{\alpha }(x)}is a standard Weibull distribution of shapeα{\displaystyle \alpha }. 
The likelihood function forNiidobservations(x1, ...,xN)is L(α,θ)=∏i=1Nf(xi;α,θ){\displaystyle L(\alpha ,\theta )=\prod _{i=1}^{N}f(x_{i};\alpha ,\theta )} from which we calculate the log-likelihood function ℓ(α,θ)=(α−1)∑i=1Nln⁡xi−∑i=1Nxiθ−Nαln⁡θ−Nln⁡Γ(α){\displaystyle \ell (\alpha ,\theta )=(\alpha -1)\sum _{i=1}^{N}\ln x_{i}-\sum _{i=1}^{N}{\frac {x_{i}}{\theta }}-N\alpha \ln \theta -N\ln \Gamma (\alpha )} Finding the maximum with respect toθby taking the derivative and setting it equal to zero yields themaximum likelihoodestimator of theθparameter, which equals thesample meanx¯{\displaystyle {\bar {x}}}divided by the shape parameterα: θ^=1αN∑i=1Nxi=x¯α{\displaystyle {\hat {\theta }}={\frac {1}{\alpha N}}\sum _{i=1}^{N}x_{i}={\frac {\bar {x}}{\alpha }}} Substituting this into the log-likelihood function gives ℓ(α)=(α−1)∑i=1Nln⁡xi−Nα−Nαln⁡(∑xiαN)−Nln⁡Γ(α){\displaystyle \ell (\alpha )=(\alpha -1)\sum _{i=1}^{N}\ln x_{i}-N\alpha -N\alpha \ln \left({\frac {\sum x_{i}}{\alpha N}}\right)-N\ln \Gamma (\alpha )} We need at least two samples:N≥2{\displaystyle N\geq 2}, because forN=1{\displaystyle N=1}, the functionℓ(α){\displaystyle \ell (\alpha )}increases without bounds asα→∞{\displaystyle \alpha \to \infty }. Forα>0{\displaystyle \alpha >0}, it can be verified thatℓ(α){\displaystyle \ell (\alpha )}is strictlyconcave, by usinginequality properties of the polygamma function. Finding the maximum with respect toαby taking the derivative and setting it equal to zero yields ln⁡α−ψ(α)=ln⁡(1N∑i=1Nxi)−1N∑i=1Nln⁡xi=ln⁡x¯−ln⁡x¯{\displaystyle \ln \alpha -\psi (\alpha )=\ln \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)-{\frac {1}{N}}\sum _{i=1}^{N}\ln x_{i}=\ln {\bar {x}}-{\overline {\ln x}}} whereψis thedigamma functionandln⁡x¯{\displaystyle {\overline {\ln x}}}is the sample mean oflnx. There is no closed-form solution forα. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example,Newton's method. 
An initial value of α can be found either using the method of moments, or using the approximation ln⁡α−ψ(α)≈12α(1+16α+1){\displaystyle \ln \alpha -\psi (\alpha )\approx {\frac {1}{2\alpha }}\left(1+{\frac {1}{6\alpha +1}}\right)} If we let s=ln⁡(1N∑i=1Nxi)−1N∑i=1Nln⁡xi=ln⁡x¯−ln⁡x¯{\displaystyle s=\ln \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)-{\frac {1}{N}}\sum _{i=1}^{N}\ln x_{i}=\ln {\bar {x}}-{\overline {\ln x}}} then α is approximately α≈3−s+(s−3)2+24s12s{\displaystyle \alpha \approx {\frac {3-s+{\sqrt {(s-3)^{2}+24s}}}{12s}}} which is within 1.5% of the correct value.[23] An explicit form for the Newton–Raphson update of this initial guess is:[24] α←α−ln⁡α−ψ(α)−s1α−ψ′(α).{\displaystyle \alpha \leftarrow \alpha -{\frac {\ln \alpha -\psi (\alpha )-s}{{\frac {1}{\alpha }}-\psi '(\alpha )}}.} At the maximum-likelihood estimate (α^,θ^){\displaystyle ({\hat {\alpha }},{\hat {\theta }})}, the expected values for x and ln⁡x{\displaystyle \ln x} agree with the empirical averages: α^θ^=x¯andψ(α^)+ln⁡θ^=ln⁡x¯.{\displaystyle {\begin{aligned}{\hat {\alpha }}{\hat {\theta }}&={\bar {x}}&&{\text{and}}&\psi ({\hat {\alpha }})+\ln {\hat {\theta }}&={\overline {\ln x}}.\end{aligned}}} For data (x1,…,xN){\displaystyle (x_{1},\ldots ,x_{N})} that are represented in a floating point format that underflows to 0 for values smaller than ε{\displaystyle \varepsilon }, the logarithms that are needed for the maximum-likelihood estimate will cause failure if there are any underflows. If we assume the data was generated by a gamma distribution with cdf F(x;α,θ){\displaystyle F(x;\alpha ,\theta )}, then the probability that there is at least one underflow is: P(underflow)=1−(1−F(ε;α,θ))N{\displaystyle P({\text{underflow}})=1-(1-F(\varepsilon ;\alpha ,\theta ))^{N}} This probability will approach 1 for small α and large N.
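Using SciPy's digamma and trigamma functions, the closed-form initial guess and the Newton–Raphson refinement above can be sketched as follows (the function names here are illustrative, not from any library):

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_gamma_shape(x, tol=1e-10, max_iter=100):
    """Estimate the gamma shape alpha by maximum likelihood.

    Uses the closed-form approximation as the initial guess, then
    Newton-Raphson on  ln(alpha) - psi(alpha) = s.
    """
    x = np.asarray(x, dtype=float)
    s = np.log(x.mean()) - np.log(x).mean()  # s = ln(x-bar) - mean of ln x
    alpha = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    for _ in range(max_iter):
        # f(a) = ln a - psi(a) - s,  f'(a) = 1/a - psi'(a)
        step = (np.log(alpha) - digamma(alpha) - s) / (1.0 / alpha - polygamma(1, alpha))
        alpha -= step
        if abs(step) < tol:
            break
    return alpha

def fit_gamma(x):
    """Return the MLE (alpha, theta), using theta-hat = x-bar / alpha-hat."""
    alpha = fit_gamma_shape(x)
    return alpha, np.mean(x) / alpha
```

As the text notes, the initial guess is already within 1.5% of the root, so very few Newton steps are needed in practice.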
For example, at α=10−2{\displaystyle \alpha =10^{-2}}, N=104{\displaystyle N=10^{4}} and ε=2.25×10−308{\displaystyle \varepsilon =2.25\times 10^{-308}}, P(underflow)≈0.9998{\displaystyle P({\text{underflow}})\approx 0.9998}. A workaround is to instead have the data in logarithmic format. In order to test an implementation of a maximum-likelihood estimator that takes logarithmic data as input, it is useful to be able to generate non-underflowing logarithms of random gamma variates when α<1{\displaystyle \alpha <1}. Following the implementation in scipy.stats.loggamma, this can be done as follows:[25] sample Y∼Gamma(α+1,θ){\displaystyle Y\sim {\text{Gamma}}(\alpha +1,\theta )} and U∼Uniform{\displaystyle U\sim {\text{Uniform}}} independently. Then the required logarithmic sample is Z=ln⁡(Y)+ln⁡(U)/α{\displaystyle Z=\ln(Y)+\ln(U)/\alpha }, so that exp⁡(Z)∼Gamma(α,θ){\displaystyle \exp(Z)\sim {\text{Gamma}}(\alpha ,\theta )}. There exist consistent closed-form estimators of α and θ that are derived from the likelihood of the generalized gamma distribution.[26] The estimate for the shape α is α^=N∑i=1NxiN∑i=1Nxiln⁡xi−∑i=1Nxi∑i=1Nln⁡xi{\displaystyle {\hat {\alpha }}={\frac {N\sum _{i=1}^{N}x_{i}}{N\sum _{i=1}^{N}x_{i}\ln x_{i}-\sum _{i=1}^{N}x_{i}\sum _{i=1}^{N}\ln x_{i}}}} and the estimate for the scale θ is θ^=1N2(N∑i=1Nxiln⁡xi−∑i=1Nxi∑i=1Nln⁡xi){\displaystyle {\hat {\theta }}={\frac {1}{N^{2}}}\left(N\sum _{i=1}^{N}x_{i}\ln x_{i}-\sum _{i=1}^{N}x_{i}\sum _{i=1}^{N}\ln x_{i}\right)} Using the sample mean of x, the sample mean of ln x, and the sample mean of the product x·ln x simplifies the expressions to: α^=x¯/θ^{\displaystyle {\hat {\alpha }}={\bar {x}}/{\hat {\theta }}} θ^=xln⁡x¯−x¯ln⁡x¯.{\displaystyle {\hat {\theta }}={\overline {x\ln x}}-{\bar {x}}{\overline {\ln x}}.} If the rate parameterization is used, the corresponding estimate is λ^=1/θ^{\displaystyle {\hat {\lambda }}=1/{\hat {\theta }}}.
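The closed-form mixed log-moment estimators reduce to three sample averages; a minimal sketch (illustrative names):

```python
import numpy as np

def gamma_closed_form(x):
    """Consistent closed-form (mixed log-moment) estimators of (alpha, theta).

    theta-hat = mean(x ln x) - mean(x) * mean(ln x),  alpha-hat = x-bar / theta-hat.
    """
    x = np.asarray(x, dtype=float)
    theta = np.mean(x * np.log(x)) - np.mean(x) * np.mean(np.log(x))
    alpha = np.mean(x) / theta
    return alpha, theta
```

Unlike the maximum-likelihood fit, no iteration or special functions are required.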
These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They have, however, efficiency similar to that of the maximum likelihood estimators. Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale θ is θ~=NN−1θ^{\displaystyle {\tilde {\theta }}={\frac {N}{N-1}}{\hat {\theta }}} A bias correction for the shape parameter α is given as[27] α~=α^−1N(3α^−23(α^1+α^)−45α^(1+α^)2){\displaystyle {\tilde {\alpha }}={\hat {\alpha }}-{\frac {1}{N}}\left(3{\hat {\alpha }}-{\frac {2}{3}}\left({\frac {\hat {\alpha }}{1+{\hat {\alpha }}}}\right)-{\frac {4}{5}}{\frac {\hat {\alpha }}{(1+{\hat {\alpha }})^{2}}}\right)} With known α and unknown θ, the posterior density function for θ (using the standard scale-invariant prior for θ) is P(θ∣α,x1,…,xN)∝1θ∏i=1Nf(xi;α,θ){\displaystyle P(\theta \mid \alpha ,x_{1},\dots ,x_{N})\propto {\frac {1}{\theta }}\prod _{i=1}^{N}f(x_{i};\alpha ,\theta )} Denoting y≡∑i=1Nxi,P(θ∣α,x1,…,xN)=C(xi)θ−Nα−1e−y/θ{\displaystyle y\equiv \sum _{i=1}^{N}x_{i},\qquad P(\theta \mid \alpha ,x_{1},\dots ,x_{N})=C(x_{i})\theta ^{-N\alpha -1}e^{-y/\theta }} where the C (integration) constant does not depend on θ. The form of the posterior density reveals that θ is inverse-gamma-distributed with shape parameter Nα and scale parameter y; equivalently, 1/θ is gamma-distributed with shape parameter Nα and rate parameter y.
Integration with respect toθcan be carried out using a change of variables to find the integration constant ∫0∞θ−Nα−1+me−y/θdθ=∫0∞xNα−1−me−xydx=y−(Nα−m)Γ(Nα−m){\displaystyle \int _{0}^{\infty }\theta ^{-N\alpha -1+m}e^{-y/\theta }\,d\theta =\int _{0}^{\infty }x^{N\alpha -1-m}e^{-xy}\,dx=y^{-(N\alpha -m)}\Gamma (N\alpha -m)\!} The moments can be computed by taking the ratio (mbym= 0) E⁡[xm]=Γ(Nα−m)Γ(Nα)ym{\displaystyle \operatorname {E} [x^{m}]={\frac {\Gamma (N\alpha -m)}{\Gamma (N\alpha )}}y^{m}} which shows that the mean ± standard deviation estimate of the posterior distribution forθis yNα−1±y2(Nα−1)2(Nα−2).{\displaystyle {\frac {y}{N\alpha -1}}\pm {\sqrt {\frac {y^{2}}{(N\alpha -1)^{2}(N\alpha -2)}}}.} InBayesian inference, thegamma distributionis theconjugate priorto many likelihood distributions: thePoisson,exponential,normal(with known mean),Pareto, gamma with known shapeσ,inverse gammawith known shape parameter, andGompertzwith known scale parameter. The gamma distribution'sconjugate prioris:[28] p(α,θ∣p,q,r,s)=1Zpα−1e−θ−1qΓ(α)rθαs,{\displaystyle p(\alpha ,\theta \mid p,q,r,s)={\frac {1}{Z}}{\frac {p^{\alpha -1}e^{-\theta ^{-1}q}}{\Gamma (\alpha )^{r}\theta ^{\alpha s}}},} whereZis the normalizing constant with no closed-form solution. The posterior distribution can be found by updating the parameters as follows: p′=p∏ixi,q′=q+∑ixi,r′=r+n,s′=s+n,{\displaystyle {\begin{aligned}p'&=p\prod \nolimits _{i}x_{i},\\q'&=q+\sum \nolimits _{i}x_{i},\\r'&=r+n,\\s'&=s+n,\end{aligned}}} wherenis the number of observations, andxiis thei-th observation from the gamma distribution. Consider a sequence of events, with the waiting time for each event being an exponential distribution with rateλ. Then the waiting time for then-th event to occur is the gamma distribution with integer shapeα=n{\displaystyle \alpha =n}. 
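The conjugate-prior hyperparameter update p′, q′, r′, s′ given above can be sketched directly; to keep the product of the observations from over- or underflowing, this sketch tracks ln p rather than p itself (names are illustrative):

```python
import numpy as np

def update_gamma_conjugate(ln_p, q, r, s, x):
    """Posterior update for the gamma conjugate-prior hyperparameters.

    Implements ln p' = ln p + sum(ln x_i),  q' = q + sum(x_i),
    r' = r + n,  s' = s + n,  where n = len(x).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    return ln_p + np.log(x).sum(), q + x.sum(), r + n, s + n
```

Because the normalizing constant Z has no closed form, downstream inference still requires numerical integration over the updated hyperparameters.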
This construction of the gamma distribution allows it to model a wide variety of phenomena where several sub-events, each taking time with exponential distribution, must happen in sequence for a major event to occur.[29]Examples include the waiting time ofcell-division events,[30]number of compensatory mutations for a given mutation,[31]waiting time until a repair is necessary for a hydraulic system,[32]and so on. In biophysics, the dwell time between steps of a molecular motor likeATP synthaseis nearly exponential at constant ATP concentration, revealing that each step of the motor takes a single ATP hydrolysis. If there were n ATP hydrolysis events, then it would be a gamma distribution with degree n.[33] The gamma distribution has been used to model the size ofinsurance claims[34]and rainfalls.[35]This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by agamma process– much like theexponential distributiongenerates aPoisson process. The gamma distribution is also used to model errors in multi-levelPoisson regressionmodels because amixtureofPoisson distributionswith gamma-distributed rates has a known closed form distribution, callednegative binomial. In wireless communication, the gamma distribution is used to model themulti-path fadingof signal power;[citation needed]see alsoRayleigh distributionandRician distribution. 
Inoncology, the age distribution ofcancerincidenceoften follows the gamma distribution, wherein the shape and scale parameters predict, respectively, the number ofdriver eventsand the time interval between them.[36][37] Inneuroscience, the gamma distribution is often used to describe the distribution ofinter-spike intervals.[38][39] Inbacterialgene expressionwhere protein production can occur in bursts, the copy number of a given protein often follows the gamma distribution, where the shape and scale parameters are, respectively, the mean number of bursts per cell cycle and the mean number ofprotein moleculesproduced per burst.[40] Ingenomics, the gamma distribution was applied inpeak callingstep (i.e., in recognition of signal) inChIP-chip[41]andChIP-seq[42]data analysis. In Bayesian statistics, the gamma distribution is widely used as aconjugate prior. It is the conjugate prior for theprecision(i.e. inverse of the variance) of anormal distribution. It is also the conjugate prior for theexponential distribution. Inphylogenetics, the gamma distribution is the most commonly used approach to model among-sites rate variation[43]whenmaximum likelihood,Bayesian, ordistance matrix methodsare used to estimate phylogenetic trees. Phylogenetic analyzes that use the gamma distribution to model rate variation estimate a single parameter from the data because they limit consideration to distributions whereα=λ. This parameterization means that the mean of this distribution is 1 and the variance is1/α. Maximum likelihood and Bayesian methods typically use a discrete approximation to the continuous gamma distribution.[44][45] Given the scaling property above, it is enough to generate gamma variables withθ= 1, as we can later convert to any value ofλwith a simple division. Suppose we wish to generate random variables fromGamma(n+δ, 1), where n is a non-negative integer and0 <δ< 1. 
Using the fact that aGamma(1, 1)distribution is the same as anExp(1)distribution, and noting the method ofgenerating exponential variables, we conclude that ifUisuniformly distributedon (0, 1], then−lnUis distributedGamma(1, 1)(i.e.inverse transform sampling). Now, using the "α-addition" property of gamma distribution, we expand this result: −∑k=1nln⁡Uk∼Γ(n,1){\displaystyle -\sum _{k=1}^{n}\ln U_{k}\sim \Gamma (n,1)} whereUkare all uniformly distributed on (0, 1] andindependent. All that is left now is to generate a variable distributed asGamma(δ, 1)for0 <δ< 1and apply the "α-addition" property once more. This is the most difficult part. Random generation of gamma variates is discussed in detail by Devroye,[46]: 401–428noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid.[46]: 406For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter[47]modified acceptance-rejection method Algorithm GD (shapeα≥ 1), or transformation method[48]when0 <α< 1. Also see Cheng and Feast Algorithm GKM 3[49]or Marsaglia's squeeze method.[50] The following is a version of the Ahrens-Dieteracceptance–rejection method:[47] A summary of this isθ(ξ−∑i=1⌊α⌋ln⁡Ui)∼Γ(α,θ){\displaystyle \theta \left(\xi -\sum _{i=1}^{\lfloor \alpha \rfloor }\ln U_{i}\right)\sim \Gamma (\alpha ,\theta )}where⌊α⌋{\displaystyle \scriptstyle \lfloor \alpha \rfloor }is the integer part ofα,ξis generated via the algorithm above withδ= {α}(the fractional part ofα) and theUkare all independent. While the above approach is technically correct, Devroye notes that it is linear in the value ofαand generally is not a good choice. 
Instead, he recommends using either rejection-based or table-based methods, depending on context.[46]: 401–428 For example, Marsaglia's simple transformation-rejection method relies on one normal variate X and one uniform variate U:[25] With 1 ≤ a = α{\displaystyle 1\leq a=\alpha }, it generates a gamma-distributed random number in time that is approximately constant with α. The acceptance rate does depend on α, with acceptance rates of 0.95, 0.98, and 0.99 for α = 1, 2, and 4. For α < 1, one can use γα=γ1+αU1/α{\displaystyle \gamma _{\alpha }=\gamma _{1+\alpha }U^{1/\alpha }} to boost α into the range where this method applies. In Matlab, numbers can be generated using the function gamrnd(), which uses the α, θ representation.
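A sketch of Marsaglia and Tsang's transformation–rejection method, including the α < 1 boost just described (an illustrative implementation, not the one used by any particular library):

```python
import numpy as np

def marsaglia_tsang_gamma(alpha, theta=1.0, rng=None):
    """One Gamma(alpha, theta) variate via Marsaglia-Tsang rejection.

    For alpha >= 1 the method applies directly; for alpha < 1 it uses
    the boost gamma_alpha = gamma_{1+alpha} * U**(1/alpha).
    """
    rng = np.random.default_rng() if rng is None else rng
    if alpha < 1.0:
        boost = rng.uniform() ** (1.0 / alpha)
        return marsaglia_tsang_gamma(alpha + 1.0, theta, rng) * boost
    d = alpha - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    while True:
        x = rng.standard_normal()
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue  # reject: cubed transform fell outside the support
        u = rng.uniform()
        if np.log(u) < 0.5 * x * x + d - d * v + d * np.log(v):
            return theta * d * v
```

Each accepted draw costs one normal and one uniform variate, which is why the running time is roughly constant in α.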
https://en.wikipedia.org/wiki/Gamma_distribution
In probability theory and statistics, the Hermite distribution, named after Charles Hermite, is a discrete probability distribution used to model count data with more than one parameter. This distribution is flexible in terms of its ability to allow a moderate over-dispersion in the data. The authors C. D. Kemp and A. W. Kemp[1] called it the "Hermite distribution" because its probability function and moment generating function can be expressed in terms of the coefficients of (modified) Hermite polynomials. The distribution first appeared in the paper Applications of Mathematics to Medical Problems,[2] by Anderson Gray McKendrick in 1926. In this work the author explains several mathematical methods that can be applied to medical research. In one of these methods he considered the bivariate Poisson distribution and showed that the distribution of the sum of two correlated Poisson variables follows a distribution that would later be known as the Hermite distribution. As a practical application, McKendrick considered the distribution of counts of bacteria in leucocytes. Using the method of moments he fitted the data with the Hermite distribution and found the model more satisfactory than fitting it with a Poisson distribution. The distribution was formally introduced and published by C. D. Kemp and Adrienne W. Kemp in 1965 in their work Some Properties of 'Hermite' Distribution. The work focuses on the properties of this distribution, for instance a necessary condition on the parameters, their maximum likelihood estimators (MLE), the analysis of the probability generating function (PGF), and how it can be expressed in terms of the coefficients of (modified) Hermite polynomials. One example used in this publication is the distribution of counts of bacteria in leucocytes that McKendrick used, but Kemp and Kemp estimate the model using the maximum likelihood method.
The Hermite distribution is a special case of the discrete compound Poisson distribution with only two parameters.[3][4] The same authors published in 1966 the paper An alternative Derivation of the Hermite Distribution.[5] In that work they established that the Hermite distribution can be obtained formally by combining a Poisson distribution with a normal distribution. In 1971, Y. C. Patel[6] did a comparative study of various estimation procedures for the Hermite distribution in his doctoral thesis. It included maximum likelihood, moment estimators, mean and zero frequency estimators, and the method of even points. In 1974, Gupta and Jain[7] investigated a generalized form of the Hermite distribution. Let X1 and X2 be two independent Poisson variables with parameters a1 and a2. The probability distribution of the random variable Y = X1 + 2X2 is the Hermite distribution with parameters a1 and a2, and its probability mass function is given by[8] where The probability generating function of the probability mass is,[8] When a random variable Y = X1 + 2X2 follows a Hermite distribution, where X1 and X2 are two independent Poisson variables with parameters a1 and a2, we write The moment generating function of a random variable X is defined as the expected value of e^{tX}, as a function of the real parameter t. For a Hermite distribution with parameters a1 and a2, the moment generating function exists and is equal to The cumulant generating function is the logarithm of the moment generating function and is equal to[4] If we consider the coefficient of (it)^r/r!
in the expansion of K(t), we obtain the r-th cumulant. Hence the mean and the succeeding three moments about it are The skewness is the third moment centered around the mean divided by the 3/2 power of the standard deviation, and for the Hermite distribution it is,[4] The kurtosis is the fourth moment centered around the mean, divided by the square of the variance, and for the Hermite distribution it is,[4] The excess kurtosis is just a correction to make the kurtosis of the normal distribution equal to zero, and it is the following, In a discrete distribution the characteristic function of any real-valued random variable is defined as the expected value of eitX{\displaystyle e^{itX}}, where i is the imaginary unit and t ∈ R. This function is related to the moment-generating function via ϕx(t)=MX(it){\displaystyle \phi _{x}(t)=M_{X}(it)}. Hence for this distribution the characteristic function is,[1] The cumulative distribution function is,[1] The mean and the variance of the Hermite distribution are μ=a1+2a2{\displaystyle \mu =a_{1}+2a_{2}} and σ2=a1+4a2{\displaystyle \sigma ^{2}=a_{1}+4a_{2}}, respectively. So we have these two equations, Solving these two equations we get the moment estimators a1^{\displaystyle {\hat {a_{1}}}} and a2^{\displaystyle {\hat {a_{2}}}} of a1 and a2.[6] Since a1 and a2 are both positive, the estimators a1^{\displaystyle {\hat {a_{1}}}} and a2^{\displaystyle {\hat {a_{2}}}} are admissible (≥ 0) only if x¯<σ2<2x¯{\displaystyle {\bar {x}}<\sigma ^{2}<2{\bar {x}}}. Given a sample X1, ..., Xm of independent random variables each having a Hermite distribution, we wish to estimate the values of the parameters a1 and a2. We know that the mean and the variance of the distribution are μ=a1+2a2{\displaystyle \mu =a_{1}+2a_{2}} and σ2=a1+4a2{\displaystyle \sigma ^{2}=a_{1}+4a_{2}}, respectively.
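Solving the two moment equations gives â1 = 2x̄ − s² and â2 = (s² − x̄)/2; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def hermite_moment_estimators(x):
    """Moment estimators of the Hermite parameters a1, a2.

    Solves  mean = a1 + 2*a2  and  variance = a1 + 4*a2  for the
    sample moments; the solution is admissible (both >= 0) only
    when mean < variance < 2*mean.
    """
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    return 2.0 * m - v, (v - m) / 2.0
```

The admissibility condition x̄ < s² < 2x̄ falls out immediately: â2 ≥ 0 requires s² ≥ x̄, and â1 ≥ 0 requires s² ≤ 2x̄.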
Using these two equations, we can parameterize the probability function by μ and d. Hence the log-likelihood function is,[9] where From the log-likelihood function, the likelihood equations are,[9] Straightforward calculations show that,[9] where θ~=d−12x¯(2−d)2{\displaystyle {\tilde {\theta }}={\frac {d-1}{2{\bar {x}}(2-d)^{2}}}} The likelihood equations do not always have a solution, as the following proposition shows: Proposition:[9] Let X1, ..., Xm come from a generalized Hermite distribution with fixed n. Then the MLEs of the parameters are μ^{\displaystyle {\hat {\mu }}} and d~{\displaystyle {\tilde {d}}} if and only if m(2)/x¯2>1{\displaystyle m^{(2)}/{\bar {x}}^{2}>1}, where m(2)=∑i=1nxi(xi−1)/n{\displaystyle m^{(2)}=\sum _{i=1}^{n}x_{i}(x_{i}-1)/n} indicates the empirical factorial moment of order 2. A usual choice for discrete distributions is the zero relative frequency of the data set, which is equated to the probability of zero under the assumed distribution. Observing that f0=exp⁡(−(a1+a2)){\displaystyle f_{0}=\exp(-(a_{1}+a_{2}))} and μ=a1+2a2{\displaystyle \mu =a_{1}+2a_{2}}, and following the example of Y. C. Patel (1976), the resulting system of equations yields the zero-frequency and mean estimators a1^{\displaystyle {\hat {a_{1}}}} of a1 and a2^{\displaystyle {\hat {a_{2}}}} of a2,[6] where f0=n0n{\displaystyle f_{0}={\frac {n_{0}}{n}}} is the zero relative frequency, n > 0. It can be seen that for distributions with a high probability at 0, the efficiency is high. When the Hermite distribution is used to model a data sample, it is important to check whether the Poisson distribution is enough to fit the data. Following the parameterized probability mass function used to calculate the maximum likelihood estimator, it is important to test the following hypothesis, The likelihood-ratio test statistic[9] for the Hermite distribution is, where L(){\displaystyle {\mathcal {L}}()} is the log-likelihood function.
Asd= 1 belongs to the boundary of the domain of parameters, under the null hypothesis,Wdoes not have an asymptoticχ12{\displaystyle \chi _{1}^{2}}distribution as expected. It can be established that the asymptotic distribution ofWis a 50:50 mixture of the constant 0 and theχ12{\displaystyle \chi _{1}^{2}}. The α upper-tail percentage points for this mixture are the same as the 2α upper-tail percentage points for aχ12{\displaystyle \chi _{1}^{2}}; for instance, for α = 0.01, 0.05, and 0.10 they are 5.41189, 2.70554 and 1.64237. The score statistic is,[9] wheremis the number of observations. The asymptotic distribution of the score test statistic under the null hypothesis is aχ12{\displaystyle \chi _{1}^{2}}distribution. It may be convenient to use a signed version of the score test, that is,sgn⁡(m(2)−x¯2)S{\displaystyle \operatorname {sgn} (m^{(2)}-{\bar {x}}^{2}){\sqrt {S}}}, following asymptotically a standard normal.
https://en.wikipedia.org/wiki/Hermite_distribution
Inprobability theoryandstatistics, theindex of dispersion,[1]dispersion index,coefficient of dispersion,relative variance, orvariance-to-mean ratio(VMR), like thecoefficient of variation, is anormalizedmeasure of thedispersionof aprobability distribution: it is a measure used to quantify whether a set of observed occurrences are clustered or dispersed compared to a standard statistical model. It is defined as the ratio of thevarianceσ2{\displaystyle \sigma ^{2}}to themeanμ{\displaystyle \mu }, It is also known as theFano factor, though this term is sometimes reserved forwindoweddata (the mean and variance are computed over a subpopulation), where the index of dispersion is used in the special case where the window is infinite. Windowing data is frequently done: the VMR is frequently computed over various intervals in time or small regions in space, which may be called "windows", and the resulting statistic called the Fano factor. It is only defined when the meanμ{\displaystyle \mu }is non-zero, and is generally only used for positive statistics, such ascount dataor time between events, or where the underlying distribution is assumed to be theexponential distributionorPoisson distribution. In this context, the observed dataset may consist of the times of occurrence of predefined events, such as earthquakes in a given region over a given magnitude, or of the locations in geographical space of plants of a given species. Details of such occurrences are first converted into counts of the numbers of events or occurrences in each of a set of equal-sized time- or space-regions. The above defines adispersion index for counts.[2]A different definition applies for adispersion index for intervals,[3]where the quantities treated are the lengths of the time-intervals between the events. Common usage is that "index of dispersion" means the dispersion index for counts. Some distributions, most notably thePoisson distribution, have equal variance and mean, giving them a VMR = 1. 
The geometric distribution and the negative binomial distribution have VMR > 1, while the binomial distribution has VMR < 1, and the constant random variable has VMR = 0. This yields the following classification: VMR = 0 (constant random variable, not dispersed), VMR < 1 (binomial, under-dispersed), VMR = 1 (Poisson), and VMR > 1 (negative binomial, over-dispersed). This can be considered analogous to the classification of conic sections by eccentricity; see Cumulants of particular probability distributions for details. The relevance of the index of dispersion is that it has a value of 1 when the probability distribution of the number of occurrences in an interval is a Poisson distribution. Thus the measure can be used to assess whether observed data can be modeled using a Poisson process. When the coefficient of dispersion is less than 1, a dataset is said to be "under-dispersed": this condition can relate to patterns of occurrence that are more regular than the randomness associated with a Poisson process. For instance, regular, periodic events will be under-dispersed. If the index of dispersion is larger than 1, a dataset is said to be over-dispersed. A sample-based estimate of the dispersion index can be used to construct a formal statistical hypothesis test for the adequacy of the model that a series of counts follows a Poisson distribution.[4][5] In terms of the interval-counts, over-dispersion corresponds to there being more intervals with low counts and more intervals with high counts, compared to a Poisson distribution: in contrast, under-dispersion is characterised by there being more intervals having counts close to the mean count, compared to a Poisson distribution. The VMR is also a good measure of the degree of randomness of a given phenomenon. For example, this technique is commonly used in currency management. For randomly diffusing particles (Brownian motion), the distribution of the number of particles inside a given volume is Poissonian, i.e., VMR = 1.
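A sample-based Poisson adequacy check along these lines can use the classical statistic (n − 1)·s²/x̄, which is approximately χ² with n − 1 degrees of freedom under the Poisson null; the sketch below (illustrative names) reports the estimated VMR and a two-sided p-value:

```python
import numpy as np
from scipy import stats

def poisson_dispersion_test(counts):
    """Two-sided test of VMR = 1 (Poisson) via (n-1)*s^2/xbar ~ chi2_{n-1}."""
    x = np.asarray(counts, dtype=float)
    n = x.size
    vmr = x.var(ddof=1) / x.mean()
    d = (n - 1) * vmr
    p_upper = stats.chi2.sf(d, n - 1)   # small -> evidence of over-dispersion
    p_lower = stats.chi2.cdf(d, n - 1)  # small -> evidence of under-dispersion
    return vmr, 2 * min(p_upper, p_lower)
```

A small p-value from the upper tail indicates over-dispersion (clustering); from the lower tail, under-dispersion (regularity).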
Therefore, to assess if a given spatial pattern (assuming you have a way to measure it) is due purely to diffusion or if some particle-particle interaction is involved : divide the space into patches, Quadrats or Sample Units (SU), count the number of individuals in each patch or SU, and compute the VMR. VMRs significantly higher than 1 denote a clustered distribution, whererandom walkis not enough to smother the attractive inter-particle potential. The first to discuss the use of a test to detect deviations from a Poisson or binomial distribution appears to have been Lexis in 1877. One of the tests he developed was theLexis ratio. This index was first used in botany byClaphamin 1936. Hoelstudied the first four moments of its distribution.[6]He found that the approximation to the χ2statistic is reasonable ifμ> 5. For highly skewed distributions, it may be more appropriate to use a linear loss function, as opposed to a quadratic one. The analogous coefficient of dispersion in this case is the ratio of the average absolute deviation from the median to the median of the data,[7]or, in symbols: wherenis the sample size,mis the sample median and the sum taken over the whole sample.Iowa,New YorkandSouth Dakotause this linear coefficient of dispersion to estimate dues taxes.[8][9][10] For a two-sample test in which the sample sizes are large, both samples have the same median, and differ in the dispersion around it, a confidence interval for the linear coefficient of dispersion is bounded inferiorly by wheretjis the mean absolute deviation of thejthsample andzαis the confidence interval length for a normal distribution of confidenceα(e.g., forα= 0.05,zα= 1.96).[7]
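The linear coefficient of dispersion defined above (mean absolute deviation from the median, divided by the median) is a one-line computation; a minimal sketch:

```python
import numpy as np

def linear_coefficient_of_dispersion(x):
    """Mean absolute deviation from the sample median, divided by the median."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    return np.mean(np.abs(x - m)) / m
```

For the sample [1, 2, 3, 4, 5] the median is 3 and the mean absolute deviation is 1.2, giving a coefficient of 0.4.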
https://en.wikipedia.org/wiki/Index_of_dispersion
Inprobability theoryandstatistics, thenegative binomial distributionis adiscrete probability distributionthat models the number of failures in a sequence of independent and identically distributedBernoulli trialsbefore a specified/constant/fixed number of successesr{\displaystyle r}occur.[2]For example, we can define rolling a 6 on some dice as a success, and rolling any other number as a failure, and ask how many failure rolls will occur before we see the third success (r=3{\displaystyle r=3}). In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution. An alternative formulation is to model the number of total trials (instead of the number of failures). In fact, for a specified (non-random) number of successes(r), the number of failures(n−r)is random because the number of total trials(n)is random. For example, we could use the negative binomial distribution to model the number of daysn(random) a certain machine works (specified byr) before it breaks down. The negative binomial distribution has a varianceμ/p{\displaystyle \mu /p}, with the distribution becoming identical to Poisson in the limitp→1{\displaystyle p\to 1}for a given meanμ{\displaystyle \mu }(i.e. when the failures are increasingly rare). Herep∈[0,1]{\displaystyle p\in [0,1]}is the success probability of each Bernoulli trial. This can make the distribution a usefuloverdispersedalternative to the Poisson distribution, for example for arobustmodification ofPoisson regression. In epidemiology, it has been used to model disease transmission for infectious diseases where the likely number of onward infections may vary considerably from individual to individual and from setting to setting.[3]More generally, it may be appropriate where events have positively correlated occurrences causing a largervariancethan if the occurrences were independent, due to a positivecovarianceterm. 
The term "negative binomial" is likely due to the fact that a certainbinomial coefficientthat appears in the formula for theprobability mass functionof the distribution can be written more simply with negative numbers.[4] Imagine a sequence of independentBernoulli trials: each trial has two potential outcomes called "success" and "failure." In each trial the probability of success isp{\displaystyle p}and of failure is1−p{\displaystyle 1-p}. We observe this sequence until a predefined numberr{\displaystyle r}of successes occurs. Then the random number of observed failures,X{\displaystyle X}, follows thenegative binomialdistribution: Theprobability mass functionof the negative binomial distribution is whereris the number of successes,kis the number of failures, andpis the probability of success on each trial. Here, the quantity in parentheses is thebinomial coefficient, and is equal to Note thatΓ(r)is theGamma function. There arekfailures chosen fromk+r− 1trials rather thank+rbecause the last of thek+rtrials is by definition a success. This quantity can alternatively be written in the following manner, explaining the name "negative binomial": Note that by the last expression and thebinomial series, for every0 ≤p< 1andq=1−p{\displaystyle q=1-p}, hence the terms of the probability mass function indeed add up to one as below. To understand the above definition of the probability mass function, note that the probability for every specific sequence ofrsuccesses andkfailures ispr(1 −p)k, because the outcomes of thek+rtrials are supposed to happenindependently. Since ther-th success always comes last, it remains to choose thektrials with failures out of the remainingk+r− 1trials. The above binomial coefficient, due to its combinatorial interpretation, gives precisely the number of all these sequences of lengthk+r− 1. 
Thecumulative distribution functioncan be expressed in terms of theregularized incomplete beta function:[2][5] (This formula is using the same parameterization as in the article's table, withrthe number of successes, andp=r/(r+μ){\displaystyle p=r/(r+\mu )}withμ{\displaystyle \mu }the mean.) It can also be expressed in terms of thecumulative distribution functionof thebinomial distribution:[6] Some sources may define the negative binomial distribution slightly differently from the primary one here. The most common variations are where the random variableXis counting different things. These variations can be seen in the table here: (using equivalent binomial) (simplified using:n=k+r{\textstyle n=k+r}) [9][10][11] Each of the four definitions of the negative binomial distribution can be expressed in slightly different but equivalent ways. The first alternative formulation is simply an equivalent form of the binomial coefficient, that is:(ab)=(aa−b)for0≤b≤a{\textstyle {\binom {a}{b}}={\binom {a}{a-b}}\quad {\text{for }}\ 0\leq b\leq a}. The second alternate formulation somewhat simplifies the expression by recognizing that the total number of trials is simply the number of successes and failures, that is:n=r+k{\textstyle n=r+k}. These second formulations may be more intuitive to understand, however they are perhaps less practical as they have more terms. In negative binomial regression,[15]the distribution is specified in terms of its mean,m=r(1−p)p{\textstyle m={\frac {r(1-p)}{p}}}, which is then related to explanatory variables as inlinear regressionor othergeneralized linear models. From the expression for the meanm, one can derivep=rm+r{\textstyle p={\frac {r}{m+r}}}and1−p=mm+r{\textstyle 1-p={\frac {m}{m+r}}}. Then, substituting these expressions inthe one for the probability mass function whenris real-valued, yields this parametrization of the probability mass function in terms ofm: The variance can then be written asm+m2r{\textstyle m+{\frac {m^{2}}{r}}}. 
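The mean parameterization used in negative binomial regression can be sketched as follows; lgamma replaces the binomial coefficient so that real-valued r is handled (names are illustrative):

```python
from math import exp, lgamma, log

def nb_pmf_mean(k, m, r):
    """NB pmf in terms of mean m and dispersion r, using p = r/(m + r).

    lgamma keeps the generalized binomial coefficient
    Gamma(k + r) / (Gamma(r) * k!) stable for real-valued r.
    """
    p = r / (m + r)
    log_coef = lgamma(k + r) - lgamma(r) - lgamma(k + 1)
    return exp(log_coef + r * log(p) + k * log(1 - p))
```

For integer r this agrees with the combinatorial form: with m = 4.5 and r = 3 (so p = 0.4), nb_pmf_mean(2, 4.5, 3) equals C(4, 2)·0.4³·0.6² = 0.13824.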
Some authors prefer to set {\textstyle \alpha ={\frac {1}{r}}}, and express the variance as {\textstyle m+\alpha m^{2}}. In this context, and depending on the author, either the parameter r or its reciprocal α is referred to as the "dispersion parameter", "shape parameter" or "clustering coefficient",[16] or the "heterogeneity"[15] or "aggregation" parameter.[10] The term "aggregation" is particularly used in ecology when describing counts of individual organisms. Decrease of the aggregation parameter r towards zero corresponds to increasing aggregation of the organisms; increase of r towards infinity corresponds to absence of aggregation, as can be described by Poisson regression. Sometimes the distribution is parameterized in terms of its mean μ and variance σ²: {\displaystyle p={\frac {\mu }{\sigma ^{2}}},\qquad r={\frac {\mu ^{2}}{\sigma ^{2}-\mu }}.} Another popular parameterization uses r and the failure odds β = (1 − p)/p: {\displaystyle \Pr(X=k)={\binom {k+r-1}{k}}{\frac {\beta ^{k}}{(1+\beta )^{k+r}}}.} Hospital length of stay is an example of real-world data that can be modelled well with a negative binomial distribution via negative binomial regression.[17][18] Pat Collis is required to sell candy bars to raise money for the 6th grade field trip. Pat is (somewhat harshly) not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.6 probability of selling one candy bar and a 0.4 probability of selling nothing. What's the probability of selling the last candy bar at the n-th house? Successfully selling candy enough times is what defines our stopping criterion (as opposed to failing to sell it), so k in this case represents the number of failures and r represents the number of successes. Recall that the NB(r, p) distribution describes the probability of k failures and r successes in k + r Bernoulli(p) trials with success on the last trial. Selling five candy bars means getting five successes. The number of trials (i.e. houses) this takes is therefore k + 5 = n.
The random variable we are interested in is the number of houses, so we substitute k = n − 5 into a NB(5, 0.4) mass function and obtain the following mass function of the distribution of houses (for n ≥ 5): {\displaystyle f(n)={\binom {n-1}{n-5}}\,(0.4)^{n-5}\,(0.6)^{5}.} What's the probability that Pat finishes on the tenth house? {\displaystyle f(10)={\binom {9}{5}}\,(0.4)^{5}\,(0.6)^{5}\approx 0.1003.} What's the probability that Pat finishes on or before reaching the eighth house? To finish on or before the eighth house, Pat must finish at the fifth, sixth, seventh, or eighth house. Sum those probabilities: {\displaystyle f(5)+f(6)+f(7)+f(8)\approx 0.0778+0.1555+0.1866+0.1742\approx 0.5941.} What's the probability that Pat exhausts all 30 houses that happen to stand in the neighborhood? This can be expressed as the probability that Pat does not finish on the fifth through the thirtieth house: {\displaystyle 1-\sum _{n=5}^{30}f(n).} Because of the rather high probability that Pat will sell to each house (60 percent), the probability of her not fulfilling her quest is vanishingly slim. The expected total number of trials needed to see r successes is r/p. Thus, the expected number of failures would be this value, minus the successes: r/p − r = r(1 − p)/p. The expected total number of failures in a negative binomial distribution with parameters (r, p) is r(1 − p)/p. To see this, imagine an experiment simulating the negative binomial is performed many times. That is, a set of trials is performed until r successes are obtained, then another set of trials, and then another, etc. Write down the number of trials performed in each experiment: a, b, c, ..., and set a + b + c + ... = N. Now we would expect about Np successes in total. Say the experiment was performed n times. Then there are nr successes in total. So we would expect nr = Np, so N/n = r/p. See that N/n is just the average number of trials per experiment. That is what we mean by "expectation". The average number of failures per experiment is N/n − r = r/p − r = r(1 − p)/p. This agrees with the mean given in the box on the right-hand side of this page. A rigorous derivation can be done by representing the negative binomial distribution as the sum of waiting times.
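The candy-bar arithmetic above can be reproduced in a few lines of Python (a sketch; `f` is my name for the mass function over houses):

```python
from math import comb

def f(n):
    """Probability that Pat sells the fifth (last) bar at house n, n >= 5."""
    return comb(n - 1, n - 5) * 0.4 ** (n - 5) * 0.6 ** 5

p10 = f(10)                                   # finishes exactly at the 10th house
p_by_8 = sum(f(n) for n in range(5, 9))       # finishes on or before the 8th
p_fail = 1 - sum(f(n) for n in range(5, 31))  # needs more than all 30 houses
```

The last quantity is the "vanishingly slim" probability that Pat exhausts the neighborhood without selling all five bars.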
Let {\displaystyle X_{r}\sim \operatorname {NB} (r,p)} with the convention that X represents the number of failures observed before r successes, with the probability of success being p. And let {\displaystyle Y_{i}\sim \operatorname {Geom} (p)}, where Y_i represents the number of failures before seeing a success. We can think of Y_i as the waiting time (number of failures) between the (i − 1)-th and i-th success. Thus {\displaystyle X_{r}=\sum _{i=1}^{r}Y_{i}.} The mean is {\displaystyle E[X_{r}]=\sum _{i=1}^{r}E[Y_{i}]={\frac {r(1-p)}{p}},} which follows from the fact {\displaystyle E[Y_{i}]=(1-p)/p}. When counting the number of failures before the r-th success, the variance is r(1 − p)/p². When counting the number of successes before the r-th failure, as in alternative formulation (3) above, the variance is rp/(1 − p)². Suppose Y is a random variable with a binomial distribution with parameters n and p. Assume p + q = 1, with p, q ≥ 0; then {\displaystyle (p+q)^{n}=1.} Using Newton's binomial theorem, this can equally be written as {\displaystyle (p+q)^{n}=\sum _{k=0}^{\infty }{\binom {n}{k}}p^{k}q^{n-k},} in which the upper bound of summation is infinite. In this case, the binomial coefficient is defined when n is a real number, instead of just a positive integer. But in our case of the binomial distribution it is zero when k > n. Now suppose r > 0 and we use a negative exponent: {\displaystyle 1=p^{r}\cdot p^{-r}=p^{r}(1-q)^{-r}=p^{r}\sum _{k=0}^{\infty }{\binom {-r}{k}}(-q)^{k}.} Then all of the terms are positive, and the term {\displaystyle p^{r}{\binom {-r}{k}}(-q)^{k}} is just the probability that the number of failures before the r-th success is equal to k, provided r is an integer. (If r is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.) Now we also allow non-integer values of r. Recall from above that the sum of independent negative-binomially distributed random variables with the same parameter p is again negative binomial, with the r-values added. This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is infinitely divisible.
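Since X_r is a sum of r independent geometric variables, its pmf must equal the r-fold convolution of the geometric pmf. A quick deterministic check for r = 3 (truncating the support at K terms, which is exact for the first K probabilities):

```python
from math import comb

p = 0.35
K = 60  # support grid; convolution values below K are exact

# geometric pmf on {0, 1, 2, ...}: failures before one success
geom = [(1 - p) ** k * p for k in range(K)]

def convolve(a, b):
    """Discrete convolution of two pmfs on {0, ..., K-1}."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(K)]

dist = geom
for _ in range(2):          # convolve three geometrics in total -> r = 3
    dist = convolve(dist, geom)

# closed-form NB(3, p) pmf for comparison
nb = [comb(k + 3 - 1, k) * (1 - p) ** k * p ** 3 for k in range(K)]
```

The truncated mean of `nb` should also be close to r(1 − p)/p from the waiting-time argument.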
The following recurrence relation holds for the probability mass function: {\displaystyle (k+1)\Pr(X=k+1)=(k+r)(1-p)\Pr(X=k),\qquad \Pr(X=0)=p^{r}.} Similar recurrences hold for the moments {\displaystyle m_{k}=\mathbb {E} (X^{k})} and for the cumulants. Consider a sequence of negative binomial random variables where the stopping parameter r goes to infinity, while the probability p of success in each trial goes to one, in such a way as to keep the mean of the distribution (i.e. the expected number of failures) constant. Denoting this mean as λ, the parameter p will be p = r/(r + λ). Under this parametrization the probability mass function will be {\displaystyle \Pr(X=k)={\frac {\Gamma (k+r)}{k!\,\Gamma (r)}}\left({\frac {\lambda }{r+\lambda }}\right)^{k}\left({\frac {r}{r+\lambda }}\right)^{r}={\frac {\lambda ^{k}}{k!}}\cdot {\frac {\Gamma (k+r)}{\Gamma (r)\,(r+\lambda )^{k}}}\cdot {\frac {1}{\left(1+{\frac {\lambda }{r}}\right)^{r}}}.} Now if we consider the limit as r → ∞, the second factor will converge to one, and the third to the exponential function: {\displaystyle \lim _{r\to \infty }\Pr(X=k)={\frac {\lambda ^{k}}{k!}}\,e^{-\lambda },} which is the mass function of a Poisson-distributed random variable with expected value λ. In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r. The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(λ) distribution, where λ is itself a random variable, distributed as a gamma distribution with shape r and scale θ = (1 − p)/p or correspondingly rate β = p/(1 − p). To display the intuition behind this statement, consider two independent Poisson processes, "Success" and "Failure", with intensities p and 1 − p. Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure.
If r is a counting number, the coin tosses show that the count of successes before the r-th failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the r-th occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 − p, i.e., T is gamma-distributed with shape parameter r and intensity 1 − p. Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity 1 − p. The preceding paragraph follows, because λ = pT is gamma-distributed with shape parameter r and intensity (1 − p)/p. The following formal derivation (which does not depend on r being a counting number) confirms the intuition. Because of this, the negative binomial distribution is also known as the gamma–Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma–Poisson distribution.[19] If Y_r is a random variable following the negative binomial distribution with parameters r and p, and support {0, 1, 2, ...}, then Y_r is a sum of r independent variables following the geometric distribution (on {0, 1, 2, ...}) with parameter p. As a result of the central limit theorem, Y_r (properly scaled and shifted) is therefore approximately normal for sufficiently large r. Furthermore, if B_{s+r} is a random variable following the binomial distribution with parameters s + r and p, then {\displaystyle \Pr(Y_{r}\leq s)=\Pr(B_{s+r}\geq r).} In this sense, the negative binomial distribution is the "inverse" of the binomial distribution. The sum of independent negative-binomially distributed random variables r_1 and r_2 with the same value for parameter p is negative-binomially distributed with the same p but with r-value r_1 + r_2.
The negative binomial distribution is infinitely divisible, i.e., if Y has a negative binomial distribution, then for any positive integer n, there exist independent identically distributed random variables Y_1, ..., Y_n whose sum has the same distribution that Y has. The negative binomial distribution NB(r, p) can be represented as a compound Poisson distribution: Let {\textstyle (Y_{n})_{n\,\in \,\mathbb {N} }} denote a sequence of independent and identically distributed random variables, each one having the logarithmic series distribution Log(p), with probability mass function {\displaystyle f(k;p)={\frac {-p^{k}}{k\ln(1-p)}},\qquad k\in \mathbb {N} .} Let N be a random variable, independent of the sequence, and suppose that N has a Poisson distribution with mean λ = −r ln(1 − p). Then the random sum {\displaystyle X=\sum _{n=1}^{N}Y_{n}} is NB(r, p)-distributed. To prove this, we calculate the probability generating function G_X of X, which is the composition of the probability generating functions G_N and G_{Y_1}. Using {\displaystyle G_{N}(z)=\exp(\lambda (z-1))} and {\displaystyle G_{Y_{1}}(z)={\frac {\ln(1-pz)}{\ln(1-p)}},\qquad |z|<{\frac {1}{p}},} we obtain {\displaystyle G_{X}(z)=G_{N}(G_{Y_{1}}(z))=\exp \left(-r\ln(1-pz)+r\ln(1-p)\right)=\left({\frac {1-p}{1-pz}}\right)^{r},} which is the probability generating function of the NB(r, p) distribution. The following table describes four distributions related to the number of successes in a sequence of draws: The negative binomial, along with the Poisson and binomial distributions, is a member of the (a, b, 0) class of distributions. All three of these distributions are special cases of the Panjer distribution. They are also members of a natural exponential family. Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures. In estimating p, the minimum variance unbiased estimator is {\displaystyle {\hat {p}}={\frac {r-1}{r+k-1}}.} When r is known, the maximum likelihood estimate of p is {\displaystyle {\tilde {p}}={\frac {r}{r+k}},} but this is a biased estimate.
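The probability-generating-function identity in the compound Poisson representation can be verified numerically. A sketch, using the same convention as this section (Log(p) summands and λ = −r ln(1 − p)), so the resulting NB pgf is ((1 − p)/(1 − pz))^r:

```python
from math import exp, log

r, p = 3.0, 0.4
lam = -r * log(1 - p)          # Poisson mean for the number of summands N

G_Y = lambda z: log(1 - p * z) / log(1 - p)   # pgf of Log(p)
G_N = lambda s: exp(lam * (s - 1))            # pgf of Poisson(lam)
G_X = lambda z: G_N(G_Y(z))                   # pgf of the random sum X

# pgf of NB(r, p) in this section's convention
G_NB = lambda z: ((1 - p) / (1 - p * z)) ** r
```

The two generating functions agree for every z in the common domain of convergence, which is the content of the proof above.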
Its inverse, (r + k)/r, is an unbiased estimate of 1/p, however.[20] When r is unknown, the maximum likelihood estimator for p and r together only exists for samples for which the sample variance is larger than the sample mean.[21] The likelihood function for N iid observations (k_1, ..., k_N) is {\displaystyle L(r,p)=\prod _{i=1}^{N}f(k_{i};r,p),} from which we calculate the log-likelihood function {\displaystyle \ell (r,p)=\sum _{i=1}^{N}{\bigl (}\ln \Gamma (k_{i}+r)-\ln(k_{i}!)-\ln \Gamma (r)+r\ln p+k_{i}\ln(1-p){\bigr )}.} To find the maximum we take the partial derivatives with respect to r and p and set them equal to zero: {\displaystyle {\frac {\partial \ell }{\partial p}}=\sum _{i=1}^{N}\left({\frac {r}{p}}-{\frac {k_{i}}{1-p}}\right)=0\quad {\text{and}}\quad {\frac {\partial \ell }{\partial r}}=\sum _{i=1}^{N}{\bigl (}\psi (k_{i}+r)-\psi (r)+\ln p{\bigr )}=0,} where {\displaystyle \psi (x)=\Gamma '(x)/\Gamma (x)} is the digamma function. Solving the first equation for p gives: {\displaystyle p={\frac {Nr}{Nr+\sum _{i}k_{i}}}.} Substituting this in the second equation gives: {\displaystyle \sum _{i=1}^{N}\psi (k_{i}+r)-N\psi (r)+N\ln {\frac {r}{r+{\bar {k}}}}=0,} where {\displaystyle {\bar {k}}} is the sample mean. This equation cannot be solved for r in closed form. If a numerical solution is desired, an iterative technique such as Newton's method can be used. Alternatively, the expectation–maximization algorithm can be used.[21] Let k and r be integers with k non-negative and r positive. In a sequence of independent Bernoulli trials with success probability p, the negative binomial gives the probability of k successes and r failures, with a failure on the last trial. Therefore, the negative binomial distribution represents the probability distribution of the number of successes before the r-th failure in a Bernoulli process, with probability p of success on each trial. Consider the following example. Suppose we repeatedly throw a die, and consider a 1 to be a failure. The probability of success on each trial is 5/6. The number of successes before the third failure belongs to the infinite set {0, 1, 2, 3, ...}. That number of successes is a negative-binomially distributed random variable. When r = 1 we get the probability distribution of the number of successes before the first failure (i.e. the probability of the first failure occurring on the (k + 1)-st trial), which is a geometric distribution: {\displaystyle \Pr(X=k)=p^{k}(1-p).} The negative binomial distribution, especially in its alternative parameterization described above, can be used as an alternative to the Poisson distribution. It is especially useful for discrete data over an unbounded positive range whose sample variance exceeds the sample mean.
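The maximization can be sketched with the Python standard library alone. Since p has the closed form above given r, one can maximize the profile log-likelihood over r; here a crude grid search stands in for Newton's method (the data are hypothetical, invented for illustration):

```python
from math import lgamma, log

data = [2, 5, 1, 0, 7, 3, 4, 6, 2, 9]   # hypothetical failure counts k_i
N, S = len(data), sum(data)

def loglik(r):
    """Profile log-likelihood: p replaced by its closed-form MLE N*r/(N*r + S)."""
    p = N * r / (N * r + S)
    return sum(lgamma(k + r) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(1 - p) for k in data)

# the MLE of r exists only when the sample variance exceeds the sample mean
mean = S / N
var = sum((k - mean) ** 2 for k in data) / (N - 1)

# crude 1-D search over r; a proper routine would use Newton's method or EM
r_hat = max((0.1 * i for i in range(1, 400)), key=loglik)
p_hat = N * r_hat / (N * r_hat + S)
```

The grid search is only a stand-in: it locates the maximizer to within the grid spacing, which suffices to illustrate the procedure.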
In such cases, the observations are overdispersed with respect to a Poisson distribution, for which the mean is equal to the variance. Hence a Poisson distribution is not an appropriate model. Since the negative binomial distribution has one more parameter than the Poisson, the second parameter can be used to adjust the variance independently of the mean. See Cumulants of some discrete probability distributions. An application of this is to annual counts of tropical cyclones in the North Atlantic, or to monthly to 6-monthly counts of wintertime extratropical cyclones over Europe, for which the variance is greater than the mean.[22][23][24] In the case of modest overdispersion, this may produce substantially similar results to an overdispersed Poisson distribution.[25][26] Negative binomial modeling is widely employed in ecology and biodiversity research for analyzing count data where overdispersion is very common. This is because overdispersion is indicative of biological aggregation, such as species or communities forming clusters. Ignoring overdispersion can lead to significantly inflated model parameters, resulting in misleading statistical inferences. The negative binomial distribution effectively addresses overdispersed counts by permitting the variance to vary quadratically with the mean. An additional dispersion parameter governs the slope of the quadratic term, determining the severity of overdispersion. The model's quadratic mean–variance relationship proves to be a realistic approach for handling overdispersion, as supported by empirical evidence from many studies.
Overall, the NB model offers two attractive features: (1) the convenient interpretation of the dispersion parameter as an index of clustering or aggregation, and (2) its tractable form, featuring a closed expression for the probability mass function.[27] In genetics, the negative binomial distribution is commonly used to model data in the form of discrete sequence read counts from high-throughput RNA and DNA sequencing experiments.[28][29][30][31] In the epidemiology of infectious diseases, the negative binomial has been used as a better option than the Poisson distribution to model overdispersed counts of secondary infections from one infected case (super-spreading events).[32] The negative binomial distribution has been the most effective statistical model for a broad range of multiplicity observations in particle collision experiments, e.g., {\displaystyle p{\bar {p}},\ hh,\ hA,\ AA,\ e^{+}e^{-}}[33][34][35][36][37] (see [38] for an overview), and is argued to be a scale-invariant property of matter,[39][40] providing the best fit for astronomical observations, where it predicts the number of galaxies in a region of space.[41][42][43][44] The phenomenological justification for the effectiveness of the negative binomial distribution in these contexts remained unknown for fifty years, since its first observation in 1973.[45] In 2023, a proof from first principles was eventually demonstrated by Scott V.
Tezlaf, where it was shown that the negative binomial distribution emerges from symmetries in the dynamical equations of a canonical ensemble of particles in Minkowski space.[46] Roughly, given an expected number of trials {\displaystyle \langle n\rangle } and an expected number of successes {\displaystyle \langle r\rangle }, an isomorphic set of equations can be identified with the parameters of a relativistic current density of a canonical ensemble of massive particles, where {\displaystyle \rho _{0}} is the rest density, {\displaystyle \langle \rho ^{2}\rangle } is the relativistic mean square density, {\displaystyle \langle j^{2}\rangle } is the relativistic mean square current density, and {\displaystyle \langle \beta _{v}^{2}\rangle =\langle v^{2}\rangle /c^{2}}, where {\displaystyle \langle v^{2}\rangle } is the mean square speed of the particle ensemble and {\displaystyle c} is the speed of light, such that a bijective map can be established between the two parameter sets. A rigorous alternative proof of the above correspondence has also been demonstrated through quantum mechanics via the Feynman path integral.[46] This distribution was first studied in 1713 by Pierre Remond de Montmort in his Essay d'analyse sur les jeux de hazard, as the distribution of the number of trials required in an experiment to obtain a given number of successes.[47] It had previously been mentioned by Pascal.[48]
https://en.wikipedia.org/wiki/Negative_binomial_distribution
Poisson clumping, or Poisson bursts,[1] is a phenomenon where random events may appear to occur in clusters, clumps, or bursts. Poisson clumping is named for the 19th-century French mathematician Siméon Denis Poisson,[1] known for his work on definite integrals, electromagnetic theory, and probability theory, and after whom the Poisson distribution is also named. The Poisson process provides a description of random independent events occurring with uniform probability through time and/or space. The expected number λ of events in a time interval or area of a given measure is proportional to that measure. The distribution of the number of events follows a Poisson distribution entirely determined by the parameter λ. If λ is small, events are rare, but may nevertheless occur in clumps (referred to as Poisson clumps or bursts) purely by chance.[2] In many cases there is no cause behind such indefinite groupings other than the nature of randomness following this distribution.[3] However, not all clumping in nature can be explained by this property; earthquakes, for example, cluster because local seismic activity produces groups of local aftershocks, and in that case a Weibull distribution has been proposed.[4] Poisson clumping is used to explain marked increases or decreases in the frequency of an event, such as shark attacks, "coincidences", birthdays, heads or tails from coin tosses, and e-mail correspondence.[5][6] The Poisson clumping heuristic (PCH), published by David Aldous in 1989,[7] is a model for finding first-order approximations over different areas in a large class of stationary probability models. The probability models have a specific monotonicity property with large exclusions. The probability that this will achieve a large value is asymptotically small and is distributed in a Poisson fashion.[8]
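A small numeric illustration of why clumps of rare events are nearly inevitable over many observation windows. The numbers here are illustrative assumptions (rate 0.5 per window, 365 windows treated as independent), not from the article:

```python
from math import exp, factorial

def pois_pmf(n, lam):
    return exp(-lam) * lam ** n / factorial(n)

lam = 0.5       # expected events per window: events are rare
windows = 365   # e.g. daily windows over a year

# probability a single window holds 3 or more events (a "clump")
p_clump = 1 - sum(pois_pmf(n, lam) for n in range(3))

# probability at least one clump appears somewhere,
# treating the windows as independent
p_any = 1 - (1 - p_clump) ** windows
```

Each individual window is very unlikely to hold a clump, yet over a year some clump is almost certain, which is the intuition the heuristic formalizes.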
https://en.wikipedia.org/wiki/Poisson_clumping
In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another.[1] The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after the French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.[2][3] This point process is used as a mathematical model for seemingly random processes in numerous disciplines including astronomy,[4] biology,[5] ecology,[6] geology,[7] seismology,[8] physics,[9] economics,[10] image processing,[11][12] and telecommunications.[13][14] The Poisson point process is often defined on the real number line, where it can be considered a stochastic process. It is used, for example, in queueing theory[15] to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or occurrences of earthquakes.
In theplane, the point process, also known as aspatial Poisson process,[16]can represent the locations of scattered objects such as transmitters in awireless network,[13][17][18][19]particlescolliding into a detector or trees in a forest.[20]The process is often used in mathematical models and in the related fields of spatial point processes,[21]stochastic geometry,[1]spatial statistics[21][22]andcontinuum percolation theory.[23] The point process depends on a single mathematical object, which, depending on the context, may be aconstant, alocally integrable functionor, in more general settings, aRadon measure.[24]In the first case, the constant, known as therateorintensity, is the averagedensityof the points in the Poisson process located in some region of space. The resulting point process is called ahomogeneousorstationary Poisson point process.[25]In the second case, the point process is called aninhomogeneousornonhomogeneousPoisson point process, and the average density of points depend on the location of the underlying space of the Poisson point process.[26]The wordpointis often omitted,[27]but there are otherPoisson processesof objects, which, instead of points, consist of more complicated mathematical objects such aslinesandpolygons, and such processes can be based on the Poisson point process.[28]Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of thegeneralized renewal process. 
Depending on the setting, the process has several equivalent definitions[29]as well as definitions of varying generality owing to its many applications and characterizations.[30]The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model;[31][32]in higher dimensions such as the plane where it plays a role instochastic geometry[1]andspatial statistics;[33]or on more general mathematical spaces.[34]Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and points processes in general vary according to the context.[35] Despite all this, the Poisson point process has two key properties—the Poisson property and the independence property— that play an essential role in all settings where the Poisson point process is used.[24][36]The two properties are not logically independent; indeed, the Poisson distribution of point counts implies the independence property,[a]while in the converse direction the assumptions that: (i) the point process is simple, (ii) has no fixed atoms, and (iii) is a.s. boundedly finite are required.[37] A Poisson point process is characterized via thePoisson distribution. The Poisson distribution is the probability distribution of arandom variableN{\textstyle N}(called aPoisson random variable) such that the probability thatN{\displaystyle \textstyle N}equalsn{\displaystyle \textstyle n}is given by: wheren!{\textstyle n!}denotesfactorialand the parameterΛ{\textstyle \Lambda }determines the shape of the distribution. (In fact,Λ{\textstyle \Lambda }equals the expected value ofN{\textstyle N}.) By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable.[36] Consider a collection ofdisjointand bounded subregions of the underlying space. 
By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such ascomplete randomness,complete independence,[38]orindependent scattering[39][40]and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general,[41]which motivates the Poisson process being sometimes called apurelyorcompletelyrandom process.[38] If a Poisson point process has a parameter of the formΛ=νλ{\textstyle \Lambda =\nu \lambda }, whereν{\textstyle \nu }is Lebesgue measure (that is, it assigns length, area, or volume to sets) andλ{\textstyle \lambda }is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, calledrateorintensity, is related to the expected (or average) number of Poisson points existing in some bounded region,[42][43]whererateis usually used when the underlying space has one dimension.[42]The parameterλ{\textstyle \lambda }can be interpreted as the average number of points per some unit of extent such aslength, area,volume, or time, depending on the underlying mathematical space, and it is also called themean densityormean rate;[44]seeTerminology. The homogeneous Poisson point process, when considered on the positive half-line, can be defined as acounting process, a type of stochastic process, which can be denoted as{N(t),t≥0}{\textstyle \{N(t),t\geq 0\}}.[29][32]A counting process represents the total number of occurrences or events that have happened up to and including timet{\textstyle t}. 
A counting process is a homogeneous Poisson counting process with rate λ > 0 if it has the following three properties:[29][32] N(0) = 0; it has independent increments; and the number of events in any interval of length t is a Poisson random variable with mean λt. The last property implies: {\displaystyle \operatorname {E} [N(t)]=\lambda t.} In other words, the probability of the random variable N(t) being equal to n is given by: {\displaystyle \Pr\{N(t)=n\}={\frac {(\lambda t)^{n}}{n!}}e^{-\lambda t}.} The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean 1/λ.[45] The time differences between the events or arrivals are known as interarrival[46] or interoccurrence times.[45] Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval (a, b]. For the homogeneous Poisson point process on the real line with parameter λ > 0, the probability of this random number of points, written here as N(a, b], being equal to some counting number n is given by:[47] {\displaystyle \Pr\{N(a,b]=n\}={\frac {[\lambda (b-a)]^{n}}{n!}}e^{-\lambda (b-a)}.} For some positive integer k, the homogeneous Poisson point process has the finite-dimensional distribution given by:[47] {\displaystyle \Pr\{N(a_{i},b_{i}]=n_{i},\ i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {[\lambda (b_{i}-a_{i})]^{n_{i}}}{n_{i}!}}e^{-\lambda (b_{i}-a_{i})},} where the real numbers {\textstyle a_{i}<b_{i}\leq a_{i+1}}. In other words, N(a, b] is a Poisson random variable with mean λ(b − a), where a ≤ b.
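The interarrival characterization gives a direct way to simulate the process on [0, T]: accumulate i.i.d. exponential gaps with mean 1/λ. A minimal sketch (rate and horizon are arbitrary choices of mine):

```python
import random

random.seed(42)
lam, T = 2.0, 100.0   # rate and time horizon (illustrative values)

# build arrival times from i.i.d. exponential interarrival gaps
times, t = [], 0.0
while True:
    t += random.expovariate(lam)   # gap with mean 1/lam
    if t > T:
        break
    times.append(t)

count = len(times)   # realization of N(T), which is Poisson(lam * T)
```

The realized count should be near λT = 200, and the arrival times form a strictly increasing sequence in (0, T].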
Furthermore, the number of points in any two disjoint intervals, say,(a1,b1]{\textstyle (a_{1},b_{1}]}and(a2,b2]{\textstyle (a_{2},b_{2}]}are independent of each other, and this extends to any finite number of disjoint intervals.[47]In the queueing theory context, one can consider a point existing (in an interval) as anevent, but this is different to the wordeventin the probability theory sense.[b]It follows thatλ{\textstyle \lambda }is the expected number ofarrivalsthat occur per unit of time.[32] The previous definition has two important features shared by Poisson point processes in general:[47][24] Furthermore, it has a third feature related to just the homogeneous Poisson point process:[48] In other words, for any finitet>0{\textstyle t>0}, the random variableN(a+t,b+t]{\textstyle N(a+t,b+t]}is independent oft{\textstyle t}, so it is also called a stationary Poisson process.[47] The quantityλ(bi−ai){\textstyle \lambda (b_{i}-a_{i})}can be interpreted as the expected or average number of points occurring in the interval(ai,bi]{\textstyle (a_{i},b_{i}]}, namely: whereE{\displaystyle \operatorname {E} }denotes theexpectationoperator. In other words, the parameterλ{\textstyle \lambda }of the Poisson process coincides with thedensityof points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers.[49]More specifically, with probability one: wherelim{\textstyle \lim }denotes thelimitof a function, andλ{\displaystyle \lambda }is expected number of arrivals occurred per unit of time. The distance between two consecutive points of a point process on the real line will be anexponential random variablewith parameterλ{\textstyle \lambda }(or equivalently, mean1/λ{\textstyle 1/\lambda }). 
This implies that the points have thememorylessproperty: the existence of one point existing in a finite interval does not affect the probability (distribution) of other points existing,[50][51]but this property has no natural equivalence when the Poisson process is defined on a space with higher dimensions.[52] A point process withstationary incrementsis sometimes said to beorderly[53]orregularif:[54] wherelittle-o notationis being used. A point process is called asimple point processwhen the probability of any of its two points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple,[55]which is the case for the homogeneous Poisson point process.[56] On the real line, the homogeneous Poisson point process has a connection to the theory ofmartingalesvia the following characterization: a point process is the homogeneous Poisson point process if and only if is a martingale.[57][58] On the real line, the Poisson process is a type of continuous-timeMarkov processknown as abirth process, a special case of thebirth–death process(with just births and zero deaths).[59][60]More complicated processes with theMarkov property, such asMarkov arrival processes, have been defined where the Poisson process is a special case.[45] If the homogeneous Poisson process is considered just on the half-line[0,∞){\textstyle [0,\infty )}, which can be the case whent{\textstyle t}represents time[29]then the resulting process is not truly invariant under translation.[52]In that case the Poisson process is no longer stationary, according to some definitions of stationarity.[25] There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring. 
It has a fundamental role inqueueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena.[15][45]For example, customers arriving and being served or phone calls arriving at a phone exchange can be both studied with techniques from queueing theory. The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points.[61][62]This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as arenewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane.[63] Aspatial Poisson processis a Poisson point process defined in the planeR2{\displaystyle \textstyle \mathbb {R} ^{2}}.[57][64]For its mathematical definition, one first considers a bounded, open or closed (or more precisely,Borel measurable) regionB{\textstyle B}of the plane. The number of points of a point processN{\displaystyle \textstyle N}existing in this regionB⊂R2{\displaystyle \textstyle B\subset \mathbb {R} ^{2}}is a random variable, denoted byN(B){\displaystyle \textstyle N(B)}. If the points belong to a homogeneous Poisson process with parameterλ>0{\displaystyle \textstyle \lambda >0}, then the probability ofn{\displaystyle \textstyle n}points existing inB{\displaystyle \textstyle B}is given by: where|B|{\displaystyle \textstyle |B|}denotes the area ofB{\displaystyle \textstyle B}. For some finite integerk≥1{\displaystyle \textstyle k\geq 1}, we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) setsB1,…,Bk{\displaystyle \textstyle B_{1},\dots ,B_{k}}. 
The number of points of the point process N existing in B_i can be written as N(B_i). Then the homogeneous Poisson point process with parameter λ > 0 has the finite-dimensional distribution:[65] Pr{N(B_1) = n_1, …, N(B_k) = n_k} = ∏_{i=1}^{k} (λ|B_i|)^(n_i) e^(−λ|B_i|) / n_i!. The spatial Poisson point process features prominently in spatial statistics,[21][22] stochastic geometry, and continuum percolation theory.[23] This point process is applied in various physical sciences, for example, in a model developed for the detection of alpha particles. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks.[17][18][19] For example, models for cellular or mobile phone networks have been developed where it is assumed the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process. The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high dimensional) volume. For some bounded region B of Euclidean space R^d, if the points form a homogeneous Poisson process with parameter λ > 0, then the probability of n points existing in B ⊂ R^d is given by: Pr{N(B) = n} = (λ|B|)^n e^(−λ|B|) / n!, where |B| now denotes the d-dimensional volume of B. Furthermore, for a collection of disjoint, bounded Borel sets B_1, …, B_k ⊂ R^d, let N(B_i) denote the number of points of N existing in B_i.
Then the corresponding homogeneous Poisson point process with parameter λ > 0 has the finite-dimensional distribution:[67] Pr{N(B_1) = n_1, …, N(B_k) = n_k} = ∏_{i=1}^{k} (λ|B_i|)^(n_i) e^(−λ|B_i|) / n_i!. Homogeneous Poisson point processes do not depend on the position of the underlying space through the parameter λ, which implies that the process is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process.[25] Similarly to the one-dimensional case, if the homogeneous point process is restricted to some bounded subset of R^d, then, depending on some definitions of stationarity, the process is no longer stationary.[25][52] If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval (a, b] where a ≤ b, then its location will be a uniform random variable defined on that interval.[65] Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in Cartesian coordinates, but not in, for example, polar coordinates.[68][69] The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined.
For Euclidean space R^d, this is achieved by introducing a locally integrable positive function λ: R^d → [0, ∞), such that for every bounded region B the (d-dimensional) volume integral of λ(x) over region B is finite. In other words, if this integral, denoted by Λ(B), is:[43] Λ(B) = ∫_B λ(x) dx < ∞, where dx is a (d-dimensional) volume element,[c] then for every collection of disjoint bounded Borel measurable sets B_1, …, B_k, an inhomogeneous Poisson process with (intensity) function λ(x) has the finite-dimensional distribution:[67] Pr{N(B_1) = n_1, …, N(B_k) = n_k} = ∏_{i=1}^{k} Λ(B_i)^(n_i) e^(−Λ(B_i)) / n_i!. Furthermore, Λ(B) has the interpretation of being the expected number of points of the Poisson process located in the bounded region B, namely Λ(B) = E[N(B)]. On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbers a and b, where a ≤ b, denote by N(a, b] the number of points of an inhomogeneous Poisson process with intensity function λ(t) occurring in the interval (a, b]. The probability of n points existing in the above interval (a, b] is given by: Pr{N(a, b] = n} = [Λ(a, b)]^n e^(−Λ(a, b)) / n!, where the mean or intensity measure is: Λ(a, b) = ∫_a^b λ(t) dt, which means that the random variable N(a, b] is a Poisson random variable with mean E[N(a, b]] = Λ(a, b).
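The interval mean measure Λ(a, b) can be approximated numerically when the intensity has no convenient antiderivative. A minimal sketch; the function names and the choice λ(t) = t (for which Λ(0, 1) = 1/2 exactly) are illustrative.

```python
import math

def mean_measure(intensity, a, b, steps=100_000):
    """Approximate Lambda(a, b) = integral of intensity(t) over (a, b]
    with a midpoint rule."""
    h = (b - a) / steps
    return h * sum(intensity(a + (i + 0.5) * h) for i in range(steps))

def poisson_pmf(mean, n):
    """P(N(a, b] = n) once the mean measure Lambda(a, b) is known."""
    return mean ** n * math.exp(-mean) / math.factorial(n)

# With the illustrative intensity lambda(t) = t, Lambda(0, 1) = 1/2 exactly.
mean = mean_measure(lambda t: t, 0.0, 1.0)
p_two = poisson_pmf(mean, 2)  # probability of exactly two points in (0, 1]
```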
A feature of the one-dimensional setting is that an inhomogeneous Poisson process can be transformed into a homogeneous one by a monotone transformation or mapping, which is achieved with the inverse of Λ.[70][71] The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as {N(t), t ≥ 0}, represents the total number of occurrences or events that have happened up to and including time t. A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties:[32][72] where o(h) is asymptotic or little-o notation for o(h)/h → 0 as h → 0. In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies:[73] Pr{N(t + h) − N(t) ≥ 2} = o(h^2). The above properties imply that N(t + h) − N(t) is a Poisson random variable with the parameter (or mean) Λ(t, t + h) = ∫_t^(t+h) λ(s) ds, which implies Pr{N(t + h) − N(t) = n} = [Λ(t, t + h)]^n e^(−Λ(t, t + h)) / n!. An inhomogeneous Poisson process defined in the plane R^2 is called a spatial Poisson process.[16] It is defined with an intensity function, and its intensity measure is obtained by performing a surface integral of its intensity function over some region.[20][74] For example, for an intensity function λ(x, y) (as a function of Cartesian coordinates x and y), the corresponding intensity measure is given by the surface integral Λ(B) = ∫∫_B λ(x, y) dx dy, where B is some bounded region in the plane R^2.
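The one-dimensional time change mentioned above (mapping a unit-rate homogeneous process through the inverse of the monotone mean measure Λ) can be run in the other direction to simulate an inhomogeneous process. This is a sketch under an illustrative choice of intensity: λ(t) = 2t on (0, 3], so Λ(t) = t², Λ(3) = 9, and the inverse map is the square root.

```python
import math
import random

def inhomogeneous_by_inversion(Lambda_inv, total_mass, rng):
    """Simulate an inhomogeneous Poisson process on (0, T] by generating a
    unit-rate homogeneous process on (0, Lambda(T)] and mapping each point
    through the inverse of the (monotone) mean measure Lambda."""
    points = []
    s = rng.expovariate(1.0)
    while s <= total_mass:
        points.append(Lambda_inv(s))
        s += rng.expovariate(1.0)
    return points

# Illustrative case: lambda(t) = 2t on (0, 3], so Lambda(t) = t**2,
# Lambda(3) = 9, and Lambda's inverse is math.sqrt.
rng = random.Random(7)
points = inhomogeneous_by_inversion(math.sqrt, 9.0, rng)
```

Because the inverse map is monotone, the transformed points remain ordered in time.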
In the plane, Λ(B) corresponds to a surface integral while in R^d the integral becomes a (d-dimensional) volume integral. When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and in queueing theory.[72][75] Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include: In the plane, the Poisson point process is important in the related disciplines of stochastic geometry[1][33] and spatial statistics.[21][22] The intensity measure of this point process is dependent on the location of the underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density.[20] This process has been used in various disciplines and uses include the study of salmon and sea lice in the oceans,[78] forestry,[6] and search problems.[79] The Poisson intensity function λ(x) has an interpretation, considered intuitive,[20] with the volume element dx in the infinitesimal sense: λ(x) dx is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume dx located at x.[20] For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width δ is approximately λδ. In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived.[80][41][81] If a Poisson point process has an intensity measure that is locally finite and diffuse (or non-atomic), then it is a simple point process.
For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space.[82][18][83] Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both steps depend on the specific Poisson point process that is being simulated.[84][85] The number of points N in the window, denoted here by W, needs to be simulated, which is done by using a (pseudo-)random number generating function capable of simulating Poisson random variables. For the homogeneous case with the constant λ, the mean of the Poisson random variable N is set to λ|W|, where |W| is the length, area or (d-dimensional) volume of W. For the inhomogeneous case, λ|W| is replaced with the (d-dimensional) volume integral Λ(W) = ∫_W λ(x) dx. The second stage requires randomly placing the N points in the window W. For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval W. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window W.
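The two simulation steps just described, a Poisson count with mean λ|W| followed by uniform independent placement, can be sketched for a rectangular window. The window dimensions and parameter values are illustrative; the count is sampled with Knuth's multiplication method, one simple choice of Poisson sampler (adequate for moderate means).

```python
import math
import random

def poisson_count(mean, rng):
    """Sample a Poisson random variable via Knuth's multiplication method
    (fine for moderate means; large means need a different sampler)."""
    limit = math.exp(-mean)
    n, p = 0, 1.0
    while p > limit:
        n += 1
        p *= rng.random()
    return n - 1

def simulate_window(lam, width, height, rng):
    """Step 1: Poisson number of points with mean lam * |W|.
    Step 2: place each point uniformly and independently in the window W."""
    n = poisson_count(lam * width * height, rng)
    return [(rng.uniform(0.0, width), rng.uniform(0.0, height))
            for _ in range(n)]

rng = random.Random(0)
pattern = simulate_window(lam=5.0, width=2.0, height=1.0, rng=rng)
```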
If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in W, and a suitable change of coordinates (from Cartesian) is needed.[84] For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function λ(x).[84] If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates r and θ), implying it is rotationally invariant, that is, independent of θ but dependent on r, by a change of variable in r if the intensity function is sufficiently simple.[84] For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio:[86] λ(x_i)/Λ(W), where x_i is the point under consideration for acceptance or rejection. That is, a location is uniformly randomly selected for consideration; then, to determine whether to place a sample at that location, a uniformly randomly drawn number in [0, 1] is compared to the probability density function λ(x)/Λ(W), accepting if it is smaller than the probability density function, and repeating until the previously chosen number of samples have been drawn.
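A closely related and widely used variant of the acceptance-rejection idea is Lewis–Shedler-style thinning: generate a homogeneous process whose rate dominates the intensity on the window, then accept each candidate point with probability λ(x)/λ_max. This is a sketch, not the exact scheme described above; the intensity, window, and λ_max are illustrative.

```python
import math
import random

def poisson_count(mean, rng):
    """Knuth-style Poisson sampler for the candidate count."""
    limit = math.exp(-mean)
    n, p = 0, 1.0
    while p > limit:
        n += 1
        p *= rng.random()
    return n - 1

def simulate_inhomogeneous(intensity, lam_max, width, height, rng):
    """Thinning: generate a homogeneous process with rate lam_max (an upper
    bound on the intensity over the window), then accept each candidate
    (x, y) independently with probability intensity(x, y) / lam_max."""
    n = poisson_count(lam_max * width * height, rng)
    accepted = []
    for _ in range(n):
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        if rng.random() < intensity(x, y) / lam_max:
            accepted.append((x, y))
    return accepted

# Illustrative intensity increasing in x, bounded by lam_max = 6 on the window.
rng = random.Random(1)
accepted = simulate_inhomogeneous(lambda x, y: 3.0 * x, 6.0, 2.0, 1.0, rng)
```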
In measure theory, the Poisson point process can be further generalized to what is sometimes known as the general Poisson point process[20][87] or general Poisson process[74] by using a Radon measure Λ, which is a locally finite measure. In general, this Radon measure Λ can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space. In this situation, the number of points at x is a Poisson random variable with mean Λ({x}).[87] But sometimes the opposite is assumed, so the Radon measure Λ is diffuse or non-atomic.[20] A point process N is a general Poisson point process with intensity Λ if it has the following two properties:[20] The Radon measure Λ maintains its previous interpretation of being the expected number of points of N located in the bounded region B, namely Λ(B) = E[N(B)]. Furthermore, if Λ is absolutely continuous such that it has a density (which is the Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets B it can be written as: Λ(B) = ∫_B λ(x) dx, where the density λ(x) is known, among other terms, as the intensity function. Despite its name, the Poisson point process was neither discovered nor studied by its namesake.
It is cited as an example of Stigler's law of eponymy.[2][3] The name arises from the process's inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution.[88] It describes the probability distribution of the sum of n Bernoulli trials with probability p, often likened to the number of heads (or tails) after n biased coin flips with the probability of a head (or tail) occurring being p. For some positive constant Λ > 0, as n increases towards infinity and p decreases towards zero such that the product np = Λ is fixed, the Poisson distribution more closely approximates that of the binomial.[89] Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of p (to zero) and n (to infinity). It only appears once in all of Poisson's work,[90] and the result was not well known during his time. Over the following years others used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe.[91][2] At the end of the 19th century, Ladislaus Bortkiewicz studied the distribution, citing Poisson, using real data on the number of deaths from horse kicks in the Prussian army.[88][92] There are a number of claims for early uses or discoveries of the Poisson point process.[2][3] For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability of a star being within a certain region of another star under the erroneous assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution.
This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860.[3] At the beginning of the 20th century the Poisson process (in one dimension) would arise independently in different situations.[2][3] In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, in which he proposed to model insurance claims with a homogeneous Poisson process.[93][94] In Denmark, A. K. Erlang derived the Poisson distribution in 1909 when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was unaware of Poisson's earlier work and assumed that the numbers of phone calls arriving in different intervals of time were independent of each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.[2] In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, though the solution had been derived earlier, resulting in the independent discovery of the Poisson process.[2] The years after 1909 led to a number of studies and applications of the Poisson point process; however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences.
The early results were published in different languages and in different settings, with no standard terminology and notation used.[2] For example, in 1922 Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process to study how plants are distributed in plant communities.[95] A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin,[2] among others.[96] In the field of teletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes.[97] The Swede Conny Palm in his 1943 dissertation studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time.[98][97] His work contains the first known recorded use of the term point process, as Punktprozesse in German.[98][3] It is believed[2] that William Feller was the first in print to refer to it as the Poisson process in a 1940 paper.
Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation,[3] in which Feller was acknowledged as an influence,[99] it has been claimed that Feller coined the term before 1940.[89] It has been remarked that both Feller and Lundberg used the term as though it were well known, implying it was already in spoken use by then.[3] Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book of his finished in 1936, but did in subsequent editions, which has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at Stockholm University.[3] The terminology of point process theory in general has been criticized for being too varied.[3] In addition to the word point often being omitted,[63][27] the homogeneous Poisson (point) process is also called a stationary Poisson (point) process,[47] as well as a uniform Poisson (point) process.[42] The inhomogeneous Poisson point process, as well as being called nonhomogeneous,[47] is also referred to as the non-stationary Poisson process.[72][100] The term point process has been criticized, as the term process can suggest something evolving over time and space, so random point field is sometimes used instead,[101] resulting in the terms Poisson random point field or Poisson point field also being used.[102] A point process is considered, and sometimes called, a random counting measure,[103] hence the Poisson point process is also referred to as a Poisson random measure,[104] a term used in the study of Lévy processes,[104][105] but some choose to use the two terms for Poisson point processes defined on two different underlying spaces.[106] The underlying mathematical space of the Poisson point process is called a carrier space,[107][108] or state space, though the latter term has a different meaning in the context of stochastic processes.
In the context of point processes, the term "state space" can mean the space on which the point process is defined, such as the real line,[109][110] which corresponds to the index set[111] or parameter set[112] in stochastic process terminology. The measure Λ is called the intensity measure,[113] mean measure,[36] or parameter measure,[67] as there are no standard terms.[36] If Λ has a derivative or density, denoted by λ(x), then λ(x) is called the intensity function of the Poisson point process.[20] For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant λ > 0, which can be referred to as the rate, usually when the underlying space is the real line, or the intensity.[42] It is also called the mean rate or the mean density,[114] or simply the rate.[32] For λ = 1, the corresponding process is sometimes referred to as the standard Poisson (point) process.[43][57][115] The extent of the Poisson point process is sometimes called the exposure.[116][117] The notation of the Poisson point process depends on its setting and the field it is being applied in. For example, on the real line, the Poisson process, both homogeneous and inhomogeneous, is sometimes interpreted as a counting process, and the notation {N(t), t ≥ 0} is used to represent the Poisson process.[29][32] Another reason for varying notation is due to the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation x ∈ N, implying that x is a random point belonging to or being an element of the Poisson point process N.
Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process N being found or located in some (Borel measurable) region B as N(B), which is a random variable. These different interpretations result in notation being used from mathematical fields such as measure theory and set theory.[118] For general point processes, sometimes a subscript on the point symbol, for example x, is included so one writes (with set notation) x_i ∈ N instead of x ∈ N, and x can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points.[18] Sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process, so, for example, the point x or x_i belongs to or is a point of the point process X, and this can be written with set notation as x ∈ X or x_i ∈ X.[110] Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process N defined on the Euclidean state space R^d and a (measurable) function f on R^d, the expression ∫_{R^d} f(x) N(dx) = ∑_{x ∈ N} f(x) demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)).
More specifically, the integral notation on the left-hand side interprets the point process as a random counting measure, while the sum on the right-hand side suggests a random set interpretation.[118] In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable, can be used to uniquely identify or characterize random variables and prove results like the central limit theorem.[119] In the theory of point processes there are analogous mathematical tools, which usually take the form of measures and functionals instead of moments and functions, respectively.[120][121] For a Poisson point process N with intensity measure Λ on some space X, the Laplace functional is given by:[18] L_N(f) = E[exp(−∑_{x ∈ N} f(x))] = exp(−∫_X (1 − e^(−f(x))) Λ(dx)). One version of Campbell's theorem involves the Laplace functional of the Poisson point process. The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function v on R^d such that 0 ≤ v(x) ≤ 1. For a point process N the probability generating functional is defined as:[122] G(v) = E[∏_{x ∈ N} v(x)], where the product is performed over all the points in N. If the intensity measure Λ of N is locally finite, then G is well-defined for any such measurable function v on R^d.
For a Poisson point process with intensity measure Λ, the generating functional is given by: G(v) = exp(−∫ [1 − v(x)] Λ(dx)), which in the homogeneous case reduces to G(v) = exp(−λ ∫ [1 − v(x)] dx). For a general Poisson point process with intensity measure Λ, the first moment measure is its intensity measure:[18][19] E[N(B)] = Λ(B), which for a homogeneous Poisson point process with constant intensity λ means: E[N(B)] = λ|B|, where |B| is the length, area or volume (or more generally, the Lebesgue measure) of B. The Mecke equation characterizes the Poisson point process. Let N_σ be the space of all σ-finite measures on some general space Q. A point process η with intensity λ on Q is a Poisson point process if and only if for all measurable functions f: Q × N_σ → R_+ the following holds: E[∫ f(x, η) η(dx)] = ∫ E[f(x, η + δ_x)] λ(dx). For further details see.[123] For a general Poisson point process with intensity measure Λ, the n-th factorial moment measure is given by the expression:[124] M^(n)(B_1 × ⋯ × B_n) = ∏_{i=1}^{n} Λ(B_i), where Λ is the intensity measure or first moment measure of N, which for some Borel set B is given by Λ(B) = E[N(B)]. For a homogeneous Poisson point process the n-th factorial moment measure is simply:[18][19] λ^n ∏_{i=1}^{n} |B_i|, where |B_i| is the length, area, or volume (or more generally, the Lebesgue measure) of B_i.
Furthermore, the n-th factorial moment density is:[124] λ(x_1) ⋯ λ(x_n). The avoidance function[69] or void probability[118] v of a point process N is defined in relation to some set B, which is a subset of the underlying space R^d, as the probability of no points of N existing in B. More precisely,[125] for a test set B, the avoidance function is given by: v(B) = Pr{N(B) = 0}. For a general Poisson point process N with intensity measure Λ, its avoidance function is given by: v(B) = e^(−Λ(B)). Simple point processes are completely characterized by their void probabilities.[126] In other words, complete information about a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point processes. The case for the Poisson process is sometimes known as Rényi's theorem, which is named after Alfréd Rényi who discovered the result for the case of a homogeneous point process in one dimension.[127] In one form,[127] Rényi's theorem says that for a diffuse (or non-atomic) Radon measure Λ on R^d and a set A that is a finite union of rectangles (so not Borel[d]), if N is a countable subset of R^d such that: Pr{N(A) = 0} = e^(−Λ(A)), then N is a Poisson point process with intensity measure Λ. Mathematical operations can be performed on point processes to get new point processes and develop new mathematical models for the locations of certain objects.
One example of an operation is known as thinning, which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process).[129] For the Poisson process, the independent p(x)-thinning operation results in another Poisson point process. More specifically, a p(x)-thinning operation applied to a Poisson point process with intensity measure Λ gives a point process of removed points that is also a Poisson point process N_p with intensity measure Λ_p, which for a bounded Borel set B is given by: Λ_p(B) = ∫_B p(x) λ(x) dx. This thinning result of the Poisson point process is sometimes known as Prekopa's theorem.[130] Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure Λ_{1−p}(B) = ∫_B [1 − p(x)] λ(x) dx. The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other.[129] In other words, if a region is known to contain n kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting[131][132] the Poisson point process. If there is a countable collection of point processes N_1, N_2, …, then their superposition, or, in set theory language, their union, which is[133] N = ⋃_i N_i, also forms a point process. In other words, any points located in any of the point processes N_1, N_2, … will also be located in the superposition of these point processes N.
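The splitting operation can be illustrated with a constant removal probability p: each point is independently assigned to the "removed" or "kept" pattern. A minimal sketch; the stand-in point pattern and names are illustrative.

```python
import random

def split(points, p_remove, rng):
    """Independent thinning: each point is removed with probability p_remove.
    When the input is a Poisson process, the removed and kept points form
    two independent Poisson processes (the splitting property)."""
    removed, kept = [], []
    for pt in points:
        (removed if rng.random() < p_remove else kept).append(pt)
    return removed, kept

rng = random.Random(3)
sample = [(rng.random(), rng.random()) for _ in range(50)]  # stand-in pattern
removed, kept = split(sample, p_remove=0.3, rng=rng)
```

Every original point lands in exactly one of the two output patterns, and for a Poisson input the two patterns are independent of each other.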
The superposition theorem of the Poisson point process says that the superposition of independent Poisson point processes N_1, N_2, … with mean measures Λ_1, Λ_2, … will also be a Poisson point process with mean measure[134][89]

Λ = ∑_{i=1}^{∞} Λ_i.

In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a point x is sampled from the union of n Poisson processes, then the probability that the point x belongs to the j-th Poisson process N_j is given by:

P(x ∈ N_j) = Λ_j / ∑_{i=1}^{n} Λ_i.

For two homogeneous Poisson processes with intensities λ_1, λ_2, the two previous expressions reduce to

λ = λ_1 + λ_2

and

P(x ∈ N_j) = λ_j / (λ_1 + λ_2).

The operation of clustering is performed when each point x of some point process N is replaced by another (possibly different) point process. If the original process N is a Poisson point process, then the resulting process N_c is called a Poisson cluster point process.

A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement[135] or translation.[136] The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem,[135] which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process. One version of the displacement theorem[135] involves a Poisson point process N on R^d with intensity function λ(x).
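The superposition theorem for two homogeneous processes can be sketched as follows (hypothetical helper names; pooling two independent patterns on the unit square gives a Poisson pattern of intensity λ_1 + λ_2):

```python
import math
import random

def sample_poisson_points(rate, rng):
    # Homogeneous Poisson process on the unit square: Poisson count, uniform locations.
    limit, k, prod = math.exp(-rate), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return [(rng.random(), rng.random()) for _ in range(k)]

rng = random.Random(2)
lam1, lam2 = 3.0, 7.0
n1 = sample_poisson_points(lam1, rng)
n2 = sample_poisson_points(lam2, rng)
superposition = n1 + n2  # union of the two independent point patterns
# By the superposition theorem this pooled pattern is Poisson with intensity
# lam1 + lam2, and a point of it belongs to the first process with
# probability lam1 / (lam1 + lam2).
```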
It is then assumed that the points of N are randomly displaced somewhere else in R^d, so that each point's displacement is independent and the displacement of a point formerly at x is a random vector with a probability density ρ(x, ·).[e] Then the new point process N_D is also a Poisson point process with intensity function

λ_D(y) = ∫_{R^d} λ(x) ρ(x, y) dx.

If the Poisson process is homogeneous with λ(x) = λ > 0 and if ρ(x, y) is a function of y − x, then

λ_D(y) = λ.

In other words, after each random and independent displacement of points, the original Poisson point process still exists.

The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space R^d to another Euclidean space R^{d′}, where d′ ≥ 1 is not necessarily equal to d.[18]

Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space.[137]

If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as the mapping theorem.[137][138] The theorem involves some Poisson point process with mean measure Λ on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure Λ′.
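A minimal sketch of the displacement theorem in the homogeneous case (hypothetical names): each point is shifted by an independent Gaussian vector, so the displacement density ρ(x, y) depends only on y − x, and on the whole plane the displaced pattern is again Poisson with the same intensity:

```python
import math
import random

def sample_poisson_points(rate, rng):
    # Homogeneous Poisson process on the unit square: Poisson count, uniform locations.
    limit, k, prod = math.exp(-rate), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return [(rng.random(), rng.random()) for _ in range(k)]

def displace(points, sigma, rng):
    # Independent Gaussian displacement of each point; for a homogeneous
    # process on the whole plane this yields another Poisson process with
    # the same intensity (the displacement theorem).
    return [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
            for (x, y) in points]

rng = random.Random(3)
points = sample_poisson_points(20.0, rng)
displaced = displace(points, 0.1, rng)
```

Displacement moves points without creating or destroying them, so the two patterns always have the same number of points.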
More specifically, one can consider a (Borel measurable) function f that maps a point process N with intensity measure Λ from one space S to another space T in such a manner that the new point process N′ has the intensity measure:

Λ′(B) = Λ(f^{−1}(B))

with no atoms, where B is a Borel set and f^{−1} denotes the inverse of the function f. If N is a Poisson point process, then the new process N′ is also a Poisson point process with the intensity measure Λ′.

The tractability of the Poisson process means that sometimes it is convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process.[139] There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics.[140]

One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic.[141] The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, which are considered rare or unlikely, of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used.
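The mapping theorem can be sketched with a measurable bijection (hypothetical names; here f is a simple affine map from the unit square to the rectangle [0, 2] × [0, 3], so the image process is Poisson with mean measure Λ′(B) = Λ(f^{−1}(B))):

```python
import math
import random

def sample_poisson_points(rate, rng):
    # Homogeneous Poisson process on the unit square: Poisson count, uniform locations.
    limit, k, prod = math.exp(-rate), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return [(rng.random(), rng.random()) for _ in range(k)]

def f(point):
    # A measurable (here, affine and bijective) map from [0,1]^2 to [0,2] x [0,3].
    x, y = point
    return (2.0 * x, 3.0 * y)

rng = random.Random(4)
points = sample_poisson_points(10.0, rng)
mapped = [f(p) for p in points]
# The mapped pattern is Poisson with mean measure Lambda'(B) = Lambda(f^{-1}(B));
# since f is a bijection, the total number of points is unchanged.
```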
When the events are not independent but tend to occur in clusters or clumps, then if these clumps are suitably defined such that they are approximately independent of each other, the number of clumps occurring will be close to a Poisson random variable[140] and the locations of the clumps will be close to a Poisson process.[141]

Stein's method is a mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which quantify how much two random mathematical objects differ stochastically.[139][142] Upper bounds on probability metrics such as total variation and Wasserstein distance have been derived.[139]

Researchers have applied Stein's method to Poisson point processes in a number of ways,[139] such as using Palm calculus.[108] Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition.[143][144] Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure.[139]

In general, when an operation is applied to a general point process the resulting process is usually not a Poisson point process. For example, if a point process other than a Poisson has its points randomly and independently displaced, then the resulting process would not necessarily be a Poisson point process.
However, under certain mathematical conditions for both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process.[145]

Similar convergence results have been developed for thinning and superposition operations,[145] showing that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin[f] equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin,[146] and help explain why the Poisson process can often be used as a mathematical model of various random phenomena.[145]

The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena.

The Poisson-type random measures (PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of the mixed binomial process and share the distributional self-similarity property of the Poisson random measure. They are the only members of the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution.
The Poisson random measure is independent on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances. The PT random measures, which include the Poisson random measure, negative binomial random measure, and binomial random measure, are discussed in the literature.[147]

For mathematical models the Poisson point process is often defined in Euclidean space,[1][36] but it has been generalized to more abstract spaces and plays a fundamental role in the study of random measures,[148][149] which requires an understanding of mathematical fields such as probability theory, measure theory and topology.[150]

In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics.[151] Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures.[115] In this context, the Poisson and other point processes have been studied on locally compact second countable Hausdorff spaces.[152]

A Cox point process, Cox process or doubly stochastic Poisson process is a generalization of the Poisson point process obtained by allowing its intensity measure Λ to also be random, independent of the underlying Poisson process. The process is named after David Cox, who introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille.[3] The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process.[153] More generally, the intensity measure is a realization of a non-negative locally finite random measure.
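A minimal Cox-process sketch (hypothetical helper names; a simple mixed Poisson process is assumed rather than a full log Gaussian Cox process): the intensity is itself drawn at random, here exponentially distributed, and the points are then generated as an ordinary Poisson process conditional on that intensity:

```python
import math
import random

def sample_poisson_points(rate, rng):
    # Homogeneous Poisson process on the unit square: Poisson count, uniform locations.
    limit, k, prod = math.exp(-rate), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return [(rng.random(), rng.random()) for _ in range(k)]

def sample_cox(mean_rate, rng):
    # Doubly stochastic: first draw a random intensity, then, conditional on
    # it, sample an ordinary homogeneous Poisson process on the unit square.
    lam = rng.expovariate(1.0 / mean_rate)  # random intensity with mean mean_rate
    return lam, sample_poisson_points(lam, rng)

rng = random.Random(5)
lam, points = sample_cox(10.0, rng)
```

Because the intensity varies from realization to realization, the point counts are over-dispersed relative to a Poisson process with fixed intensity, which is the clustering effect described above.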
Cox point processes exhibit a clustering of points, which can be shown mathematically to be larger than that of Poisson point processes. The generality and tractability of Cox processes have resulted in them being used as models in fields such as spatial statistics[154] and wireless networks.[19]

For a given point process, each random point can have a random mathematical object, known as a mark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes.[155][156] The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process.[157] It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space.[158] If the underlying point process is a Poisson point process, then the resulting point process is a marked Poisson point process.[159]

If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the marking theorem[158][160] states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes.

The compound Poisson point process or compound Poisson process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables.
In other words, for each point of the original Poisson process there is an independent and identically distributed non-negative random variable, and the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space.[161]

If there is a marked Poisson point process formed from a Poisson point process N (defined on, for example, R^d) and a collection of independent and identically distributed non-negative marks {M_i} such that for each point x_i of the Poisson process N there is a non-negative random variable M_i, the resulting compound Poisson process is then:[162]

C(B) = ∑_{i} M_i 1{x_i ∈ B},

where B ⊂ R^d is a Borel measurable set. If the general random variables {M_i} take values in, for example, d-dimensional Euclidean space R^d, the resulting compound Poisson process is an example of a Lévy process, provided that it is formed from a homogeneous Poisson point process N defined on the non-negative numbers [0, ∞).[163]

The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences, and the model outperformed nine other stochastic processes in fitting eight real-world failure datasets,[164] where model performance was measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
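The compound Poisson construction C(B) = Σ M_i 1{x_i ∈ B} described above can be sketched directly (hypothetical helper names; a homogeneous process on the unit square with i.i.d. exponential marks is assumed):

```python
import math
import random

def sample_poisson_points(rate, rng):
    # Homogeneous Poisson process on the unit square: Poisson count, uniform locations.
    limit, k, prod = math.exp(-rate), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return [(rng.random(), rng.random()) for _ in range(k)]

rng = random.Random(6)
points = sample_poisson_points(30.0, rng)
marks = [rng.expovariate(1.0) for _ in points]  # i.i.d. non-negative marks M_i

def compound(region, points, marks):
    # C(B): sum of marks M_i over the points x_i falling in the region B,
    # where region(x, y) is the indicator function of B.
    return sum(m for (pt, m) in zip(points, marks) if region(*pt))

# Value of the compound process on the left half of the unit square.
left_half = lambda x, y: x < 0.5
value = compound(left_half, points, marks)
```

Evaluating the compound process on the whole square recovers the total mark mass, and on any subregion it returns a non-negative partial sum, matching the definition.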
https://en.wikipedia.org/wiki/Poisson_point_process