Double descent in statistics and machine learning is the phenomenon where a model with a small number of parameters and a model with an extremely large number of parameters both have a small training error, but a model whose number of parameters is about the same as the number of data points used to train the model will have a much greater test error than one with a much larger number of parameters.[2] This phenomenon has been considered surprising, as it contradicts assumptions about overfitting in classical machine learning.[3]

Early observations of what would later be called double descent in specific models date back to 1989.[4][5] The term "double descent" was coined by Belkin et al.[6] in 2019,[3] when the phenomenon gained popularity as a broader concept exhibited by many models.[7][8] The latter development was prompted by a perceived contradiction between the conventional wisdom that too many parameters in the model result in a significant overfitting error (an extrapolation of the bias–variance tradeoff),[9] and the empirical observations in the 2010s that some modern machine learning techniques tend to perform better with larger models.[6][10]

Double descent occurs in linear regression with isotropic Gaussian covariates and isotropic Gaussian noise.[11] A model of double descent at the thermodynamic limit has been analyzed using the replica trick, and the result has been confirmed numerically.[12] The scaling behavior of double descent has been found to follow a broken neural scaling law[13] functional form.
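The interpolation-threshold behavior described above can be illustrated with a small simulation. The sketch below is a hypothetical setup (random ReLU features fitted by minimum-norm least squares), not the exact construction analyzed in the cited works: training error drops to zero once the number of features exceeds the number of training points, while test error is typically worst near that threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: regression on random nonlinear features.
# The model's parameter count equals the number of features p.
n_train, n_test, d = 30, 200, 5
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)
y_te = X_te @ w_true + 0.5 * rng.normal(size=n_test)

def features(X, W):
    # Random ReLU feature map.
    return np.maximum(X @ W, 0.0)

def fit_min_norm(p):
    """Fit the minimum-norm least-squares solution with p random features."""
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    F_tr, F_te = features(X_tr, W), features(X_te, W)
    beta = np.linalg.pinv(F_tr) @ y_tr  # minimum-norm interpolator when p > n
    mse = lambda F, y: float(np.mean((F @ beta - y) ** 2))
    return mse(F_tr, y_tr), mse(F_te, y_te)

for p in [5, 15, 30, 60, 300]:
    tr, te = fit_min_norm(p)
    print(f"p={p:4d}  train MSE={tr:8.4f}  test MSE={te:8.4f}")
```

With `p` well above `n_train`, the minimum-norm solution interpolates the training data exactly, which is the regime where the second descent of test error can appear.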
https://en.wikipedia.org/wiki/Double_descent
In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors)[1] states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances, and have expectation zero.[2] The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator.

The theorem was named after Carl Friedrich Gauss and Andrey Markov, although Gauss' work significantly predates Markov's.[3] While Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above.[4] A further generalization to non-spherical errors was given by Alexander Aitken.[5]

Suppose we are given two random vectors $X, Y \in \mathbb{R}^k$ and that we want to find the best linear estimator of $Y$ given $X$, of the form $\hat{Y} = \alpha X + \mu$, where the parameters $\alpha$ and $\mu$ are both real numbers. Such an estimator $\hat{Y}$ would have the same mean and standard deviation as $Y$, that is, $\mu_{\hat{Y}} = \mu_Y$, $\sigma_{\hat{Y}} = \sigma_Y$.
Therefore, if the vector $X$ has mean and standard deviation $\mu_x, \sigma_x$, the best linear estimator would be

$\hat{Y} = \sigma_y \frac{X - \mu_x}{\sigma_x} + \mu_y,$

since $\hat{Y}$ then has the same mean and standard deviation as $Y$.

Suppose we have, in matrix notation, the linear relationship $y = X\beta + \varepsilon$, expanding to

$y_i = \sum_{j=1}^{K} \beta_j X_{ij} + \varepsilon_i, \qquad i = 1, \dots, n,$

where $\beta_j$ are non-random but unobservable parameters, $X_{ij}$ are non-random and observable (called the "explanatory variables"), $\varepsilon_i$ are random, and so $y_i$ are random. The random variables $\varepsilon_i$ are called the "disturbance", "noise" or simply "error" (to be contrasted with "residual" later in the article; see errors and residuals in statistics). Note that to include a constant in the model above, one can introduce the constant as a variable $\beta_{K+1}$ with a newly introduced last column of $X$ being unity, i.e., $X_{i(K+1)} = 1$ for all $i$. Note that although $y_i$, as sample responses, are observable, the following statements and arguments, including assumptions and proofs, assume only knowledge of $X_{ij}$, but not of $y_i$.

The Gauss–Markov assumptions concern the set of error random variables $\varepsilon_i$:

$\operatorname{E}[\varepsilon_i] = 0$ for all $i$ (mean zero),
$\operatorname{Var}(\varepsilon_i) = \sigma^2 < \infty$ for all $i$ (homoscedasticity),
$\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for $i \neq j$ (no correlation).

A linear estimator of $\beta_j$ is a linear combination $\hat{\beta}_j = c_{1j} y_1 + \cdots + c_{nj} y_n$ in which the coefficients $c_{ij}$ are not allowed to depend on the underlying coefficients $\beta_j$, since those are not observable, but are allowed to depend on the values $X_{ij}$, since these data are observable.
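The moment-matching property of the estimator above can be checked numerically. In this sketch the distribution of $X$ and the target moments $\mu_y, \sigma_y$ are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(1)

# Hypothetical data: X drawn from an arbitrary distribution; the target
# moments mu_y, sigma_y are chosen freely for the illustration.
X = [random.gauss(10.0, 3.0) for _ in range(100_000)]
mu_x, sigma_x = statistics.fmean(X), statistics.pstdev(X)
mu_y, sigma_y = 50.0, 8.0  # desired mean and standard deviation of Y-hat

# The estimator from the text: standardize X, then rescale and shift.
Y_hat = [sigma_y * (x - mu_x) / sigma_x + mu_y for x in X]

print(statistics.fmean(Y_hat))   # matches mu_y
print(statistics.pstdev(Y_hat))  # matches sigma_y
```

Because $\hat{Y}$ is an affine transformation of $X$ built from the empirical $\mu_x, \sigma_x$, its sample mean and standard deviation match the targets exactly (up to floating-point error).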
(The dependence of the coefficients on each $X_{ij}$ is typically nonlinear; the estimator is linear in each $y_i$ and hence in each random $\varepsilon$, which is why this is "linear" regression.) The estimator is said to be unbiased if and only if

$\operatorname{E}[\hat{\beta}_j] = \beta_j$

regardless of the values of $X_{ij}$. Now, let $\sum_{j=1}^K \lambda_j \beta_j$ be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is

$\operatorname{E}\left[\left(\sum_{j=1}^K \lambda_j (\hat{\beta}_j - \beta_j)\right)^2\right];$

in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) The best linear unbiased estimator (BLUE) of the vector $\beta$ of parameters $\beta_j$ is the one with the smallest mean squared error for every vector $\lambda$ of linear-combination parameters. This is equivalent to the condition that

$\operatorname{Var}(\tilde{\beta}) - \operatorname{Var}(\hat{\beta})$

is a positive semi-definite matrix for every other linear unbiased estimator $\tilde{\beta}$.

The ordinary least squares estimator (OLS) is the function

$\hat{\beta} = (X^{\operatorname{T}} X)^{-1} X^{\operatorname{T}} y$

of $y$ and $X$ (where $X^{\operatorname{T}}$ denotes the transpose of $X$) that minimizes the sum of squares of residuals (misprediction amounts):

$\sum_{i=1}^n \left(y_i - \sum_{j=1}^K \hat{\beta}_j X_{ij}\right)^2.$

The theorem now states that the OLS estimator is a best linear unbiased estimator (BLUE). The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination $a_1 y_1 + \cdots + a_n y_n$ whose coefficients do not depend upon the unobservable $\beta$ but whose expected value is always zero.
Proof that the OLS indeed minimizes the sum of squares of residuals may proceed as follows, with a calculation of the Hessian matrix and a proof that it is positive definite.

The MSE function we want to minimize, for a multiple regression model with $p$ variables, is

$f(\beta_0, \beta_1, \dots, \beta_p) = \sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_{i1} - \dots - \beta_p x_{ip})^2.$

The first derivative is

$\frac{d}{d\boldsymbol{\beta}} f = -2 X^{\operatorname{T}} (\mathbf{y} - X\boldsymbol{\beta}) = -2 \begin{bmatrix} \sum_{i=1}^n (y_i - \dots - \beta_p x_{ip}) \\ \sum_{i=1}^n x_{i1}(y_i - \dots - \beta_p x_{ip}) \\ \vdots \\ \sum_{i=1}^n x_{ip}(y_i - \dots - \beta_p x_{ip}) \end{bmatrix} = \mathbf{0}_{p+1},$

where $X$ is the design matrix

$X = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ & & \vdots & \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix} \in \mathbb{R}^{n \times (p+1)}, \qquad n \geq p+1.$

The Hessian matrix of second derivatives is

$\mathcal{H} = 2 \begin{bmatrix} n & \sum_{i=1}^n x_{i1} & \cdots & \sum_{i=1}^n x_{ip} \\ \sum_{i=1}^n x_{i1} & \sum_{i=1}^n x_{i1}^2 & \cdots & \sum_{i=1}^n x_{i1} x_{ip} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^n x_{ip} & \sum_{i=1}^n x_{ip} x_{i1} & \cdots & \sum_{i=1}^n x_{ip}^2 \end{bmatrix} = 2 X^{\operatorname{T}} X.$

Assuming the columns of $X$ are linearly independent, so that $X^{\operatorname{T}} X$ is invertible, let $X = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_{p+1} \end{bmatrix}$; then

$k_1 \mathbf{v}_1 + \dots + k_{p+1} \mathbf{v}_{p+1} = \mathbf{0} \iff k_1 = \dots = k_{p+1} = 0.$

Now let $\mathbf{k} = (k_1, \dots, k_{p+1})^{\operatorname{T}} \in \mathbb{R}^{(p+1) \times 1}$ be an eigenvector of $\mathcal{H}$. Then

$\mathbf{k} \neq \mathbf{0} \implies \left(k_1 \mathbf{v}_1 + \dots + k_{p+1} \mathbf{v}_{p+1}\right)^2 > 0.$

In terms of vector multiplication, this means

$\begin{bmatrix} k_1 & \cdots & k_{p+1} \end{bmatrix} \begin{bmatrix} \mathbf{v}_1^{\operatorname{T}} \\ \vdots \\ \mathbf{v}_{p+1}^{\operatorname{T}} \end{bmatrix} \begin{bmatrix} \mathbf{v}_1 & \cdots & \mathbf{v}_{p+1} \end{bmatrix} \begin{bmatrix} k_1 \\ \vdots \\ k_{p+1} \end{bmatrix} = \mathbf{k}^{\operatorname{T}} \mathcal{H} \mathbf{k} = \lambda \mathbf{k}^{\operatorname{T}} \mathbf{k} > 0,$

where $\lambda$ is the eigenvalue corresponding to $\mathbf{k}$. Moreover,

$\mathbf{k}^{\operatorname{T}} \mathbf{k} = \sum_{i=1}^{p+1} k_i^2 > 0 \implies \lambda > 0.$

Finally, as the eigenvector $\mathbf{k}$ was arbitrary, all eigenvalues of $\mathcal{H}$ are positive, so $\mathcal{H}$ is positive definite. Thus $\boldsymbol{\beta} = \left(X^{\operatorname{T}} X\right)^{-1} X^{\operatorname{T}} Y$ is indeed a global minimum. Alternatively, simply note that for all vectors $\mathbf{v}$, $\mathbf{v}^{\operatorname{T}} X^{\operatorname{T}} X \mathbf{v} = \|X\mathbf{v}\|^2 \geq 0$, so the Hessian is positive definite whenever $X$ has full column rank.

Let $\tilde{\beta} = Cy$ be another linear estimator of $\beta$ with $C = (X^{\operatorname{T}} X)^{-1} X^{\operatorname{T}} + D$, where $D$ is a $K \times n$ non-zero matrix.
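The normal-equations formula and the positive-definiteness of the Hessian can be verified numerically. The data below are an arbitrary illustrative example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small regression problem (values chosen for illustration).
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design matrix with intercept
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# OLS via the normal equations: beta_hat = (X^T X)^{-1} X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The Hessian of the sum of squared residuals is 2 X^T X; all-positive
# eigenvalues confirm it is positive definite, so beta_hat is a global minimum.
H = 2 * X.T @ X
eigvals = np.linalg.eigvalsh(H)

print(beta_hat)
print(eigvals.min() > 0)
```

Using `np.linalg.solve` on the normal equations avoids forming the explicit inverse, which is both cheaper and numerically safer.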
As we are restricting attention to unbiased estimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of $\hat{\beta}$, the OLS estimator. We calculate:

$\operatorname{E}[\tilde{\beta}] = \operatorname{E}[Cy] = \operatorname{E}\left[\left((X^{\operatorname{T}}X)^{-1}X^{\operatorname{T}} + D\right)(X\beta + \varepsilon)\right] = (I_K + DX)\beta.$

Therefore, since $\beta$ is unobservable, $\tilde{\beta}$ is unbiased if and only if $DX = 0$. Then:

$\operatorname{Var}(\tilde{\beta}) = \sigma^2 CC^{\operatorname{T}} = \sigma^2 (X^{\operatorname{T}}X)^{-1} + \sigma^2 DD^{\operatorname{T}} = \operatorname{Var}(\hat{\beta}) + \sigma^2 DD^{\operatorname{T}}.$

Since $DD^{\operatorname{T}}$ is a positive semidefinite matrix, $\operatorname{Var}(\tilde{\beta})$ exceeds $\operatorname{Var}(\hat{\beta})$ by a positive semidefinite matrix.

As stated before, the condition that $\operatorname{Var}(\tilde{\beta}) - \operatorname{Var}(\hat{\beta})$ is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator of $\ell^{\operatorname{T}}\beta$ is $\ell^{\operatorname{T}}\hat{\beta}$ (best in the sense that it has minimum variance). To see this, let $\ell^{\operatorname{T}}\tilde{\beta}$ be another linear unbiased estimator of $\ell^{\operatorname{T}}\beta$. We calculate

$\operatorname{Var}(\ell^{\operatorname{T}}\tilde{\beta}) = \ell^{\operatorname{T}}\operatorname{Var}(\tilde{\beta})\ell = \operatorname{Var}(\ell^{\operatorname{T}}\hat{\beta}) + (D^{\operatorname{T}}\ell)^{\operatorname{T}}(D^{\operatorname{T}}\ell) \geq \operatorname{Var}(\ell^{\operatorname{T}}\hat{\beta}),$

with equality if and only if $D^{\operatorname{T}}\ell = 0$. This proves that equality holds if and only if $\ell^{\operatorname{T}}\tilde{\beta} = \ell^{\operatorname{T}}\hat{\beta}$, which gives the uniqueness of the OLS estimator as a BLUE.

The generalized least squares (GLS) estimator, developed by Aitken,[5] extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix.[6] The Aitken estimator is also a BLUE.

In most treatments of OLS, the regressors (parameters of interest) in the design matrix $\mathbf{X}$ are assumed to be fixed in repeated samples.
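The variance comparison above can be checked numerically by constructing a competing unbiased estimator with $DX = 0$. The construction of $D$ via the projection matrix is an illustrative choice, not part of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 40, 3
X = rng.normal(size=(n, K))
sigma2 = 1.0

# Covariance of the OLS estimator: sigma^2 (X^T X)^{-1}.
XtX_inv = np.linalg.inv(X.T @ X)
var_ols = sigma2 * XtX_inv

# A competing linear unbiased estimator beta~ = C y with
# C = (X^T X)^{-1} X^T + D and D X = 0.  Choosing D = M (I - P),
# where P projects onto the column space of X, guarantees D X = 0.
P = X @ XtX_inv @ X.T
M = rng.normal(size=(K, n))
D = M @ (np.eye(n) - P)

C = XtX_inv @ X.T + D
var_alt = sigma2 * C @ C.T  # Var(beta~) = sigma^2 C C^T

# The excess variance equals sigma^2 D D^T, a positive semidefinite matrix.
diff = var_alt - var_ols
print(np.allclose(diff, sigma2 * D @ D.T))
print(np.linalg.eigvalsh(diff).min() >= -1e-10)
```

Every eigenvalue of the difference is (numerically) nonnegative, exactly as the theorem predicts.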
This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics.[7] Instead, the assumptions of the Gauss–Markov theorem are stated conditional on $\mathbf{X}$.

The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation $y = \beta_0 + \beta_1 x^2$ qualifies as linear, while $y = \beta_0 + \beta_1^2 x$ can be transformed to be linear by replacing $\beta_1^2$ with another parameter, say $\gamma$. An equation with a parameter dependent on an independent variable does not qualify as linear, for example $y = \beta_0 + \beta_1(x) \cdot x$, where $\beta_1(x)$ is a function of $x$.

Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function, often used in economics, is nonlinear:

$Y = A L^{\beta} K^{\alpha} e^{\varepsilon},$

but it can be expressed in linear form by taking the natural logarithm of both sides:[8]

$\ln Y = \ln A + \beta \ln L + \alpha \ln K + \varepsilon.$

This assumption also covers specification issues: it is assumed that the proper functional form has been selected and there are no omitted variables. One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation.
For all $n$ observations, the expectation, conditional on the regressors, of the error term is zero:[9]

$\operatorname{E}[\varepsilon_i \mid \mathbf{X}] = 0,$

where $\mathbf{x}_i = \begin{bmatrix} x_{i1} & x_{i2} & \cdots & x_{ik} \end{bmatrix}^{\operatorname{T}}$ is the data vector of regressors for the $i$th observation, and consequently $\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^{\operatorname{T}} & \mathbf{x}_2^{\operatorname{T}} & \cdots & \mathbf{x}_n^{\operatorname{T}} \end{bmatrix}^{\operatorname{T}}$ is the data matrix or design matrix.

Geometrically, this assumption implies that $\mathbf{x}_i$ and $\varepsilon_i$ are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero. This assumption is violated if the explanatory variables are measured with error, or are endogenous.[10] Endogeneity can be the result of simultaneity, where causality flows back and forth between the dependent and independent variables. Instrumental variable techniques are commonly used to address this problem.

The sample data matrix $\mathbf{X}$ must have full column rank; otherwise $\mathbf{X}^{\operatorname{T}}\mathbf{X}$ is not invertible and the OLS estimator cannot be computed. A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is the "dummy variable trap", when a base dummy variable is not omitted, resulting in perfect correlation between the dummy variables and the constant term.[11]

Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient, but still unbiased, estimate. The estimates will be less precise and highly sensitive to particular sets of data.[12] Multicollinearity can be detected from the condition number or the variance inflation factor, among other tests.

The outer product of the error vector must be spherical:

$\operatorname{E}[\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^{\operatorname{T}} \mid \mathbf{X}] = \sigma^2 \mathbf{I}.$
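The dummy-variable trap and its detection via the matrix rank and condition number can be demonstrated directly. The data here are an arbitrary illustrative example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Dummy-variable trap (illustrative): two complementary dummies plus a constant.
d = (x1 > 0).astype(float)
X_trap = np.column_stack([np.ones(n), d, 1 - d])  # d + (1 - d) = constant column
X_ok   = np.column_stack([np.ones(n), d, x2])

print(np.linalg.matrix_rank(X_trap))  # 2 < 3: perfect multicollinearity
print(np.linalg.matrix_rank(X_ok))    # 3: full column rank
print(np.linalg.cond(X_ok))           # finite, moderate condition number
```

With `X_trap`, the matrix $\mathbf{X}^{\operatorname{T}}\mathbf{X}$ is singular and the OLS estimator cannot be computed; dropping one dummy (or the constant) restores full rank.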
This implies the error term has uniform variance (homoscedasticity) and no serial correlation.[13] If this assumption is violated, OLS is still unbiased, but inefficient. The term "spherical errors" describes the multivariate normal distribution: if $\operatorname{Var}[\boldsymbol{\varepsilon} \mid \mathbf{X}] = \sigma^2 \mathbf{I}$ in the multivariate normal density, then the equation $f(\varepsilon) = c$ is the formula for a ball centered at $\mu$ with radius $\sigma$ in $n$-dimensional space.[14]

Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression of food expenditure on income, the error is correlated with income: low-income people generally spend a similar amount on food, while high-income people may spend a very large amount or as little as low-income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.

This assumption is also violated when there is autocorrelation. Autocorrelation can be visualized on a data plot: a given observation is more likely to lie above the fitted regression line if adjacent observations also lie above it. Autocorrelation is common in time series data, where a data series may experience "inertia" if a dependent variable takes a while to fully absorb a shock. Spatial autocorrelation can also occur: geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification, such as choosing the wrong functional form. In these cases, correcting the specification is one possible way to deal with autocorrelation. When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.[6]
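Aitken's GLS estimator, $\hat{\beta}_{\mathrm{GLS}} = (X^{\operatorname{T}}\Omega^{-1}X)^{-1}X^{\operatorname{T}}\Omega^{-1}y$, can be sketched for a simple heteroskedastic case. The variance profile and the data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Heteroskedastic (non-spherical) errors: Var(eps_i) grows across observations.
variances = np.linspace(0.5, 5.0, n)
beta_true = np.array([1.0, 3.0])
y = X @ beta_true + rng.normal(size=n) * np.sqrt(variances)

# Aitken's GLS: beta = (X^T Omega^{-1} X)^{-1} X^T Omega^{-1} y,
# where Omega is the (here diagonal) error covariance matrix.
Omega_inv = np.diag(1.0 / variances)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

print(beta_gls)  # weights noisy observations down; efficient under Omega
print(beta_ols)  # still unbiased, but inefficient here
```

With a diagonal $\Omega$, GLS reduces to weighted least squares: each observation is weighted by the inverse of its error variance.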
https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem
In machine learning, hyperparameter optimization[1] or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process, and it must be configured before the process starts.[2][3]

Hyperparameter optimization determines the set of hyperparameters that yields an optimal model which minimizes a predefined loss function on a given data set.[4] The objective function takes a set of hyperparameters and returns the associated loss.[4] Cross-validation is often used to estimate this generalization performance, and therefore to choose the set of hyperparameter values that maximizes it.[5]

The traditional method for hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set[6] or evaluation on a held-out validation set.[7]

Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search. For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant $C$ and a kernel hyperparameter $\gamma$. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each. Grid search then trains an SVM with each pair $(C, \gamma)$ in the Cartesian product of these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair).
Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure. Grid search suffers from thecurse of dimensionality, but is oftenembarrassingly parallelbecause the hyperparameter settings it evaluates are typically independent of each other.[5] Random Search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. A benefit over grid search is that random search can explore many more values than grid search could for continuous hyperparameters. It can outperform Grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm.[5]In this case, the optimization problem is said to have a low intrinsic dimensionality.[8]Random Search is alsoembarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. Despite its simplicity, random search remains one of the important base-lines against which to compare the performance of new hyperparameter optimization methods. Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected close to the optimum). 
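Grid search and random search can both be sketched against a toy objective. The score function below is a hypothetical stand-in for a validation score over the SVM's $C$ and $\gamma$ (no actual SVM is trained), and the grids are illustrative:

```python
import itertools
import math
import random

random.seed(0)

# Hypothetical black-box validation score over two continuous hyperparameters
# (a stand-in for the SVM's C and gamma); higher is better.
def score(C, gamma):
    return -((math.log10(C) - 1.5) ** 2 + (math.log10(gamma) + 1.0) ** 2)

# Grid search: evaluate the Cartesian product of manually discretized values.
C_grid = [0.1, 1, 10, 100, 1000]
gamma_grid = [1e-3, 1e-2, 1e-1, 1.0]
best_grid = max(itertools.product(C_grid, gamma_grid), key=lambda cg: score(*cg))

# Random search: sample log-uniformly from the same ranges.
def sample():
    return (10 ** random.uniform(-1, 3), 10 ** random.uniform(-3, 0))

best_rand = max((sample() for _ in range(20)), key=lambda cg: score(*cg))

print("grid best:", best_grid, score(*best_grid))
print("random best:", best_rand, score(*best_rand))
```

Note that random search tries 20 distinct values per continuous hyperparameter, while the grid is limited to its few preselected values; this is the advantage the text describes for low intrinsic dimensionality.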
In practice, Bayesian optimization has been shown[9][10][11][12][13] to obtain better results in fewer evaluations than grid search and random search, due to its ability to reason about the quality of experiments before they are run.

For specific learning algorithms, it is possible to compute the gradient with respect to the hyperparameters and then optimize them using gradient descent. The first usage of these techniques focused on neural networks.[14] Since then, the methods have been extended to other models such as support vector machines[15] and logistic regression.[16]

A different approach to obtaining a gradient with respect to hyperparameters consists in differentiating the steps of an iterative optimization algorithm using automatic differentiation.[17][18][19][20] A more recent work along this direction uses the implicit function theorem to calculate hypergradients and proposes a stable approximation of the inverse Hessian. The method scales to millions of hyperparameters and requires constant memory.[21]

In a different approach,[22] a hypernetwork is trained to approximate the best-response function. One advantage of this method is that it can handle discrete hyperparameters as well. Self-tuning networks[23] offer a memory-efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN[24] has improved this method further by a slight reparameterization of the hypernetwork, which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights.

Apart from hypernetwork approaches, gradient-based methods can also be used to optimize discrete hyperparameters by adopting a continuous relaxation of the parameters.[25] Such methods have been extensively used for the optimization of architecture hyperparameters in neural architecture search.
Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm.[10] Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution. It has been used in hyperparameter optimization for statistical machine learning algorithms,[10] automated machine learning, typical neural network[26] and deep neural network architecture search,[27][28] as well as training of the weights in deep neural networks.[29]

Population Based Training (PBT) learns both hyperparameter values and network weights. Multiple learning processes operate independently, using different hyperparameters. As with evolutionary methods, poorly performing models are iteratively replaced with models that adopt modified hyperparameter values and weights based on the better performers. This warm-starting of the replacement model is the primary differentiator between PBT and other evolutionary methods. PBT thus allows the hyperparameters to evolve and eliminates the need for manual hypertuning. The process makes no assumptions regarding model architecture, loss functions or training procedures.

PBT and its variants are adaptive methods: they update hyperparameters during the training of the models. Non-adaptive methods, by contrast, follow the sub-optimal strategy of assigning a constant set of hyperparameters for the whole training.[30]

A class of early stopping-based hyperparameter optimization algorithms is purpose-built for large search spaces of continuous and discrete hyperparameters, particularly when the computational cost of evaluating the performance of a set of hyperparameters is high.
Irace implements the iterated racing algorithm, which focuses the search around the most promising configurations, using statistical tests to discard those that perform poorly.[31][32] Another early stopping hyperparameter optimization algorithm is successive halving (SHA),[33] which begins as a random search but periodically prunes low-performing models, thereby focusing computational resources on more promising models. Asynchronous successive halving (ASHA)[34] further improves upon SHA's resource utilization profile by removing the need to synchronously evaluate and prune low-performing models. Hyperband[35] is a higher-level early stopping-based algorithm that invokes SHA or ASHA multiple times with varying levels of pruning aggressiveness, in order to be more widely applicable and to require fewer inputs. RBF[36] and spectral[37] approaches have also been developed.

When hyperparameter optimization is done, the set of hyperparameters is often fitted on a training set and selected based on the generalization performance, or score, of a validation set. However, this procedure risks overfitting the hyperparameters to the validation set, so the generalization score of the validation set (which can be several sets in the case of a cross-validation procedure) cannot simultaneously be used to estimate the generalization performance of the final model. To do so, the generalization performance has to be evaluated on a set independent of (disjoint from) the set or sets used to optimize the hyperparameters; otherwise the estimated performance might be too optimistic (too large). This can be done on a second test set, or through an outer cross-validation procedure called nested cross-validation, which allows an unbiased estimation of the generalization performance of the model, taking into account the bias due to the hyperparameter optimization.
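The pruning loop of successive halving can be sketched in a few lines. The evaluation function below is a hypothetical stand-in (each configuration's observed loss approaches an asymptote as its budget grows), not SHA as specified in the cited paper:

```python
import random

random.seed(0)

# Hypothetical proxy for training-and-evaluating a configuration at a budget:
# observed loss = the configuration's asymptotic loss plus noise that shrinks
# as the budget grows.
def evaluate(config, budget):
    return config + random.gauss(0, 1.0 / budget)

# Start as a random search: 16 random configurations (here, each config is
# just its own asymptotic loss, for simplicity).
configs = [random.uniform(0, 1) for _ in range(16)]
budget = 1

# Successive halving: evaluate all survivors, keep the better half,
# and double the per-configuration budget each round.
while len(configs) > 1:
    ranked = sorted(configs, key=lambda c: evaluate(c, budget))
    configs = ranked[: len(configs) // 2]
    budget *= 2

print("selected config:", configs[0])
```

Halving the population while doubling the budget keeps the total cost per round roughly constant, concentrating resources on the most promising configurations.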
https://en.wikipedia.org/wiki/Hyperparameter_optimization
The law of total variance is a fundamental result in probability theory that expresses the variance of a random variable $Y$ in terms of its conditional variances and conditional means given another random variable $X$. Informally, it states that the overall variability of $Y$ can be split into an "unexplained" component (the average of within-group variances) and an "explained" component (the variance of group means). Formally, if $X$ and $Y$ are random variables on the same probability space, and $Y$ has finite variance, then:

$\operatorname{Var}(Y) = \operatorname{E}\left[\operatorname{Var}(Y \mid X)\right] + \operatorname{Var}\left(\operatorname{E}[Y \mid X]\right).$

This identity is also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or colloquially as Eve's law,[1] in parallel to the "Adam's law" naming for the law of total expectation.

In actuarial science (particularly in credibility theory), the two terms $\operatorname{E}[\operatorname{Var}(Y \mid X)]$ and $\operatorname{Var}(\operatorname{E}[Y \mid X])$ are called the expected value of the process variance (EVPV) and the variance of the hypothetical means (VHM), respectively.[2]

Let $Y$ be a random variable and $X$ another random variable on the same probability space. The law of total variance can be understood by noting that the conditional variance $\operatorname{Var}(Y \mid X)$ measures how $Y$ spreads around its group mean $\operatorname{E}[Y \mid X]$, while the group mean itself varies with $X$. Adding these components yields the total variance $\operatorname{Var}(Y)$, mirroring how analysis of variance partitions variation.

Suppose five students take an exam scored 0–100. Let $Y$ be a student's score and let $X$ indicate whether the student is international or domestic. Both groups share the same mean (50), so the explained variance $\operatorname{Var}(\operatorname{E}[Y \mid X])$ is 0, and the total variance equals the average of the within-group variances (weighted by group size), i.e. 800.
Let $X$ be a coin flip taking value Heads with probability $h$ and Tails with probability $1-h$. Given Heads, $Y \sim \mathrm{Normal}(\mu_h, \sigma_h^2)$; given Tails, $Y \sim \mathrm{Normal}(\mu_t, \sigma_t^2)$. Then

$\operatorname{E}[\operatorname{Var}(Y \mid X)] = h\,\sigma_h^2 + (1-h)\,\sigma_t^2,$
$\operatorname{Var}(\operatorname{E}[Y \mid X]) = h\,(1-h)\,(\mu_h - \mu_t)^2,$

so

$\operatorname{Var}(Y) = h\,\sigma_h^2 + (1-h)\,\sigma_t^2 + h\,(1-h)\,(\mu_h - \mu_t)^2.$

Consider a two-stage experiment: first observe $X$, which takes values $i = 1, \dots, 6$, and then flip a coin with success probability $p_i$, letting $Y \in \{0, 1\}$ indicate success. Then

$\operatorname{E}[Y \mid X = i] = p_i, \qquad \operatorname{Var}(Y \mid X = i) = p_i(1 - p_i).$

The overall variance of $Y$ becomes

$\operatorname{Var}(Y) = \operatorname{E}\left[p_X(1 - p_X)\right] + \operatorname{Var}\left(p_X\right),$

with $p_X$ uniform on $\{p_1, \dots, p_6\}$.

Let $(X_i, Y_i)$, $i = 1, \ldots, n$, be observed pairs.
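The coin-flip mixture decomposition can be verified by Monte Carlo simulation. The numeric parameter values below are arbitrary choices for the check:

```python
import random
import statistics

random.seed(0)

# Parameters of the two-component Gaussian mixture from the text
# (the numeric values are chosen arbitrarily for this check).
h, mu_h, sig_h, mu_t, sig_t = 0.3, 5.0, 1.0, -2.0, 2.0

samples = []
for _ in range(200_000):
    if random.random() < h:                      # Heads
        samples.append(random.gauss(mu_h, sig_h))
    else:                                        # Tails
        samples.append(random.gauss(mu_t, sig_t))

empirical = statistics.pvariance(samples)

# Law of total variance: E[Var(Y|X)] + Var(E[Y|X]).
theoretical = (h * sig_h**2 + (1 - h) * sig_t**2) + h * (1 - h) * (mu_h - mu_t)**2

print(round(empirical, 2), round(theoretical, 2))
```

For these parameters the theoretical value is $0.3 \cdot 1 + 0.7 \cdot 4 + 0.21 \cdot 49 = 13.39$, and the empirical variance converges to it as the sample size grows.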
Define $\overline{Y} = \operatorname{E}[Y]$. Then

$\operatorname{Var}(Y) = \frac{1}{n}\sum_{i=1}^n \left(Y_i - \overline{Y}\right)^2 = \frac{1}{n}\sum_{i=1}^n \left[(Y_i - \overline{Y}_{X_i}) + (\overline{Y}_{X_i} - \overline{Y})\right]^2,$

where $\overline{Y}_{X_i} = \operatorname{E}[Y \mid X = X_i]$. Expanding the square and noting that the cross term cancels in summation yields:

$\operatorname{Var}(Y) = \operatorname{E}\left[\operatorname{Var}(Y \mid X)\right] + \operatorname{Var}\left(\operatorname{E}[Y \mid X]\right).$

Alternatively, using $\operatorname{Var}(Y) = \operatorname{E}[Y^2] - \operatorname{E}[Y]^2$ and the law of total expectation:

$\operatorname{E}[Y^2] = \operatorname{E}\left[\operatorname{E}(Y^2 \mid X)\right] = \operatorname{E}\left[\operatorname{Var}(Y \mid X) + \operatorname{E}[Y \mid X]^2\right].$

Subtract $\operatorname{E}[Y]^2 = \left(\operatorname{E}[\operatorname{E}(Y \mid X)]\right)^2$ and regroup to arrive at

$\operatorname{Var}(Y) = \operatorname{E}\left[\operatorname{Var}(Y \mid X)\right] + \operatorname{Var}\left(\operatorname{E}[Y \mid X]\right).$

In a one-way analysis of variance, the total sum of squares (proportional to $\operatorname{Var}(Y)$) is split into a "between-group" sum of squares ($\operatorname{Var}(\operatorname{E}[Y \mid X])$) plus a "within-group" sum of squares ($\operatorname{E}[\operatorname{Var}(Y \mid X)]$).
TheF-testexamines whether the explained component is sufficiently large to indicateXhas a significant effect onY.[3] Inlinear regressionand related models, ifY^=E⁡[Y∣X],{\displaystyle {\hat {Y}}=\operatorname {E} [Y\mid X],}the fraction of variance explained isR2=Var⁡(Y^)Var⁡(Y)=Var⁡(E⁡[Y∣X])Var⁡(Y)=1−E⁡[Var⁡(Y∣X)]Var⁡(Y).{\displaystyle R^{2}={\frac {\operatorname {Var} ({\hat {Y}})}{\operatorname {Var} (Y)}}={\frac {\operatorname {Var} (\operatorname {E} [Y\mid X])}{\operatorname {Var} (Y)}}=1-{\frac {\operatorname {E} [\operatorname {Var} (Y\mid X)]}{\operatorname {Var} (Y)}}.}In the simple linear case (one predictor),R2{\displaystyle R^{2}}also equals the square of thePearson correlation coefficientbetweenXandY. In manyBayesianand ensemble methods, one decomposes prediction uncertainty via the law of total variance. For aBayesian neural networkwith random parametersθ{\displaystyle \theta }:Var⁡(Y)=E⁡[Var⁡(Y∣θ)]+Var⁡(E⁡[Y∣θ]),{\displaystyle \operatorname {Var} (Y)=\operatorname {E} {\bigl [}\operatorname {Var} (Y\mid \theta ){\bigr ]}+\operatorname {Var} {\bigl (}\operatorname {E} [Y\mid \theta ]{\bigr )},}often referred to as “aleatoric” (within-model) vs. 
“epistemic” (between-model) uncertainty.[4] Credibility theoryuses the same partitioning: the expected value of process variance (EVPV),E⁡[Var⁡(Y∣X)],{\displaystyle \operatorname {E} [\operatorname {Var} (Y\mid X)],}and the variance of hypothetical means (VHM),Var⁡(E⁡[Y∣X]).{\displaystyle \operatorname {Var} (\operatorname {E} [Y\mid X]).}The ratio of explained to total variance determines how much “credibility” to give to individual risk classifications.[2] Forjointly Gaussian(X,Y){\displaystyle (X,Y)}, the fractionVar⁡(E⁡[Y∣X])/Var⁡(Y){\displaystyle \operatorname {Var} (\operatorname {E} [Y\mid X])/\operatorname {Var} (Y)}relates directly to themutual informationI(Y;X).{\displaystyle I(Y;X).}[5]In non-Gaussian settings, a high explained-variance ratio still indicates significant information aboutYcontained inX. The law of total variance generalizes to multiple or nested conditionings. For example, with two conditioning variablesX1{\displaystyle X_{1}}andX2{\displaystyle X_{2}}:Var⁡(Y)=E⁡[Var⁡(Y∣X1,X2)]+E⁡[Var⁡(E⁡[Y∣X1,X2]∣X1)]+Var⁡(E⁡[Y∣X1]).{\displaystyle \operatorname {Var} (Y)=\operatorname {E} {\bigl [}\operatorname {Var} (Y\mid X_{1},X_{2}){\bigr ]}+\operatorname {E} {\bigl [}\operatorname {Var} (\operatorname {E} [Y\mid X_{1},X_{2}]\mid X_{1}){\bigr ]}+\operatorname {Var} (\operatorname {E} [Y\mid X_{1}]).}More generally, thelaw of total cumulanceextends this approach to higher moments.
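The two-variable generalization can be verified exactly on a small finite probability space (a sketch; the outcome space and the function defining Y are arbitrary illustrative choices):

```python
# Exact check of the nested decomposition
#   Var(Y) = E[Var(Y|X1,X2)] + E[Var(E[Y|X1,X2]|X1)] + Var(E[Y|X1])
# on 8 equiprobable outcomes (x1, x2, y) with y = x1 + 2*x2 + eps.
outcomes = [(x1, x2, x1 + 2 * x2 + eps)
            for x1 in (0, 1) for x2 in (0, 1) for eps in (-1, 1)]
p = 1.0 / len(outcomes)

def mean(f):
    return sum(p * f(o) for o in outcomes)

def var(f):
    m = mean(f)
    return mean(lambda o: (f(o) - m) ** 2)

def cond_mean_y(pred):
    # E[Y | pred(outcome)] as a dict keyed by the conditioning value;
    # plain group averages are correct because outcomes are equiprobable.
    groups = {}
    for o in outcomes:
        groups.setdefault(pred(o), []).append(o)
    return {k: sum(o[2] for o in g) / len(g) for k, g in groups.items()}

m12 = cond_mean_y(lambda o: (o[0], o[1]))   # E[Y | X1, X2]
m1 = cond_mean_y(lambda o: o[0])            # E[Y | X1]

total = var(lambda o: o[2])
within = mean(lambda o: (o[2] - m12[(o[0], o[1])]) ** 2)         # E[Var(Y|X1,X2)]
between12 = mean(lambda o: (m12[(o[0], o[1])] - m1[o[0]]) ** 2)  # E[Var(E[Y|X1,X2]|X1)]
between1 = var(lambda o: m1[o[0]])                               # Var(E[Y|X1])

assert abs(total - (within + between12 + between1)) < 1e-12
print(total, within, between12, between1)  # 2.25 = 1.0 + 1.0 + 0.25
```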
https://en.wikipedia.org/wiki/Law_of_total_variance
Instatisticsaminimum-variance unbiased estimator (MVUE)oruniformly minimum-variance unbiased estimator (UMVUE)is anunbiased estimatorwhose variance is lower than or equal to that of any other unbiased estimator, for all possible values of the parameter. For practical statistics problems, it is important to determine the MVUE if one exists, since less-than-optimal procedures would naturally be avoided, other things being equal. This has led to substantial development of statistical theory related to the problem of optimal estimation. While combining the constraint ofunbiasednesswith the desirability metric of leastvarianceleads to good results in most practical settings—making MVUE a natural starting point for a broad range of analyses—a targeted specification may perform better for a given problem; thus, MVUE is not always the best stopping point. Consider estimation ofg(θ){\displaystyle g(\theta )}based on dataX1,X2,…,Xn{\displaystyle X_{1},X_{2},\ldots ,X_{n}}i.i.d. from some member of a family of densitiespθ,θ∈Ω{\displaystyle p_{\theta },\theta \in \Omega }, whereΩ{\displaystyle \Omega }is the parameter space. An unbiased estimatorδ(X1,X2,…,Xn){\displaystyle \delta (X_{1},X_{2},\ldots ,X_{n})}ofg(θ){\displaystyle g(\theta )}isUMVUEif, for allθ∈Ω{\displaystyle \theta \in \Omega },Varθ⁡(δ)≤Varθ⁡(δ~){\displaystyle \operatorname {Var} _{\theta }(\delta )\leq \operatorname {Var} _{\theta }({\tilde {\delta }})}for any other unbiased estimatorδ~.{\displaystyle {\tilde {\delta }}.} If an unbiased estimator ofg(θ){\displaystyle g(\theta )}exists, then one can prove there is an essentially unique MVUE.[1]Using theRao–Blackwell theoremone can also prove that determining the MVUE is simply a matter of finding acompletesufficientstatistic for the familypθ,θ∈Ω{\displaystyle p_{\theta },\theta \in \Omega }and conditioninganyunbiased estimator on it. Further, by theLehmann–Scheffé theorem, an unbiased estimator that is a function of a complete, sufficient statistic is the UMVUE.
Put formally, supposeδ(X1,X2,…,Xn){\displaystyle \delta (X_{1},X_{2},\ldots ,X_{n})}is unbiased forg(θ){\displaystyle g(\theta )}, and thatT{\displaystyle T}is a complete sufficient statistic for the family of densities. Thenη(T)=E⁡[δ(X1,X2,…,Xn)∣T]{\displaystyle \eta (T)=\operatorname {E} [\delta (X_{1},X_{2},\ldots ,X_{n})\mid T]}is the MVUE forg(θ).{\displaystyle g(\theta ).} ABayesiananalog is aBayes estimator, particularly withminimum mean square error(MMSE). Anefficient estimatorneed not exist, but if it does and if it is unbiased, it is the MVUE. Since themean squared error(MSE) of an estimatorδisMSE⁡(δ)=Var⁡(δ)+[bias⁡(δ)]2,{\displaystyle \operatorname {MSE} (\delta )=\operatorname {Var} (\delta )+[\operatorname {bias} (\delta )]^{2},}the MVUE minimizes MSEamong unbiased estimators. In some cases biased estimators have lower MSE because they have a smaller variance than does any unbiased estimator; seeestimator bias. Consider the data to be a single observation from anabsolutely continuous distributiononR{\displaystyle \mathbb {R} }with densitypθ(x)=θe−x(1+e−x)θ+1{\displaystyle p_{\theta }(x)={\frac {\theta e^{-x}}{(1+e^{-x})^{\theta +1}}}}whereθ > 0, and we wish to find the UMVU estimator ofg(θ)=1θ2.{\displaystyle g(\theta )={\frac {1}{\theta ^{2}}}.}First we recognize that the density can be written aspθ(x)=e−x1+e−xθexp⁡(−θlog⁡(1+e−x)),{\displaystyle p_{\theta }(x)={\frac {e^{-x}}{1+e^{-x}}}\,\theta \exp(-\theta \log(1+e^{-x})),}which is an exponential family withsufficient statisticT=log⁡(1+e−X){\displaystyle T=\log(1+e^{-X})}. In fact this is a full rank exponential family, and thereforeT{\displaystyle T}is complete sufficient. Seeexponential familyfor a derivation which showsE⁡(T)=1θ,Var⁡(T)=1θ2.{\displaystyle \operatorname {E} (T)={\frac {1}{\theta }},\quad \operatorname {Var} (T)={\frac {1}{\theta ^{2}}}.}Therefore,E⁡(T2)=Var⁡(T)+[E⁡(T)]2=2θ2.{\displaystyle \operatorname {E} (T^{2})=\operatorname {Var} (T)+[\operatorname {E} (T)]^{2}={\frac {2}{\theta ^{2}}}.}Here we use the Lehmann–Scheffé theorem to get the MVUE. Clearly,δ(X)=T2/2{\displaystyle \delta (X)=T^{2}/2}is unbiased andT=log⁡(1+e−X){\displaystyle T=\log(1+e^{-X})}is complete sufficient, thus the UMVU estimator isη(X)=T22=log2⁡(1+e−X)2.{\displaystyle \eta (X)={\frac {T^{2}}{2}}={\frac {\log ^{2}(1+e^{-X})}{2}}.}This example illustrates that an unbiased function of the complete sufficient statistic will be UMVU, as theLehmann–Scheffé theoremstates.
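The Rao–Blackwell/Lehmann–Scheffé machinery can also be illustrated with another standard textbook example, not taken from this article: estimating g(λ) = e^(−λ) = P(X = 0) from an i.i.d. Poisson(λ) sample. Conditioning the crude unbiased indicator 1{X₁ = 0} on the complete sufficient statistic T = ΣXᵢ gives the UMVUE ((n−1)/n)^T, and a short simulation shows the variance reduction (λ, n, and the replication count are illustrative choices):

```python
import math, random

# Compare the naive unbiased estimator 1{X1 = 0} with its Rao-Blackwellized
# version ((n - 1) / n) ** sum(X_i) for g(lam) = exp(-lam).
random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, n, reps = 2.0, 10, 20_000
naive, umvue = [], []
for _ in range(reps):
    xs = [poisson(lam) for _ in range(n)]
    naive.append(1.0 if xs[0] == 0 else 0.0)   # unbiased but crude
    umvue.append(((n - 1) / n) ** sum(xs))     # E[1{X1=0} | sum of X_i]

def avg(v):
    return sum(v) / len(v)

def var(v):
    m = avg(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# Both average to about exp(-2) ~ 0.135; the UMVUE's variance is far smaller.
print(avg(naive), avg(umvue))
print(var(naive), var(umvue))
```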
https://en.wikipedia.org/wiki/Minimum-variance_unbiased_estimator
Model selectionis the task of selecting amodelfrom among various candidates on the basis of performance criterion to choose the best one.[1]In the context ofmachine learningand more generallystatistical analysis, this may be the selection of astatistical modelfrom a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve thedesign of experimentssuch that thedata collectedis well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor). Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems instatistical inferencecan be considered to be problems related to statistical modeling". Relatedly,Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose ofdecision makingor optimization under uncertainty.[2] Inmachine learning, algorithmic approaches to model selection includefeature selection,hyperparameter optimization, andstatistical learning theory. In its most basic forms, model selection is one of the fundamental tasks ofscientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, whenGalileoperformed hisinclined planeexperiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model[citation needed]. Of the countless number of possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. 
Often simple models such aspolynomialsare used, at least initially[citation needed].Burnham & Anderson (2002)emphasize throughout their book the importance of choosing models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data. Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant bybestis controversial. A good model selection technique will balancegoodness of fitwith simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using alikelihood ratioapproach, or an approximation of this, leading to achi-squared test. The complexity is generally measured by counting the number ofparametersin the model. Model selection techniques can be considered asestimatorsof some physical quantity, such as the probability of the model producing the given data. Thebiasandvarianceare both important measures of the quality of this estimator;efficiencyis also often considered. A standard example of model selection is that ofcurve fitting, where, given a set of points and other background knowledge (e.g. points are a result ofi.i.d.samples), we must select a curve that describes the function that generated the points. There are two main objectives in inference and learning from data. One is for scientific discovery, also called statistical inference, understanding of the underlying data-generating mechanism and interpretation of the nature of the data. Another objective of learning from data is for predicting future or unseen observations, also called Statistical Prediction. 
For the second objective, the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions. In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction.[3]The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is especially important that the selected model is not too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the mostrobustcandidate will be consistently selected given sufficiently many data samples. The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading.[3]Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.[4] Below is a list of criteria for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); seeStoica & Selen (2004)for a review.
Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.[citation needed] Burnham & Anderson (2002, §6.3) say the following: There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeledefficientandconsistent. (...) Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods.
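Criterion-based selection can be sketched in a few lines (the data, a noisy linear trend, and the candidate polynomial degrees are arbitrary illustrative choices; AIC and BIC are computed from the Gaussian log-likelihood with the maximum-likelihood noise variance):

```python
import math, random

# Compare polynomial models of a noisy linear trend by AIC and BIC.
random.seed(0)
n = 60
xs = [i / 10 for i in range(n)]
ys = [1.0 + 2.0 * x + random.gauss(0, 1.0) for x in xs]

def fit_poly(xs, ys, deg):
    # Ordinary least squares via Gaussian elimination on the normal equations.
    k = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

results = {}
for deg in (1, 2, 5):
    c = fit_poly(xs, ys, deg)
    rss = sum((y - sum(cj * x ** j for j, cj in enumerate(c))) ** 2
              for x, y in zip(xs, ys))
    p = deg + 2                                # coefficients + noise variance
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    results[deg] = (2 * p - 2 * loglik, p * math.log(n) - 2 * loglik)  # (AIC, BIC)
    print(deg, results[deg])
```

Both criteria should favor the true degree-1 model here; BIC penalizes the extra parameters of the degree-5 fit more heavily, reflecting its consistency-oriented design.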
https://en.wikipedia.org/wiki/Model_selection
Instatistics,regression validationis the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained fromregression analysis, are acceptable as descriptions of the data. The validation process can involve analyzing thegoodness of fitof the regression, analyzing whether theregression residualsare random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation. One measure of goodness of fit is thecoefficient of determination, often denoted,R2. Inordinary least squareswith an intercept, it ranges between 0 and 1. However, anR2close to 1 does not guarantee that the model fits the data well. For example, if the functional form of the model does not match the data,R2can be high despite a poor model fit.Anscombe's quartetconsists of four example data sets with similarly highR2values, but data that sometimes clearly does not fit the regression line. Instead, the data sets includeoutliers,high-leverage points, or non-linearities. One problem with theR2as a measure of model validity is that it can always be increased by adding more variables into the model, except in the unlikely event that the additional variables are exactly uncorrelated with the dependent variable in the data sample being used. This problem can be avoided by doing anF-testof the statistical significance of the increase in theR2, or by instead using theadjustedR2. Theresidualsfrom a fitted model are the differences between the responses observed at each combination of values of theexplanatory variablesand the corresponding prediction of the response computed using the regression function. Mathematically, the definition of the residual for theithobservation in thedata setis written withyidenoting theithresponse in the data set andxithe vector of explanatory variables, each set at the corresponding values found in theithobservation in the data set. 
If the model fit to the data were correct, the residuals would approximate the random errors that make the relationship between the explanatory variables and the response variable a statistical relationship. Therefore, if the residuals appear to behave randomly, it suggests that the model fits the data well. On the other hand, if non-random structure is evident in the residuals, it is a clear sign that the model fits the data poorly. The next section details the types of plots to use to test different aspects of a model and gives the correct interpretations of different results that could be observed for each type of plot. A basic, though not quantitatively precise, way to check for problems that render a model inadequate is to conduct a visual examination of the residuals (the mispredictions of the data used in quantifying the model) to look for obvious deviations from randomness. If a visual examination suggests, for example, the possible presence ofheteroscedasticity(a relationship between the variance of the model errors and the size of an independent variable's observations), then statistical tests can be performed to confirm or reject this hunch; if it is confirmed, different modeling procedures are called for. Different types of plots of the residuals from a fitted model provide information on the adequacy of different aspects of the model. Graphical methods have an advantage over numerical methods for model validation because they readily illustrate a broad range of complex aspects of the relationship between the model and the data. Numerical methods also play an important role in model validation. For example, thelack-of-fit testfor assessing the correctness of the functional part of the model can aid in interpreting a borderline residual plot. One common situation when numerical validation methods take precedence over graphical methods is when the number ofparametersbeing estimated is relatively close to the size of the data set. 
In this situation residual plots are often difficult to interpret due to constraints on the residuals imposed by the estimation of the unknown parameters. One area in which this typically happens is in optimization applications usingdesigned experiments.Logistic regressionwithbinary datais another area in which graphical residual analysis can be difficult. Serial correlationof the residuals can indicate model misspecification, and can be checked for with theDurbin–Watson statistic. The problem ofheteroskedasticitycan be checked for in any ofseveral ways. Cross-validation is the process of assessing how the results of a statistical analysis will generalize to an independent data set. If the model has been estimated over some, but not all, of the available data, then the model using the estimated parameters can be used to predict the held-back data. If, for example, the out-of-samplemean squared error, also known as themean squared prediction error, is substantially higher than the in-sample mean square error, this is a sign of deficiency in the model. A development in medical statistics is the use of out-of-sample cross validation techniques in meta-analysis. It forms the basis of thevalidation statistic, Vn, which is used to test the statistical validity of meta-analysis summary estimates. Essentially it measures a type of normalized prediction error and its distribution is a linear combination ofχ2variables of degree 1.[1] This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
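The Durbin–Watson check mentioned above can be sketched on simulated residual series (illustrative data; values near 2 indicate no first-order serial correlation, values well below 2 indicate positive autocorrelation):

```python
import random

# Durbin-Watson statistic: d = sum((e_t - e_{t-1})^2) / sum(e_t^2).
def durbin_watson(res):
    num = sum((res[i] - res[i - 1]) ** 2 for i in range(1, len(res)))
    return num / sum(r * r for r in res)

random.seed(0)
white = [random.gauss(0, 1) for _ in range(2000)]    # uncorrelated residuals
ar1 = [random.gauss(0, 1)]
for _ in range(1999):
    ar1.append(0.8 * ar1[-1] + random.gauss(0, 1))   # positively autocorrelated

dw_white = durbin_watson(white)
dw_ar1 = durbin_watson(ar1)
print(dw_white)  # near 2: no evidence of serial correlation
print(dw_ar1)    # well below 2 (roughly 2*(1 - 0.8)): misspecification signal
```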
https://en.wikipedia.org/wiki/Regression_model_validation
Inestimation theoryandstatistics, theCramér–Rao bound(CRB) relates toestimationof a deterministic (fixed, though unknown) parameter. The result is named in honor ofHarald CramérandCalyampudi Radhakrishna Rao,[1][2][3]but has also been derived independently byMaurice Fréchet,[4]Georges Darmois,[5]and byAlexander AitkenandHarold Silverstone.[6][7]It is also known as Fréchet-Cramér–Rao or Fréchet-Darmois-Cramér-Rao lower bound. It states that theprecisionof anyunbiased estimatoris at most theFisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on itsvariance. An unbiased estimator that achieves this bound is said to be (fully)efficient. Such a solution achieves the lowest possiblemean squared erroramong all unbiased methods, and is, therefore, theminimum variance unbiased(MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information. The Cramér–Rao bound can also be used to bound the variance ofbiasedestimators of given bias. In some cases, a biased approach can result in both a variance and amean squared errorthat arebelowthe unbiased Cramér–Rao lower bound; seeestimator bias. Significant progress over the Cramér–Rao lower bound was proposed byAnil Kumar Bhattacharyyathrough a series of works, calledBhattacharyya bound.[8][9][10][11] The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is ascalarand its estimator isunbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listedlater in this section. 
Supposeθ{\displaystyle \theta }is an unknown deterministic parameter that is to be estimated fromn{\displaystyle n}independent observations (measurements) ofx{\displaystyle x}, each from a distribution according to someprobability density functionf(x;θ){\displaystyle f(x;\theta )}. Thevarianceof anyunbiasedestimatorθ^{\displaystyle {\hat {\theta }}}ofθ{\displaystyle \theta }is then bounded[12]by thereciprocalof theFisher informationI(θ){\displaystyle I(\theta )}: where the Fisher informationI(θ){\displaystyle I(\theta )}is defined by andℓ(x;θ)=log⁡(f(x;θ)){\displaystyle \ell (x;\theta )=\log(f(x;\theta ))}is thenatural logarithmof thelikelihood functionfor a single samplex{\displaystyle x}andEx;θ{\displaystyle \operatorname {E} _{x;\theta }}denotes theexpected valuewith respect to the densityf(x;θ){\displaystyle f(x;\theta )}ofX{\displaystyle X}. If not indicated, in what follows, the expectation is taken with respect toX{\displaystyle X}. Ifℓ(x;θ){\displaystyle \ell (x;\theta )}is twice differentiable and certain regularity conditions hold, then the Fisher information can also be defined as follows:[13] Theefficiencyof an unbiased estimatorθ^{\displaystyle {\hat {\theta }}}measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao lower bound thus gives A more general form of the bound can be obtained by considering a biased estimatorT(X){\displaystyle T(X)}, whose expectation is notθ{\displaystyle \theta }but a function of this parameter, say,ψ(θ){\displaystyle \psi (\theta )}. HenceE{T(X)}−θ=ψ(θ)−θ{\displaystyle E\{T(X)\}-\theta =\psi (\theta )-\theta }is not generally equal to 0. In this case, the bound is given by whereψ′(θ){\displaystyle \psi '(\theta )}is the derivative ofψ(θ){\displaystyle \psi (\theta )}(byθ{\displaystyle \theta }), andI(θ){\displaystyle I(\theta )}is the Fisher information defined above. 
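As a concrete check of the scalar bound (a sketch with illustrative values): for n i.i.d. N(θ, σ²) observations with σ known, the Fisher information is n/σ², so the bound on unbiased estimators is σ²/n, which the sample mean attains:

```python
import random

# Monte Carlo check that Var(sample mean) matches the CRB sigma^2 / n.
random.seed(0)
theta, sigma, n, reps = 1.5, 2.0, 25, 40_000

means = []
for _ in range(reps):
    xs = [random.gauss(theta, sigma) for _ in range(n)]
    means.append(sum(xs) / n)

mbar = sum(means) / reps
mc_var = sum((m - mbar) ** 2 for m in means) / reps
crb = sigma ** 2 / n            # = 1 / (n * I(theta)) with I(theta) = 1 / sigma^2
print(mc_var, crb)              # the two agree up to Monte Carlo error
```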
Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows.[14]Consider an estimatorθ^{\displaystyle {\hat {\theta }}}with biasb(θ)=E{θ^}−θ{\displaystyle b(\theta )=E\{{\hat {\theta }}\}-\theta }, and letψ(θ)=b(θ)+θ{\displaystyle \psi (\theta )=b(\theta )+\theta }. By the result above, any unbiased estimator whose expectation isψ(θ){\displaystyle \psi (\theta )}has variance greater than or equal to(ψ′(θ))2/I(θ){\displaystyle (\psi '(\theta ))^{2}/I(\theta )}. Thus, any estimatorθ^{\displaystyle {\hat {\theta }}}whose bias is given by a functionb(θ){\displaystyle b(\theta )}satisfies[15] The unbiased version of the bound is a special case of this result, withb(θ)=0{\displaystyle b(\theta )=0}. It's trivial to have a small variance − an "estimator" that is constant has a variance of zero. But from the above equation, we find that themean squared errorof a biased estimator is bounded by using the standard decomposition of the MSE. Note, however, that if1+b′(θ)<1{\displaystyle 1+b'(\theta )<1}this bound might be less than the unbiased Cramér–Rao bound1/I(θ){\displaystyle 1/I(\theta )}. For instance, in theexample of estimating variance below,1+b′(θ)=nn+2<1{\displaystyle 1+b'(\theta )={\frac {n}{n+2}}<1}. Extending the Cramér–Rao bound to multiple parameters, define a parameter columnvector with probability density functionf(x;θ){\displaystyle f(x;{\boldsymbol {\theta }})}which satisfies the tworegularity conditionsbelow. 
TheFisher information matrixis ad×d{\displaystyle d\times d}matrix with elementIm,k{\displaystyle I_{m,k}}defined as LetT(X){\displaystyle {\boldsymbol {T}}(X)}be an estimator of any vector function of parameters,T(X)=(T1(X),…,Td(X))T{\displaystyle {\boldsymbol {T}}(X)=(T_{1}(X),\ldots ,T_{d}(X))^{T}}, and denote its expectation vectorE⁡[T(X)]{\displaystyle \operatorname {E} [{\boldsymbol {T}}(X)]}byψ(θ){\displaystyle {\boldsymbol {\psi }}({\boldsymbol {\theta }})}. The Cramér–Rao bound then states that thecovariance matrixofT(X){\displaystyle {\boldsymbol {T}}(X)}satisfies where IfT(X){\displaystyle {\boldsymbol {T}}(X)}is anunbiasedestimator ofθ{\displaystyle {\boldsymbol {\theta }}}(i.e.,ψ(θ)=θ{\displaystyle {\boldsymbol {\psi }}\left({\boldsymbol {\theta }}\right)={\boldsymbol {\theta }}}), then the Cramér–Rao bound reduces to If it is inconvenient to compute the inverse of theFisher information matrix, then one can simply take the reciprocal of the corresponding diagonal element to find a (possibly loose) lower bound.[16] The bound relies on two weak regularity conditions on theprobability density function,f(x;θ){\displaystyle f(x;\theta )}, and the estimatorT(X){\displaystyle T(X)}: Proof based on.[17] First equation: Letδ{\displaystyle \delta }be an infinitesimal, then for anyv∈Rn{\displaystyle v\in \mathbb {R} ^{n}}, pluggingθ′=θ+δv{\displaystyle \theta '=\theta +\delta v}in, we have(Eθ′[T]−Eθ[T])=vTϕ(θ)δ;χ2(μθ′;μθ)=vTI(θ)vδ2{\displaystyle (E_{\theta '}[T]-E_{\theta }[T])=v^{T}\phi (\theta )\delta ;\quad \chi ^{2}(\mu _{\theta '};\mu _{\theta })=v^{T}I(\theta )v\delta ^{2}} Plugging this into multivariateChapman–Robbins boundgivesI(θ)≥ϕ(θ)Covθ⁡[T]−1ϕ(θ)T{\displaystyle I(\theta )\geq \phi (\theta )\operatorname {Cov} _{\theta }[T]^{-1}\phi (\theta )^{T}}. Second equation: It suffices to prove this for scalar case, withh(X){\displaystyle h(X)}taking values inR{\displaystyle \mathbb {R} }. 
Because for generalT(X){\displaystyle T(X)}, we can take anyv∈Rm{\displaystyle v\in \mathbb {R} ^{m}}, then definingh:=∑jvjTj{\textstyle h:=\sum _{j}v_{j}T_{j}}, the scalar case givesVarθ⁡[h]=vTCovθ⁡[T]v≥vTϕ(θ)I(θ)−1ϕ(θ)Tv{\displaystyle \operatorname {Var} _{\theta }[h]=v^{T}\operatorname {Cov} _{\theta }[T]v\geq v^{T}\phi (\theta )I(\theta )^{-1}\phi (\theta )^{T}v}This holds for allv∈Rm{\displaystyle v\in \mathbb {R} ^{m}}, so we can concludeCovθ⁡[T]≥ϕ(θ)I(θ)−1ϕ(θ)T{\displaystyle \operatorname {Cov} _{\theta }[T]\geq \phi (\theta )I(\theta )^{-1}\phi (\theta )^{T}}The scalar case states thatVarθ⁡[h]≥ϕ(θ)TI(θ)−1ϕ(θ){\displaystyle \operatorname {Var} _{\theta }[h]\geq \phi (\theta )^{T}I(\theta )^{-1}\phi (\theta )}withϕ(θ):=∇θEθ[h]{\displaystyle \phi (\theta ):=\nabla _{\theta }E_{\theta }[h]}. Letδ{\displaystyle \delta }be an infinitesimal, then for anyv∈Rn{\displaystyle v\in \mathbb {R} ^{n}}, takingθ′=θ+δv{\displaystyle \theta '=\theta +\delta v}in the single-variateChapman–Robbins boundgivesVarθ⁡[h]≥⟨v,ϕ(θ)⟩2vTI(θ)v{\displaystyle \operatorname {Var} _{\theta }[h]\geq {\frac {\langle v,\phi (\theta )\rangle ^{2}}{v^{T}I(\theta )v}}}. By linear algebra,supv≠0⟨w,v⟩2vTMv=wTM−1w{\displaystyle \sup _{v\neq 0}{\frac {\langle w,v\rangle ^{2}}{v^{T}Mv}}=w^{T}M^{-1}w}for any positive-definite matrixM{\displaystyle M}, thus we obtainVarθ⁡[h]≥ϕ(θ)TI(θ)−1ϕ(θ).{\displaystyle \operatorname {Var} _{\theta }[h]\geq \phi (\theta )^{T}I(\theta )^{-1}\phi (\theta ).} For thegeneral scalar case: Assume thatT=t(X){\displaystyle T=t(X)}is an estimator with expectationψ(θ){\displaystyle \psi (\theta )}(based on the observationsX{\displaystyle X}), i.e. thatE⁡(T)=ψ(θ){\displaystyle \operatorname {E} (T)=\psi (\theta )}. The goal is to prove that, for allθ{\displaystyle \theta }, LetX{\displaystyle X}be arandom variablewith probability density functionf(x;θ){\displaystyle f(x;\theta )}. 
HereT=t(X){\displaystyle T=t(X)}is astatistic, which is used as anestimatorforψ(θ){\displaystyle \psi (\theta )}. DefineV{\displaystyle V}as thescore: where thechain ruleis used in the final equality above. Then theexpectationofV{\displaystyle V}, writtenE⁡(V){\displaystyle \operatorname {E} (V)}, is zero. This is because: where the integral and partial derivative have been interchanged (justified by the second regularity condition). If we consider thecovariancecov⁡(V,T){\displaystyle \operatorname {cov} (V,T)}ofV{\displaystyle V}andT{\displaystyle T}, we havecov⁡(V,T)=E⁡(VT){\displaystyle \operatorname {cov} (V,T)=\operatorname {E} (VT)}, becauseE⁡(V)=0{\displaystyle \operatorname {E} (V)=0}. Expanding this expression we have again because the integration and differentiation operations commute (second condition). TheCauchy–Schwarz inequalityshows that therefore which proves the proposition. For the case of ad-variate normal distribution theFisher information matrixhas elements[18] where "tr" is thetrace. For example, letw[j]{\displaystyle w[j]}be a sample ofn{\displaystyle n}independent observations with unknown meanθ{\displaystyle \theta }and known varianceσ2{\displaystyle \sigma ^{2}}. Then the Fisher information is a scalar given by and so the Cramér–Rao bound is SupposeXis anormally distributedrandom variable with known meanμ{\displaystyle \mu }and unknown varianceσ2{\displaystyle \sigma ^{2}}. Consider the following statistic: ThenTis unbiased forσ2{\displaystyle \sigma ^{2}}, asE(T)=σ2{\displaystyle E(T)=\sigma ^{2}}. What is the variance ofT? (the second equality follows directly from the definition of variance). The first term is the fourthmoment about the meanand has value3(σ2)2{\displaystyle 3(\sigma ^{2})^{2}}; the second is the square of the variance, or(σ2)2{\displaystyle (\sigma ^{2})^{2}}. Thus Now, what is theFisher informationin the sample? Recall that thescoreV{\displaystyle V}is defined as whereL{\displaystyle L}is thelikelihood function. 
Thus in this case, where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative ofV{\displaystyle V}, or Thus the information in a sample ofn{\displaystyle n}independent observations is justn{\displaystyle n}times this, orn2(σ2)2.{\displaystyle {\frac {n}{2(\sigma ^{2})^{2}}}.} The Cramér–Rao bound states that In this case, the inequality is saturated (equality is achieved), showing that theestimatorisefficient. However, we can achieve a lowermean squared errorusing a biased estimator. The estimator obviously has a smaller variance, which is in fact Its bias is so its mean squared error is which is less than what unbiased estimators can achieve according to the Cramér–Rao bound. When the mean is not known, the minimum mean squared error estimate of the variance of a sample from Gaussian distribution is achieved by dividing byn+1{\displaystyle n+1}, rather thann−1{\displaystyle n-1}orn+2{\displaystyle n+2}.
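The known-mean variance example can be checked by simulation (illustrative values): the unbiased estimator Σ(xᵢ − μ)²/n attains the bound 2σ⁴/n, while dividing by n + 2 instead is biased but achieves the smaller mean squared error 2σ⁴/(n + 2):

```python
import random

# Monte Carlo comparison of the unbiased (divide by n) and biased
# (divide by n + 2) variance estimators when the mean mu is known.
random.seed(0)
mu, sigma2, n, reps = 0.0, 1.0, 10, 50_000

t_unbiased, t_biased = [], []
for _ in range(reps):
    ss = sum((random.gauss(mu, sigma2 ** 0.5) - mu) ** 2 for _ in range(n))
    t_unbiased.append(ss / n)
    t_biased.append(ss / (n + 2))

def mse(est):
    return sum((e - sigma2) ** 2 for e in est) / reps

crb = 2 * sigma2 ** 2 / n          # = 0.2 for these values
print(mse(t_unbiased), crb)        # unbiased estimator: MSE ~ CRB
print(mse(t_biased))               # ~ 2*sigma^4/(n + 2) ~ 0.167, below the CRB
```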
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound
Instatistical inference, specificallypredictive inference, aprediction intervalis an estimate of anintervalin which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used inregression analysis. A simple example is given by a six-sided die with face values ranging from 1 to 6. The confidence interval for the estimated expected value of the face value will be around 3.5 and will become narrower with a larger sample size. However, the prediction interval for the next roll will approximately range from 1 to 6, even with any number of samples seen so far. Prediction intervals are used in bothfrequentist statisticsandBayesian statistics: a prediction interval bears the same relationship to a future observation that a frequentistconfidence intervalor Bayesiancredible intervalbears to an unobservable population parameter: prediction intervals predict the distribution of individual future points, whereas confidence intervals and credible intervals of parameters predict the distribution of estimates of the true population mean or other quantity of interest that cannot be observed. If one makes theparametric assumptionthat the underlying distribution is anormal distribution, and has a sample set {X1, ...,Xn}, then confidence intervals and credible intervals may be used to estimate thepopulation meanμandpopulation standard deviationσof the underlying population, while prediction intervals may be used to estimate the value of the next sample variable,Xn+1. Alternatively, inBayesian terms, a prediction interval can be described as a credible interval for the variable itself, rather than for a parameter of the distribution thereof. The concept of prediction intervals need not be restricted to inference about a single future sample value but can be extended to more complicated cases. 
For example, in the context of river flooding where analyses are often based on annual values of the largest flow within the year, there may be interest in making inferences about the largest flood likely to be experienced within the next 50 years. Since prediction intervals are only concerned with past and future observations, rather than unobservable population parameters, they are advocated as a better method than confidence intervals by some statisticians, such asSeymour Geisser,[citation needed]following the focus on observables byBruno de Finetti.[citation needed] Given a sample from anormal distribution, whose parameters are unknown, it is possible to give prediction intervals in the frequentist sense, i.e., an interval [a,b] based on statistics of the sample such that on repeated experiments,Xn+1falls in the interval the desired percentage of the time; one may call these "predictiveconfidence intervals".[1] A general technique of frequentist prediction intervals is to find and compute apivotal quantityof the observablesX1, ...,Xn,Xn+1– meaning a function of observables and parameters whose probability distribution does not depend on the parameters – that can be inverted to give a probability of the future observationXn+1falling in some interval computed in terms of the observed values so far,X1,…,Xn.{\displaystyle X_{1},\dots ,X_{n}.}Such a pivotal quantity, depending only on observables, is called anancillary statistic.[2]The usual method of constructing pivotal quantities is to take the difference of two variables that depend on location, so that location cancels out, and then take the ratio of two variables that depend on scale, so that scale cancels out. The most familiar pivotal quantity is theStudent's t-statistic, which can be derived by this method and is used in the sequel. 
A prediction interval [ℓ,u] for a future observationXin a normal distributionN(μ,σ2) with knownmeanandvariancemay be calculated from whereZ=X−μσ{\displaystyle Z={\frac {X-\mu }{\sigma }}}, thestandard scoreofX, is distributed as standard normal. Hence or withzthequantilein the standard normal distribution for which: or equivalently; The prediction interval is conventionally written as: For example, to calculate the 95% prediction interval for a normal distribution with a mean (μ) of 5 and a standard deviation (σ) of 1, thenzis approximately 2. Therefore, the lower limit of the prediction interval is approximately 5 ‒ (2⋅1) = 3, and the upper limit is approximately 5 + (2⋅1) = 7, thus giving a prediction interval of approximately 3 to 7. For a distribution with unknown parameters, a direct approach to prediction is to estimate the parameters and then use the associated quantile function – for example, one could use the sample meanX¯{\displaystyle {\overline {X}}}as estimate forμand thesample variances2as an estimate forσ2. There are two natural choices fors2here – dividing by(n−1){\displaystyle (n-1)}yields an unbiased estimate, while dividing bynyields themaximum likelihood estimator, and either might be used. One then uses the quantile function with these estimated parametersΦX¯,s2−1{\displaystyle \Phi _{{\overline {X}},s^{2}}^{-1}}to give a prediction interval. This approach is usable, but the resulting interval will not have the repeated sampling interpretation[4]– it is not a predictive confidence interval. 
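As a minimal sketch of the known-parameter case just described (the function name is our own, and 1.96 is the standard-normal quantile for a 95% interval):

```python
def prediction_interval_known(mu, sigma, z=1.96):
    """Prediction interval [mu - z*sigma, mu + z*sigma] for one future draw
    from N(mu, sigma^2) with both parameters known; z = 1.96 gives ~95% coverage."""
    return mu - z * sigma, mu + z * sigma

lo, hi = prediction_interval_known(5.0, 1.0)
# With z rounded to 2, this reproduces the article's interval of roughly 3 to 7.
```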
For the sequel, use the sample mean: and the (unbiased) sample variance: Given[5]a normal distribution with unknown meanμbut known varianceσ2{\displaystyle \sigma ^{2}}, the sample meanX¯{\displaystyle {\overline {X}}}of the observationsX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}has distributionN(μ,σ2/n),{\displaystyle N(\mu ,\sigma ^{2}/n),}while the future observationXn+1{\displaystyle X_{n+1}}has distributionN(μ,σ2).{\displaystyle N(\mu ,\sigma ^{2}).}Taking the difference of these cancels theμand yields a normal distribution of varianceσ2+(σ2/n),{\displaystyle \sigma ^{2}+(\sigma ^{2}/n),}thus Solving forXn+1{\displaystyle X_{n+1}}gives the prediction distributionN(X¯,σ2+(σ2/n)),{\displaystyle N({\overline {X}},\sigma ^{2}+(\sigma ^{2}/n)),}from which one can compute intervals as before. This is a predictive confidence interval in the sense that if one uses a quantile range of 100p%, then on repeated applications of this computation, the future observationXn+1{\displaystyle X_{n+1}}will fall in the predicted interval 100p% of the time. Notice that this prediction distribution is more conservative than using the estimated meanX¯{\displaystyle {\overline {X}}}and known varianceσ2{\displaystyle \sigma ^{2}}, as this uses compound varianceσ2+(σ2/n){\displaystyle \sigma ^{2}+(\sigma ^{2}/n)}, hence yields slightly wider intervals. This is necessary for the desired confidence interval property to hold. 
Conversely, given a normal distribution with known meanμbut unknown varianceσ2{\displaystyle \sigma ^{2}}, the sample variances2{\displaystyle s^{2}}of the observationsX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}has, up to scale, aχn−12{\displaystyle \chi _{n-1}^{2}}distribution; more precisely: On the other hand, the future observationXn+1{\displaystyle X_{n+1}}has distributionN(μ,σ2).{\displaystyle N(\mu ,\sigma ^{2}).}Taking the ratio of the future observation residualXn+1−μ{\displaystyle X_{n+1}-\mu }and the sample standard deviationscancels theσ,yielding aStudent's t-distributionwithn– 1degrees of freedom(see itsderivation): Solving forXn+1{\displaystyle X_{n+1}}gives the prediction distributionμ±sTn−1,{\displaystyle \mu \pm sT_{n-1},}from which one can compute intervals as before. Notice that this prediction distribution is more conservative than using a normal distribution with the estimated standard deviations{\displaystyle s}and known meanμ, as it uses the t-distribution instead of the normal distribution, hence yields wider intervals. This is necessary for the desired confidence interval property to hold. Combining the above for a normal distributionN(μ,σ2){\displaystyle N(\mu ,\sigma ^{2})}with bothμandσ2unknown yields the following ancillary statistic:[6] This simple combination is possible because the sample mean and sample variance of the normal distribution are independent statistics; this is only true for the normal distribution, and in fact characterizes the normal distribution. Solving forXn+1{\displaystyle X_{n+1}}yields the prediction distribution The probability ofXn+1{\displaystyle X_{n+1}}falling in a given interval is then: whereTn-1,ais the 100((1 −p)/2)thpercentileofStudent's t-distributionwithn− 1 degrees of freedom. Therefore, the numbers are the endpoints of a 100(1 −p)% prediction interval forXn+1{\displaystyle X_{n+1}}. One can compute prediction intervals without any assumptions on the population, i.e. in anon-parametricway. 
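The t-based interval for both parameters unknown can be sketched as follows; the sample data are made up, and the tabulated quantile t₉,₀.₉₇₅ ≈ 2.262 (Student's t with 9 degrees of freedom) is supplied by hand since the standard library has no t quantile function:

```python
import statistics

def prediction_interval_t(sample, t_quantile):
    """Xbar +/- t * s * sqrt(1 + 1/n): prediction interval for the next
    observation from a normal population with both parameters unknown.
    t_quantile must be the appropriate Student-t quantile for n - 1 d.f."""
    n = len(sample)
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)  # unbiased: divides by n - 1
    half = t_quantile * s * (1 + 1 / n) ** 0.5
    return xbar - half, xbar + half

sample = [4.1, 5.2, 4.8, 5.5, 4.9, 5.0, 5.3, 4.7, 5.1, 4.4]
lo, hi = prediction_interval_t(sample, 2.262)  # t_{9, 0.975} from tables
```

Note the √(1 + 1/n) factor: the interval accounts both for the spread of a single future draw and for the uncertainty in the estimated mean.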
The residualbootstrapmethod can be used for constructing non-parametric prediction intervals. In general the conformal prediction method is more general. Let us look at the special case of using the minimum and maximum as boundaries for a prediction interval: If one has a sample of identical random variables {X1, ...,Xn}, then the probability that the next observationXn+1will be the largest is 1/(n+ 1), since all observations have equal probability of being the maximum. In the same way, the probability thatXn+1will be the smallest is 1/(n+ 1). The other (n− 1)/(n+ 1) of the time,Xn+1falls between thesample maximumandsample minimumof the sample {X1, ...,Xn}. Thus, denoting the sample maximum and minimum byMandm,this yields an (n− 1)/(n+ 1) prediction interval of [m,M].[citation needed] Notice that while this gives the probability that a future observation will fall in a range, it does not give any estimate as to where in a segment it will fall – notably, if it falls outside the range of observed values, it may be far outside the range. Seeextreme value theoryfor further discussion. Formally, this applies not just to sampling from a population, but to anyexchangeable sequenceof random variables, not necessarily independent oridentically distributed. In the formula for the predictive confidence intervalno mentionis made of the unobservable parametersμandσof population mean and standard deviation – the observedsamplestatisticsX¯n{\displaystyle {\overline {X}}_{n}}andSn{\displaystyle S_{n}}of sample mean and standard deviation are used, and what is estimated is the outcome offuturesamples. When considering prediction intervals, rather than using sample statistics as estimators of population parameters and applying confidence intervals to these estimates, one considers "the next sample"Xn+1{\displaystyle X_{n+1}}asitselfa statistic, and computes itssampling distribution. 
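The (n − 1)/(n + 1) coverage of the minimum–maximum interval described above can be verified by simulation; the sample size, trial count, and seed below are arbitrary choices:

```python
import random

def minmax_coverage(n=9, trials=20000, seed=1):
    """Empirical frequency with which [sample min, sample max] of n i.i.d.
    draws contains the next draw; theory predicts (n - 1)/(n + 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        nxt = rng.random()
        if min(xs) <= nxt <= max(xs):
            hits += 1
    return hits / trials

coverage = minmax_coverage()  # theory: (9 - 1)/(9 + 1) = 0.8
```

Because the argument relies only on exchangeability, the same coverage holds for any continuous distribution, which is why the uniform draws above suffice.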
In parameter confidence intervals, one estimates population parameters; if one wishes to interpret this as prediction of the next sample, one models "the next sample" as a draw from this estimated population, using the (estimated) population distribution. By contrast, in predictive confidence intervals, one uses the sampling distribution of (a statistic of) a sample of n or n + 1 observations from such a population, and the population distribution is not directly used, though the assumption about its form (though not the values of its parameters) is used in computing the sampling distribution. A common application of prediction intervals is to regression analysis. Suppose the data is being modeled by a straight line (simple linear regression): y_i = α + βx_i + ε_i, where y_i is the response variable, x_i is the explanatory variable, ε_i is a random error term, and α and β are parameters. Given estimates α̂ and β̂ for the parameters, such as from an ordinary least squares fit, the predicted response value ŷ_d for a given explanatory value x_d is ŷ_d = α̂ + β̂x_d (the point on the regression line), while the actual response would be y_d = α + βx_d + ε_d. The point estimate ŷ_d is called the mean response, and is an estimate of the expected value of y_d, E(y ∣ x_d). A prediction interval instead gives an interval in which one expects y_d to fall; this is not necessary if the actual parameters α and β are known (together with the error term ε_i), but if one is estimating from a sample, then one may use the standard error of the estimates for the intercept and slope (α̂ and β̂), as well as their correlation, to compute a prediction interval. In regression, Faraway (2002, p. 39) makes a distinction between intervals for predictions of the mean response vs. 
for predictions of observed response—affecting essentially the inclusion or not of the unity term within the square root in the expansion factors above; for details, see Faraway (2002). Seymour Geisser, a proponent of predictive inference, gives predictive applications of Bayesian statistics.[7] In Bayesian statistics, one can compute (Bayesian) prediction intervals from the posterior probability of the random variable, as a credible interval. In theoretical work, credible intervals are not often calculated for the prediction of future events, but for inference of parameters – i.e., credible intervals of a parameter, not for the outcomes of the variable itself. However, particularly where applications are concerned with possible extreme values of yet to be observed cases, credible intervals for such values can be of practical importance. Prediction intervals are commonly used as definitions of reference ranges, such as reference ranges for blood tests to give an idea of whether a blood test is normal or not. For this purpose, the most commonly used prediction interval is the 95% prediction interval, and a reference range based on it can be called a standard reference range.
https://en.wikipedia.org/wiki/Prediction_interval
In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a given strategy and the utility of the best possible strategy in hindsight—i.e., the strategy that would have maximized expected payoff if the true underlying model or distribution were known. This notion of regret measures how much is lost, on average, due to uncertainty or imperfect information. The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. The term has been used to compare a random buy-and-hold strategy to professional traders' records. The same concept has received numerous different names, as the New York Times notes: "In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks.[1] Other, later papers had titles like 'On Pseudo Games',[2] 'How to Play an Unknown Game',[3][citation needed] 'Universal Coding'[4] and 'Universal Portfolios'".[5][6]
https://en.wikipedia.org/wiki/Bayesian_regret
Inmachine learningandmathematical optimization,loss functions for classificationare computationally feasibleloss functionsrepresenting the price paid for inaccuracy of predictions inclassification problems(problems of identifying which category a particular observation belongs to).[1]GivenX{\displaystyle {\mathcal {X}}}as the space of all possible inputs (usuallyX⊂Rd{\displaystyle {\mathcal {X}}\subset \mathbb {R} ^{d}}), andY={−1,1}{\displaystyle {\mathcal {Y}}=\{-1,1\}}as the set of labels (possible outputs), a typical goal of classification algorithms is to find a functionf:X→Y{\displaystyle f:{\mathcal {X}}\to {\mathcal {Y}}}which best predicts a labely{\displaystyle y}for a given inputx→{\displaystyle {\vec {x}}}.[2]However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the samex→{\displaystyle {\vec {x}}}to generate differenty{\displaystyle y}.[3]As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as whereV(f(x→),y){\displaystyle V(f({\vec {x}}),y)}is a given loss function, andp(x→,y){\displaystyle p({\vec {x}},y)}is theprobability density functionof the process that generated the data, which can equivalently be written as Within classification, several commonly usedloss functionsare written solely in terms of the product of the true labely{\displaystyle y}and the predicted labelf(x→){\displaystyle f({\vec {x}})}. Therefore, they can be defined as functions of only one variableυ=yf(x→){\displaystyle \upsilon =yf({\vec {x}})}, so thatV(f(x→),y)=ϕ(yf(x→))=ϕ(υ){\displaystyle V(f({\vec {x}}),y)=\phi (yf({\vec {x}}))=\phi (\upsilon )}with a suitably chosen functionϕ:R→R{\displaystyle \phi :\mathbb {R} \to \mathbb {R} }. These are calledmargin-based loss functions. Choosing a margin-based loss function amounts to choosingϕ{\displaystyle \phi }. 
Selection of a loss function within this framework impacts the optimalfϕ∗{\displaystyle f_{\phi }^{*}}which minimizes the expected risk, seeempirical risk minimization. In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically, The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values fory{\displaystyle y}, and the fourth becausep(−1∣x)=1−p(1∣x){\displaystyle p(-1\mid x)=1-p(1\mid x)}. The term within brackets[ϕ(f(x→))p(1∣x→)+ϕ(−f(x→))(1−p(1∣x→))]{\displaystyle [\phi (f({\vec {x}}))p(1\mid {\vec {x}})+\phi (-f({\vec {x}}))(1-p(1\mid {\vec {x}}))]}is known as theconditional risk. One can solve for the minimizer ofI[f]{\displaystyle I[f]}by taking the functional derivative of the last equality with respect tof{\displaystyle f}and setting the derivative equal to 0. This will result in the following equation whereη=p(y=1|x→){\displaystyle \eta =p(y=1|{\vec {x}})}, which is also equivalent to setting the derivative of the conditional risk equal to zero. Given the binary nature of classification, a natural selection for a loss function (assuming equal cost forfalse positives and false negatives) would be the0-1 loss function(0–1indicator function), which takes the value of 0 if the predicted classification equals that of the true class or a 1 if the predicted classification does not match the true class. This selection is modeled by whereH{\displaystyle H}indicates theHeaviside step function. However, this loss function is non-convex and non-smooth, and solving for the optimal solution is anNP-hardcombinatorial optimization problem.[4]As a result, it is better to substituteloss function surrogateswhich are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. 
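The surrogate idea can be made concrete: each of the common convex surrogates below upper-bounds the 0–1 loss as a function of the margin υ = y·f(x). (The 1/log 2 scaling of the logistic loss, which makes the bound tight at υ = 0, is a standard normalization, not something fixed by the article.)

```python
import math

# Margin-based losses phi(v), where v = y * f(x): the 0-1 loss and three
# convex surrogates that upper-bound it pointwise.
def zero_one(v):    return 0.0 if v > 0 else 1.0
def hinge(v):       return max(0.0, 1.0 - v)
def logistic(v):    return math.log(1.0 + math.exp(-v)) / math.log(2.0)
def exponential(v): return math.exp(-v)

margins = [-2.0, -0.5, 0.0, 0.5, 2.0]
bounds_hold = all(
    zero_one(v) <= min(hinge(v), logistic(v), exponential(v)) for v in margins
)
```

Because each surrogate dominates the 0–1 loss, minimizing a surrogate risk also controls the misclassification risk, which is what makes the substitution sound as well as tractable.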
In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem.[5]Some of these surrogates are described below. In practice, the probability distributionp(x→,y){\displaystyle p({\vec {x}},y)}is unknown. Consequently, utilizing a training set ofn{\displaystyle n}independently and identically distributedsample points drawn from the datasample space, one seeks tominimize empirical risk as a proxy for expected risk.[3](Seestatistical learning theoryfor a more detailed description.) UtilizingBayes' theorem, it can be shown that the optimalf0/1∗{\displaystyle f_{0/1}^{*}}, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of A loss function is said to beclassification-calibrated or Bayes consistentif its optimalfϕ∗{\displaystyle f_{\phi }^{*}}is such thatf0/1∗(x→)=sgn⁡(fϕ∗(x→)){\displaystyle f_{0/1}^{*}({\vec {x}})=\operatorname {sgn} (f_{\phi }^{*}({\vec {x}}))}and is thus optimal under the Bayes decision rule. A Bayes consistent loss function allows us to find the Bayes optimal decision functionfϕ∗{\displaystyle f_{\phi }^{*}}by directly minimizing the expected risk and without having to explicitly model the probability density functions. For convex margin lossϕ(υ){\displaystyle \phi (\upsilon )}, it can be shown thatϕ(υ){\displaystyle \phi (\upsilon )}is Bayes consistent if and only if it is differentiable at 0 andϕ′(0)<0{\displaystyle \phi '(0)<0}.[6][1]Yet, this result does not exclude the existence of non-convex Bayes consistent loss functions. 
A more general result states that Bayes consistent loss functions can be generated using the following formulation[7] wheref(η),(0≤η≤1){\displaystyle f(\eta ),(0\leq \eta \leq 1)}is any invertible function such thatf−1(−v)=1−f−1(v){\displaystyle f^{-1}(-v)=1-f^{-1}(v)}andC(η){\displaystyle C(\eta )}is any differentiable strictly concave function such thatC(η)=C(1−η){\displaystyle C(\eta )=C(1-\eta )}. Table-I shows the generated Bayes consistent loss functions for some example choices ofC(η){\displaystyle C(\eta )}andf−1(v){\displaystyle f^{-1}(v)}. Note that the Savage and Tangent loss are not convex. Such non-convex loss functions have been shown to be useful in dealing with outliers in classification.[7][8]For all loss functions generated from (2), the posterior probabilityp(y=1|x→){\displaystyle p(y=1|{\vec {x}})}can be found using the invertiblelink functionasp(y=1|x→)=η=f−1(v){\displaystyle p(y=1|{\vec {x}})=\eta =f^{-1}(v)}. Such loss functions where the posterior probability can be recovered using the invertible link are calledproper loss functions. The sole minimizer of the expected risk,fϕ∗{\displaystyle f_{\phi }^{*}}, associated with the above generated loss functions can be directly found from equation (1) and shown to be equal to the correspondingf(η){\displaystyle f(\eta )}. This holds even for the nonconvex loss functions, which means that gradient descent based algorithms such asgradient boostingcan be used to construct the minimizer. For proper loss functions, theloss margincan be defined asμϕ=−ϕ′(0)ϕ″(0){\displaystyle \mu _{\phi }=-{\frac {\phi '(0)}{\phi ''(0)}}}and shown to be directly related to the regularization properties of the classifier.[9]Specifically a loss function of larger margin increases regularization and produces better estimates of the posterior probability. 
For example, the loss margin can be increased for the logistic loss by introducing aγ{\displaystyle \gamma }parameter and writing the logistic loss as1γlog⁡(1+e−γv){\displaystyle {\frac {1}{\gamma }}\log(1+e^{-\gamma v})}where smaller0<γ<1{\displaystyle 0<\gamma <1}increases the margin of the loss. It is shown that this is directly equivalent to decreasing the learning rate ingradient boostingFm(x)=Fm−1(x)+γhm(x),{\displaystyle F_{m}(x)=F_{m-1}(x)+\gamma h_{m}(x),}where decreasingγ{\displaystyle \gamma }improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate ofγ{\displaystyle \gamma }is used, the correct formula for retrieving the posterior probability is nowη=f−1(γF(x)){\displaystyle \eta =f^{-1}(\gamma F(x))}. In conclusion, by choosing a loss function with larger margin (smallerγ{\displaystyle \gamma }) we increase regularization and improve our estimates of the posterior probability which in turn improves the ROC curve of the final classifier. While more commonly used in regression, the square loss function can be re-written as a functionϕ(yf(x→)){\displaystyle \phi (yf({\vec {x}}))}and utilized for classification. It can be generated using (2) and Table-I as follows The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regards to sample complexity) than for the logistic loss or hinge loss functions.[1]In addition, functions which yield high values off(x→){\displaystyle f({\vec {x}})}for somex∈X{\displaystyle x\in X}will perform poorly with the square loss function, since high values ofyf(x→){\displaystyle yf({\vec {x}})}will be penalized severely, regardless of whether the signs ofy{\displaystyle y}andf(x→){\displaystyle f({\vec {x}})}match. A benefit of the square loss function is that its structure lends itself to easy cross validation of regularization parameters. 
Specifically forTikhonov regularization, one can solve for the regularization parameter using leave-one-outcross-validationin the same time as it would take to solve a single problem.[10] The minimizer ofI[f]{\displaystyle I[f]}for the square loss function can be directly found from equation (1) as The logistic loss function can be generated using (2) and Table-I as follows The logistic loss is convex and grows linearly for negative values which make it less sensitive to outliers. The logistic loss is used in theLogitBoost algorithm. The minimizer ofI[f]{\displaystyle I[f]}for the logistic loss function can be directly found from equation (1) as This function is undefined whenp(1∣x)=1{\displaystyle p(1\mid x)=1}orp(1∣x)=0{\displaystyle p(1\mid x)=0}(tending toward ∞ and −∞ respectively), but predicts a smooth curve which grows whenp(1∣x){\displaystyle p(1\mid x)}increases and equals 0 whenp(1∣x)=0.5{\displaystyle p(1\mid x)=0.5}.[3] It's easy to check that the logistic loss and binarycross-entropyloss (Log loss) are in fact the same (up to a multiplicative constant1log⁡(2){\displaystyle {\frac {1}{\log(2)}}}). The cross-entropy loss is closely related to theKullback–Leibler divergencebetween the empirical distribution and the predicted distribution. The cross-entropy loss is ubiquitous in moderndeep neural networks. The exponential loss function can be generated using (2) and Table-I as follows The exponential loss is convex and grows exponentially for negative values which makes it more sensitive to outliers. The exponentially-weighted 0-1 loss is used in theAdaBoost algorithmgiving implicitly rise to the exponential loss. The minimizer ofI[f]{\displaystyle I[f]}for the exponential loss function can be directly found from equation (1) as The Savage loss[7]can be generated using (2) and Table-I as follows The Savage loss is quasi-convex and is bounded for large negative values which makes it less sensitive to outliers. 
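The closed-form minimizers quoted above are easy to verify numerically. Below, the conditional risk under the (natural-log) logistic loss is minimized over a grid of f values and compared with the standard closed form log(η/(1 − η)); the value η = 0.8 and the grid resolution are arbitrary illustrative choices:

```python
import math

def conditional_risk_logistic(f, eta):
    """eta * phi(f) + (1 - eta) * phi(-f) with logistic loss phi(v) = log(1 + e^-v)."""
    phi = lambda v: math.log(1.0 + math.exp(-v))
    return eta * phi(f) + (1.0 - eta) * phi(-f)

eta = 0.8
# Brute-force grid minimization of the conditional risk over f in [-5, 5].
grid = [i / 1000.0 for i in range(-5000, 5001)]
f_star = min(grid, key=lambda f: conditional_risk_logistic(f, eta))
closed_form = math.log(eta / (1.0 - eta))  # the log-odds of eta
```

Inverting f* = log(η/(1 − η)) gives η = 1/(1 + e^(−f*)), which is exactly the invertible link by which the posterior probability is recovered from a proper loss.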
The Savage loss has been used ingradient boostingand the SavageBoost algorithm. The minimizer ofI[f]{\displaystyle I[f]}for the Savage loss function can be directly found from equation (1) as The Tangent loss[11]can be generated using (2) and Table-I as follows The Tangent loss is quasi-convex and is bounded for large negative values which makes it less sensitive to outliers. Interestingly, the Tangent loss also assigns a bounded penalty to data points that have been classified "too correctly". This can help prevent over-training on the data set. The Tangent loss has been used ingradient boosting, the TangentBoost algorithm and Alternating Decision Forests.[12] The minimizer ofI[f]{\displaystyle I[f]}for the Tangent loss function can be directly found from equation (1) as The hinge loss function is defined withϕ(υ)=max(0,1−υ)=[1−υ]+{\displaystyle \phi (\upsilon )=\max(0,1-\upsilon )=[1-\upsilon ]_{+}}, where[a]+=max(0,a){\displaystyle [a]_{+}=\max(0,a)}is thepositive partfunction. The hinge loss provides a relatively tight, convex upper bound on the 0–1indicator function. Specifically, the hinge loss equals the 0–1indicator functionwhensgn⁡(f(x→))=y{\displaystyle \operatorname {sgn} (f({\vec {x}}))=y}and|yf(x→)|≥1{\displaystyle |yf({\vec {x}})|\geq 1}. In addition, the empirical risk minimization of this loss is equivalent to the classical formulation forsupport vector machines(SVMs). Correctly classified points lying outside the margin boundaries of the support vectors are not penalized, whereas points within the margin boundaries or on the wrong side of the hyperplane are penalized in a linear fashion compared to their distance from the correct boundary.[4] While the hinge loss function is both convex and continuous, it is not smooth (is not differentiable) atyf(x→)=1{\displaystyle yf({\vec {x}})=1}. 
Consequently, the hinge loss function cannot be used withgradient descentmethods orstochastic gradient descentmethods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient atyf(x→)=1{\displaystyle yf({\vec {x}})=1}, which allows for the utilization ofsubgradient descent methods.[4]SVMs utilizing the hinge loss function can also be solved usingquadratic programming. The minimizer ofI[f]{\displaystyle I[f]}for the hinge loss function is whenp(1∣x)≠0.5{\displaystyle p(1\mid x)\neq 0.5}, which matches that of the 0–1 indicator function. This conclusion makes the hinge loss quite attractive, as bounds can be placed on the difference between expected risk and the sign of hinge loss function.[1]The Hinge loss cannot be derived from (2) sincefHinge∗{\displaystyle f_{\text{Hinge}}^{*}}is not invertible. The generalized smooth hinge loss function with parameterα{\displaystyle \alpha }is defined as where It is monotonically increasing and reaches 0 whenz=1{\displaystyle z=1}.
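The hinge-loss minimizer can likewise be checked on a grid, taking the standard result that the conditional risk is minimized at f* = sign(2η − 1); the value η = 0.7 is an arbitrary example:

```python
def hinge(v):
    return max(0.0, 1.0 - v)

def conditional_risk_hinge(f, eta):
    """Conditional risk eta * hinge(f) + (1 - eta) * hinge(-f)."""
    return eta * hinge(f) + (1.0 - eta) * hinge(-f)

eta = 0.7
grid = [i / 100.0 for i in range(-300, 301)]
f_star = min(grid, key=lambda f: conditional_risk_hinge(f, eta))
# For eta > 0.5 the risk is minimized at f = +1, i.e. sign(2*eta - 1).
```

That the minimizer takes only the values ±1 is also why the hinge loss is not a proper loss: f* cannot be inverted to recover η itself, only its sign.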
https://en.wikipedia.org/wiki/Loss_functions_for_classification
Discounted maximum loss, also known as worst-case risk measure, is the present value of the worst-case scenario for a financial portfolio. In investment, in order to protect the value of an investment, one must consider all possible alternatives to the initial investment. How one does this comes down to personal preference; however, the worst possible alternative is generally considered to be the benchmark against which all other options are measured. The present value of this worst possible outcome is the discounted maximum loss. Given a finite state space S, let X be a portfolio with profit X_s for s ∈ S. If X_{1:S}, ..., X_{S:S} is the order statistic, the discounted maximum loss is simply −δX_{1:S}, where δ is the discount factor. Given a general probability space (Ω, F, P), let X be a portfolio with discounted return δX(ω) for state ω ∈ Ω. Then the discounted maximum loss can be written as −ess inf δX = −δ · sup{x ∈ R : P(X ≥ x) = 1}, where ess inf denotes the essential infimum.[1] As an example, assume that a portfolio is currently worth 100, and the discount factor is 0.8 (corresponding to an interest rate of 25%). If the worst-case scenario reduces the portfolio's value from 100 to 20, the maximum loss is 80, and the discounted maximum loss is simply 80 × 0.8 = 64.
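Under the finite-state definition above, the computation is a one-liner; the scenario values below are made up so as to reproduce the article's worked example:

```python
def discounted_max_loss(current_value, scenario_values, discount):
    """Worst-case loss over a finite state space, discounted to present value.
    scenario_values are the portfolio's possible future values."""
    worst = min(scenario_values)
    return (current_value - worst) * discount

# Portfolio worth 100 today; hypothetical scenarios include a crash to 20.
dml = discounted_max_loss(100.0, [120.0, 80.0, 20.0, 95.0], 0.8)
```

Only the single worst scenario matters; the probabilities of the states play no role, which is what makes this the most conservative coherent risk measure.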
https://en.wikipedia.org/wiki/Discounted_maximum_loss
Statistical risk is a quantification of a situation's risk using statistical methods. These methods can be used to estimate a probability distribution for the outcome of a specific variable, or at least one or more key parameters of that distribution, and from that estimated distribution a risk function can be used to obtain a single non-negative number representing a particular conception of the risk of the situation. Statistical risk is taken account of in a variety of contexts including finance and economics, and there are many risk functions that can be used depending on the context. One measure of the statistical risk of a continuous variable, such as the return on an investment, is simply the estimated variance of the variable, or equivalently the square root of the variance, called the standard deviation. Another measure in finance, one which views upside risk as unimportant compared to downside risk, is the downside beta. In the context of a binary variable, a simple statistical measure of risk is simply the probability that a variable will take on the lower of two values. There is a sense in which one risk A can be said to be unambiguously greater than another risk B (that is, greater for any reasonable risk function): namely, if A is a mean-preserving spread of B. This means that the probability density function of A can be formed, roughly speaking, by "spreading out" that of B. However, this is only a partial ordering: most pairs of risks cannot be unambiguously ranked in this way, and different risk functions applied to the estimated distributions of two such unordered risky variables will give different answers as to which is riskier. In the context of statistical estimation itself, the risk involved in estimating a particular parameter is a measure of the degree to which the estimate is likely to be inaccurate.
https://en.wikipedia.org/wiki/Statistical_risk
Crossoverinevolutionary algorithmsandevolutionary computation, also calledrecombination, is agenetic operatorused to combine thegenetic informationof two parents to generate new offspring. It is one way tostochasticallygenerate newsolutionsfrom an existing population, and is analogous to thecrossoverthat happens duringsexual reproductioninbiology. New solutions can also be generated bycloningan existing solution, which is analogous toasexual reproduction. Newly generated solutions may bemutatedbefore being added to the population. The aim of recombination is to transfer good characteristics from two different parents to one child. Different algorithms in evolutionary computation may use different data structures to store genetic information, and eachgenetic representationcan be recombined with different crossover operators. Typicaldata structuresthat can be recombined with crossover arebit arrays, vectors of real numbers, ortrees. The list of operators presented below is by no means complete and serves mainly as an exemplary illustration of this dyadicgenetic operatortype. More operators and more details can be found in the literature.[1][2][3][4][5] Traditional genetic algorithms store genetic information in achromosomerepresented by abit array. Crossover methods for bit arrays are popular and an illustrative example ofgenetic recombination. A point on both parents' chromosomes is picked randomly, and designated a 'crossover point'. Bits to the right of that point are swapped between the two parent chromosomes. This results in two offspring, each carrying some genetic information from both parents. In two-point crossover, two crossover points are picked randomly from the parent chromosomes. The bits in between the two points are swapped between the parent organisms. Two-point crossover is equivalent to performing two single-point crossovers with different crossover points. 
This strategy can be generalized to k-point crossover for any positive integer k, picking k crossover points. In uniform crossover, typically, each bit is chosen from either parent with equal probability.[6] Other mixing ratios are sometimes used, resulting in offspring which inherit more genetic information from one parent than the other. In uniform crossover the chromosome is not divided into segments; rather, each gene is treated separately, essentially flipping a coin for each gene to decide which parent it is inherited from. For the crossover operators presented above and for most other crossover operators for bit strings, it holds that they can also be applied accordingly to integer or real-valued genomes whose genes each consist of an integer or real-valued number. Instead of individual bits, integer or real-valued numbers are then simply copied into the child genome. The offspring lie on the remaining corners of the hyperbody spanned by the two parents P1 = (1.5, 6, 8) and P2 = (7, 2, 1), as exemplified in the accompanying image for the three-dimensional case. If the rules of the uniform crossover for bit strings are applied during the generation of the offspring, this is also called discrete recombination.[7] In intermediate recombination, the allele values of the child genome a_i are generated by mixing the alleles of the two parent genomes a_{i,P1} and a_{i,P2}.[7][8] The choice of the interval [−d, 1+d] means that, besides the interior of the hyperbody spanned by the allele values of the parent genes, a certain neighbourhood of this range is also available for the values of the offspring.
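A minimal sketch of intermediate recombination, assuming the standard mixing rule a_i = a_{i,P1} + α_i (a_{i,P2} − a_{i,P1}) with a fresh α_i drawn uniformly from [−d, 1+d] per gene (the exact formula is not shown in the text above, so this is our reading of it):

```python
import random

def intermediate_recombination(p1, p2, d=0.25):
    """Child allele a_i = a1 + alpha_i * (a2 - a1), alpha_i uniform in [-d, 1+d].

    A fresh alpha_i is drawn per gene; d > 0 widens the sampling region beyond
    the hyperbody spanned by the parents (d = 0.25 is the recommended value).
    """
    child = []
    for a1, a2 in zip(p1, p2):
        alpha = random.uniform(-d, 1 + d)
        child.append(a1 + alpha * (a2 - a1))
    return child
```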
A value of 0.25 is recommended for d to counteract the tendency to reduce the allele values that otherwise exists at d = 0.[9] The adjacent figure shows, for the two-dimensional case, the range of possible new alleles of the two exemplary parents P1 = (3, 6) and P2 = (9, 2) in intermediate recombination. The offspring of discrete recombination, C1 and C2, are also plotted. Intermediate recombination satisfies the arithmetic calculation of the allele values of the child genome required by virtual alphabet theory.[10][11] Discrete and intermediate recombination are used as a standard in the evolution strategy.[12] For combinatorial tasks, crossover operators are usually specifically designed for genomes that are themselves permutations of a set. The underlying set is usually a subset of ℕ or ℕ₀. If 1- or n-point or uniform crossover for integer genomes is used for such genomes, a child genome may contain some values twice while others are missing. This can be remedied by genetic repair, e.g. by replacing the redundant genes in positional fidelity with missing ones from the other child genome. In order to avoid the generation of invalid offspring, special crossover operators for permutations have been developed[13] which fulfill the basic requirements of such operators for permutations, namely that all elements of the initial permutation are also present in the new one and only the order is changed. A distinction can be made between combinatorial tasks where all sequences are admissible and those with constraints in the form of inadmissible partial sequences. A well-known representative of the first task type is the traveling salesman problem (TSP), where the goal is to visit a set of cities exactly once on the shortest tour.
An example of the constrained task type is the scheduling of multiple workflows. Workflows involve sequence constraints on some of the individual work steps. For example, a thread cannot be cut until the corresponding hole has been drilled in a workpiece. Such problems are also called order-based permutations. In the following, two crossover operators are presented as examples: the partially mapped crossover (PMX), motivated by the TSP, and the order crossover (OX1), designed for order-based permutations. A second offspring can be produced in each case by exchanging the parent chromosomes. The PMX operator was designed as a recombination operator for TSP-like problems.[14][15] The explanation of the procedure is illustrated by an example. The order crossover goes back to Davis[1] in its original form and is presented here in a slightly generalized version with more than two crossover points. It transfers information about the relative order from the second parent to the offspring. First, the number and position of the crossover points are determined randomly. The resulting gene sequences are then processed as described below: P_{in order from P1} = (D, J, C, E, I). Among other things, order crossover is well suited for scheduling multiple workflows when used in conjunction with 1- and n-point crossover.[16] Over time, a large number of crossover operators for permutations have been proposed, so the following list is only a small selection. For more information, the reader is referred to the literature.[1][5][15][13] The usual approach to solving TSP-like problems by genetic or, more generally, evolutionary algorithms, presented earlier, is either to repair illegal descendants or to adjust the operators appropriately so that illegal offspring do not arise in the first place. Alternatively, Riazi suggests the use of a double chromosome representation, which avoids illegal offspring.[23]
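As a concrete illustration of a permutation-preserving operator, the basic two-point form of the order crossover can be sketched as follows (illustrative Python, not the generalized multi-point version described above):

```python
import random

def order_crossover(p1, p2):
    """OX1 (two-point form): copy a random slice from p1, then fill the
    remaining positions with the missing genes in the order they occur in p2."""
    n = len(p1)
    a, b = sorted(random.sample(range(n + 1), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                       # slice inherited from parent 1
    kept = set(p1[a:b])
    fill = iter(g for g in p2 if g not in kept)  # relative order from parent 2
    return [g if g is not None else next(fill) for g in child]
```

By construction every gene of the parents appears exactly once in the child, satisfying the basic requirement for permutation operators stated above.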
https://en.wikipedia.org/wiki/Crossover_(genetic_algorithm)
Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain). A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain). Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation.[1] Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain's feature space differs from that of the source domain(s).[2] Domain adaptation setups are classified in two different ways: according to the distribution shift between the domains, and according to the available data from the target domain. Common distribution shifts are classified as follows:[3][4] Domain adaptation problems typically assume that some data from the target domain is available during training. Problems can be classified according to the type of this available data:[5][6] Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning sample S = {(x_i, y_i) ∈ (X × Y)}_{i=1}^m.
Usually in supervised learning (without domain adaptation), we suppose that the examples (x_i, y_i) ∈ S are drawn i.i.d. from a distribution D_S with support X × Y (unknown and fixed). The objective is then to learn h (from S) such that it commits the least error possible when labelling new examples drawn from D_S. The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D_S and D_T on X × Y[citation needed]. The domain adaptation task then consists of transferring knowledge from the source domain D_S to the target one D_T. The goal is then to learn h (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain D_T[citation needed]. The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain? One family of methods reweights the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).[7][8] Another method for adapting consists in iteratively "auto-labeling" the target examples.[9] The principle is simple: a model is learned from the labeled examples, it is used to label the most confident target examples automatically, and the process is repeated with the enlarged labeled set. Note that there exist other iterative approaches, but they usually need target labeled examples.[10][11] A third family of methods seeks to find or construct a common representation space for the two domains, aiming at a space in which the domains are close to each other while keeping good performance on the source labeling task.
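The auto-labeling idea can be sketched with a deliberately simple nearest-centroid learner (an illustrative numpy sketch assuming at least two classes; the function name and the margin-based confidence rule are our choices, not from the cited works):

```python
import numpy as np

def self_label_adapt(Xs, ys, Xt, rounds=3, per_round=5):
    """Iterative "auto-labeling": train on labeled source data, repeatedly
    pseudo-label the most confident target points, and retrain.

    Confidence here is the margin between the two closest class centroids
    (an illustrative choice); returns predicted labels for all of Xt.
    """
    X, y = np.asarray(Xs, float), np.asarray(ys)
    Xt = np.asarray(Xt, float)
    remaining = np.arange(len(Xt))

    def centroids(X, y):
        cls = np.unique(y)
        return cls, np.stack([X[y == c].mean(axis=0) for c in cls])

    for _ in range(rounds):
        if len(remaining) == 0:
            break
        cls, cent = centroids(X, y)
        d = np.linalg.norm(Xt[remaining][:, None, :] - cent[None], axis=2)
        pred = cls[d.argmin(axis=1)]
        ds = np.sort(d, axis=1)
        margin = ds[:, 1] - ds[:, 0]              # confidence of each prediction
        take = np.argsort(-margin)[:per_round]    # most confident target points
        X = np.vstack([X, Xt[remaining[take]]])   # adopt their pseudo-labels
        y = np.concatenate([y, pred[take]])
        remaining = np.delete(remaining, take)

    cls, cent = centroids(X, y)
    d = np.linalg.norm(Xt[:, None, :] - cent[None], axis=2)
    return cls[d.argmin(axis=1)]
```

In practice the nearest-centroid learner would be replaced by whatever classifier the application uses; the loop structure is the point.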
This can be achieved through the use of adversarial machine learning techniques, where feature representations from samples in different domains are encouraged to be indistinguishable.[12][13] Another approach is to construct a Bayesian hierarchical model p(n), which is essentially a factorization model for counts n, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.[14] Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades.
https://en.wikipedia.org/wiki/Domain_adaptation
General game playing (GGP) is the design of artificial intelligence programs to be able to play more than one game successfully.[1][2][3] For many games like chess, computers are programmed to play using a specially designed algorithm which cannot be transferred to another context: a chess-playing computer program cannot play checkers. General game playing is considered a necessary milestone on the way to artificial general intelligence.[4] General video game playing (GVGP) is the concept of GGP adjusted to the purpose of playing video games. For video games, game rules have to be either learnt over multiple iterations by artificial players like TD-Gammon,[5] or predefined manually in a domain-specific language and sent in advance to artificial players,[6][7] as in traditional GGP. Starting in 2013, significant progress was made following the deep reinforcement learning approach, including the development of programs that can learn to play Atari 2600 games[8][5][9][10][11] as well as a program that can learn to play Nintendo Entertainment System games.[12][13][14] The first commercial usage of general game playing technology was Zillions of Games in 1998. General game playing was also proposed for trading agents in supply chain management for price negotiation in online auctions from 2003 onwards.[15][16][17][18] In 1992, Barney Pell defined the concept of Meta-Game Playing and developed the "MetaGame" system. This was the first program to automatically generate game rules of chess-like games, and one of the earliest programs to use automated game generation. Pell then developed the system Metagamer.[19] This system was able to play a number of chess-like games, given game rules defined in a special language called Game Description Language (GDL), without any human interaction once the games were generated.[20] In 1998, the commercial system Zillions of Games was developed by Jeff Mallett and Mark Lefler.
The system used a LISP-like language to define the game rules. Zillions of Games derived the evaluation function automatically from the game rules, based on piece mobility, board structure and game goals. It also employed the usual algorithms found in computer chess systems: alpha–beta pruning with move ordering, transposition tables, etc.[21] The package was extended in 2007 by the addition of the Axiom plug-in, an alternate metagame engine that incorporates a complete Forth-based programming language. In 1998, z-Tree was developed by Urs Fischbacher.[22] z-Tree is the first and the most cited software tool for experimental economics. z-Tree allows the definition of game rules in the z-Tree language for game-theoretic experiments with human subjects. It also allows the definition of computer players, which participate in a play with human subjects.[23] In 2005, the Stanford project General Game Playing was established.[3] In 2012, the development of PyVGDL started.[24] General Game Playing is a project of the Stanford Logic Group of Stanford University, California, which aims to create a platform for general game playing. It is the most well-known effort at standardizing GGP AI, and generally seen as the standard for GGP systems. The games are defined by sets of rules represented in the Game Description Language. In order to play the games, players interact with a game hosting server[25][26] that monitors moves for legality and keeps players informed of state changes. Since 2005, there have been annual General Game Playing competitions at the AAAI Conference. The competition judges competitor AIs' abilities to play a variety of different games by recording their performance on each individual game. In the first stage of the competition, entrants are judged on their ability to perform legal moves, gain the upper hand, and complete games faster. In the following runoff round, the AIs face off against each other in increasingly complex games.
The AI that wins the most games at this stage wins the competition, and until 2013 its creator used to win a $10,000 prize.[19] So far, the following programs were victorious:[27] There are other general game playing systems which use their own languages for defining the game rules. Other general game playing software includes: GVGP could potentially be used to create real video game AI automatically, as well as "to test game environments, including those created automatically using procedural content generation and to find potential loopholes in the gameplay that a human player could exploit".[7] GVGP has also been used to generate game rules and estimate a game's quality based on Relative Algorithm Performance Profiles (RAPP), which compare the skill differentiation that a game allows between good AI and bad AI.[42] The General Video Game AI Competition (GVGAI) has been running since 2014. In this competition, two-dimensional video games similar to (and sometimes based on) 1980s-era arcade and console games are used instead of the board games used in the GGP competition. It has offered a way for researchers and practitioners to test and compare their best general video game playing algorithms. The competition has an associated software framework including a large number of games written in the Video Game Description Language (VGDL), which should not be confused with GDL; it is a coding language using simple semantics and commands that can easily be parsed. One example of VGDL is PyVGDL, developed in 2013.[6][24] The games used in GVGP are, for now, often 2-dimensional arcade games, as they are the simplest and easiest to quantify.[43] To simplify the process of creating an AI that can interpret video games, games for this purpose are written in VGDL manually.[clarification needed] VGDL can be used to describe a game specifically for procedural generation of levels, using Answer Set Programming (ASP) and an Evolutionary Algorithm (EA).
GVGP can then be used to test the validity of procedural levels, as well as the difficulty or quality of levels based on how an agent performed.[44] Since GGP AI must be designed to play multiple games, its design cannot rely on algorithms created specifically for certain games. Instead, the AI must be designed using algorithms whose methods can be applied to a wide range of games. The AI must also be an ongoing process that can adapt to its current state rather than the output of previous states. For this reason, open loop techniques are often most effective.[45] A popular method for developing GGP AI is the Monte Carlo tree search (MCTS) algorithm.[46] Often used together with the UCT method (Upper Confidence bounds applied to Trees), variations of MCTS have been proposed to better play certain games, as well as to make it compatible with video game playing.[47][48][49] Another variation of tree-search algorithms used is the Directed Breadth-first Search (DBS),[50] in which a child node to the current state is created for each available action, and each child is visited in order of highest average reward until either the game ends or time runs out.[51] In each tree-search method, the AI simulates potential actions and ranks each based on the average highest reward of each path, in terms of points earned.[46][51] In order to interact with games, algorithms must operate under the assumption that games all share common characteristics. In the book Half-Real: Video Games Between Real Worlds and Fictional Worlds, Jesper Juul gives the following definition of games: games are based on rules, they have variable outcomes, different outcomes give different values, player effort influences outcomes, the player is attached to the outcomes, and the game has negotiable consequences.[52] Using these assumptions, game-playing AI can be created by quantifying the player input, the game outcomes, and how the various rules apply, and using algorithms to compute the most favorable path.[43]
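The UCT selection rule at the heart of MCTS scores each child by its average reward plus an exploration bonus. A minimal sketch (the standard UCB1 formula; function names are ours):

```python
import math

def uct_score(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 applied to trees: exploitation term plus exploration bonus."""
    if visits == 0:
        return math.inf                 # always expand unvisited children first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """Pick the (reward_sum, visits) child with the highest UCT score."""
    return max(children, key=lambda rv: uct_score(rv[0], rv[1], parent_visits))
```

The constant c trades off exploration against exploitation; √2 is the textbook default.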
https://en.wikipedia.org/wiki/General_game_playing
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.[1][2][3] Inherently, multi-task learning is a multi-objective optimization problem having trade-offs between different tasks.[4] Early versions of MTL were called "hints".[5][6] In a widely cited 1997 paper, Rich Caruana gave the following characterization: "Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better."[3] In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but not so for Russian speakers. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer.
Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance.[citation needed] Further examples of settings for MTL include multiclass classification and multi-label classification.[7] Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly undersampled.[8] However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.[8][9] The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge. Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality.
A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.[10] Task relatedness can be imposed a priori or learned from the data.[7][11] Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly.[8][12] For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.[8] One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.[9] Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep convolutional neural network GoogLeNet,[13] an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks.
For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.[14] Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL).[15] Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from the previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents. Multi-task optimization focuses on solving multiple optimization tasks simultaneously.[16][17] The paradigm has been inspired by the well-established concepts of transfer learning[18] and multi-task learning in predictive analytics.[19] The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,[20] the search progress on one task can be transferred to substantially accelerate the search on another. The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks.
In practice, one may even intentionally solve a more difficult task in order to incidentally solve several smaller problems.[21] There is a direct relationship between multitask optimization and multi-objective optimization.[22] In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models.[23] Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representations, i.e., the gradients of different tasks point in opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics. There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on game theory.[16] Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[24] The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem.[25] The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in the respective search spaces. Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks.
By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks crossover.[17][26] Recently, modes of knowledge transfer that are different from direct solution crossover have been explored.[27][28] Game-theoretic approaches to multi-task optimization propose to view the optimization problem as a game, where each task is a player. All players compete through the reward matrix of the game and try to reach a solution that satisfies all players (all tasks). This view provides insight into how to build efficient algorithms based on gradient descent optimization (GD), which is particularly important for training deep neural networks.[29] In GD for MTL, the problem is that each task provides its own loss, and it is not clear how to combine all losses and create a single unified gradient, leading to several different aggregation strategies.[30][31][32] This aggregation problem can be solved by defining a game matrix where the reward of each player is the agreement of its own gradient with the common gradient, and then setting the common gradient to be the Nash cooperative bargaining solution[33] of that system. Algorithms for multi-task optimization span a wide array of real-world applications.
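One family of aggregation heuristics resolves gradient conflicts by projection before averaging. A minimal numpy sketch in the spirit of the "gradient surgery" (PCGrad) idea, written here for illustration only:

```python
import numpy as np

def project_conflicting(grads):
    """If two task gradients conflict (negative dot product), remove from each
    the component along the other, then average the adjusted gradients."""
    out = []
    for i, g in enumerate(grads):
        g = np.asarray(g, float).copy()
        for j, h in enumerate(grads):
            h = np.asarray(h, float)
            if i != j:
                dot = g @ h
                if dot < 0:                   # conflicting directions
                    g -= dot / (h @ h) * h    # project out the conflict
        out.append(g)
    return np.mean(out, axis=0)
```

The resulting joint update direction has a non-negative inner product with every original task gradient, so no task's loss is increased to first order.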
Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.[26] In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[34][35] In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning.[36][37] Applications have also been reported in cloud computing,[38] with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.[17][39] Recent work has additionally shown applications in chemistry.[40] In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.[41][42] The MTL problem can be cast within the context of RKHSvv, a vector-valued reproducing kernel Hilbert space: a complete inner product space of vector-valued functions equipped with a reproducing kernel. In particular, recent focus has been on cases where the task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.[7] Suppose the training data set is S_t = {(x_i^t, y_i^t)}_{i=1}^{n_t}, with x_i^t ∈ X, y_i^t ∈ Y, where t indexes task and t ∈ 1, ..., T. Let n = Σ_{t=1}^T n_t. In this setting there is a consistent input and output space and the same loss function L : ℝ × ℝ → ℝ₊ for each task.
This results in the regularized machine learning problem (1), where H is a vector-valued reproducing kernel Hilbert space with functions f : X → Y^T having components f_t : X → Y. The reproducing kernel for the space H of functions f : X → ℝ^T is a symmetric matrix-valued function Γ : X × X → ℝ^{T×T}, such that Γ(·, x)c ∈ H and the reproducing property holds. The reproducing kernel gives rise to a representer theorem showing that any solution to equation (1) has the form f(x) = Σ_{i=1}^n Γ(x, x_i) c_i. The form of the kernel Γ induces both the representation of the feature space and the structure of the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space X and on the tasks {1, ..., T}. In this case the kernel relating scalar components f_t and f_s is given by γ((x_i, t), (x_j, s)) = k(x_i, x_j) k_T(s, t) = k(x_i, x_j) A_{s,t}. For vector-valued functions f ∈ H we can write Γ(x_i, x_j) = k(x_i, x_j) A, where k is a scalar reproducing kernel and A is a symmetric positive semi-definite T × T matrix. Henceforth denote S_+^T = {PSD matrices} ⊂ ℝ^{T×T}. This factorization property, separability, implies that the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A.
Methods for non-separable kernels Γ are a current field of research. For the separable case, the representer theorem reduces to f(x) = Σ_{i=1}^n k(x, x_i) A c_i. The model output on the training data is then KCA, where K is the n × n empirical kernel matrix with entries K_{i,j} = k(x_i, x_j), and C is the n × T matrix of rows c_i. With the separable kernel, equation (1) can be rewritten as problem P, where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if Y_i^t is a missing observation.) The second term in P can be derived as shown below. There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, and through an output mapping. Regularizer—With the separable kernel, it can be shown (below) that ||f||_H² = Σ_{s,t=1}^T A†_{t,s} ⟨f_s, f_t⟩_{H_k}, where A†_{t,s} is the (t, s) element of the pseudoinverse of A, H_k is the RKHS based on the scalar kernel k, and f_t(x) = Σ_{i=1}^n k(x, x_i) A_t^⊤ c_i. This formulation shows that A†_{t,s} controls the weight of the penalty associated with ⟨f_s, f_t⟩_{H_k}. (Note that ⟨f_s, f_t⟩_{H_k} arises from ||f_t||²_{H_k} = ⟨f_t, f_t⟩_{H_k}.)
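Under a separable kernel, both the training-set output KCA and predictions at new points are plain matrix products. A small numpy sketch (the RBF kernel and all names are our choices for illustration):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Scalar kernel matrix with entries k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def mtl_predict(Xtrain, C, A, Xnew, gamma=1.0):
    """Separable-kernel model: f(x) = sum_i k(x, x_i) A c_i, i.e. K C A.

    C is n x T (rows c_i); A is the T x T task-structure matrix; the result
    has one column of predictions per task."""
    K = rbf_kernel(np.asarray(Xnew, float), np.asarray(Xtrain, float), gamma)
    return K @ np.asarray(C) @ np.asarray(A)
```

Setting A to the identity decouples the tasks entirely, which makes the role of A as the sole carrier of task structure easy to check.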
‖f‖H2=⟨∑i=1nγ((xi,ti),⋅)citi,∑j=1nγ((xj,tj),⋅)cjtj⟩H=∑i,j=1nciticjtjγ((xi,ti),(xj,tj))=∑i,j=1n∑s,t=1Tcitcjsk(xi,xj)As,t=∑i,j=1nk(xi,xj)⟨ci,Acj⟩RT=∑i,j=1nk(xi,xj)⟨ci,AA†Acj⟩RT=∑i,j=1nk(xi,xj)⟨Aci,A†Acj⟩RT=∑i,j=1n∑s,t=1T(Aci)t(Acj)sk(xi,xj)As,t†=∑s,t=1TAs,t†⟨∑i=1nk(xi,⋅)(Aci)t,∑j=1nk(xj,⋅)(Acj)s⟩Hk=∑s,t=1TAs,t†⟨ft,fs⟩Hk{\displaystyle {\begin{aligned}\|f\|_{\mathcal {H}}^{2}&=\left\langle \sum _{i=1}^{n}\gamma ((x_{i},t_{i}),\cdot )c_{i}^{t_{i}},\sum _{j=1}^{n}\gamma ((x_{j},t_{j}),\cdot )c_{j}^{t_{j}}\right\rangle _{\mathcal {H}}\\&=\sum _{i,j=1}^{n}c_{i}^{t_{i}}c_{j}^{t_{j}}\gamma ((x_{i},t_{i}),(x_{j},t_{j}))\\&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}c_{i}^{t}c_{j}^{s}k(x_{i},x_{j})A_{s,t}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},AA^{\dagger }Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle Ac_{i},A^{\dagger }Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}(Ac_{i})^{t}(Ac_{j})^{s}k(x_{i},x_{j})A_{s,t}^{\dagger }\\&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\langle \sum _{i=1}^{n}k(x_{i},\cdot )(Ac_{i})^{t},\sum _{j=1}^{n}k(x_{j},\cdot )(Ac_{j})^{s}\rangle _{{\mathcal {H}}_{k}}\\&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\langle f_{t},f_{s}\rangle _{{\mathcal {H}}_{k}}\end{aligned}}} Output metric—an alternative output metric onYT{\displaystyle {\mathcal {Y}}^{T}}can be induced by the inner product⟨y1,y2⟩Θ=⟨y1,Θy2⟩RT{\displaystyle \langle y_{1},y_{2}\rangle _{\Theta }=\langle y_{1},\Theta y_{2}\rangle _{\mathbb {R} ^{T}}}. With the squared loss there is an equivalence between the separable kernelsk(⋅,⋅)IT{\displaystyle k(\cdot ,\cdot )I_{T}}under the alternative metric, andk(⋅,⋅)Θ{\displaystyle k(\cdot ,\cdot )\Theta }, under the canonical metric. 
Output mapping—Outputs can be mapped asL:YT→Y~{\displaystyle L:{\mathcal {Y}}^{T}\rightarrow {\mathcal {\tilde {Y}}}}to a higher dimensional space to encode complex structures such as trees, graphs and strings. For linear mapsL, with appropriate choice of separable kernel, it can be shown thatA=L⊤L{\displaystyle A=L^{\top }L}. Via the regularizer formulation, one can represent a variety of task structures easily. Learning problemPcan be generalized to admit learning task matrix A as follows: Choice ofF:S+T→R+{\displaystyle F:S_{+}^{T}\rightarrow \mathbb {R} _{+}}must be designed to learn matricesAof a given type. See "Special cases" below. Restricting to the case ofconvexlosses andcoercivepenalties Cilibertoet al.have shown that althoughQis not convex jointly inCandA,a related problem is jointly convex. Specifically on the convex setC={(C,A)∈Rn×T×S+T|Range(C⊤KC)⊆Range(A)}{\displaystyle {\mathcal {C}}=\{(C,A)\in \mathbb {R} ^{n\times T}\times S_{+}^{T}|Range(C^{\top }KC)\subseteq Range(A)\}}, the equivalent problem is convex with the same minimum value. And if(CR,AR){\displaystyle (C_{R},A_{R})}is a minimizer forRthen(CRAR†,AR){\displaystyle (C_{R}A_{R}^{\dagger },A_{R})}is a minimizer forQ. Rmay be solved by a barrier method on a closed set by introducing the following perturbation: The perturbation via the barrierδ2tr(A†){\displaystyle \delta ^{2}tr(A^{\dagger })}forces the objective functions to be equal to+∞{\displaystyle +\infty }on the boundary ofRn×T×S+T{\displaystyle R^{n\times T}\times S_{+}^{T}}. Scan be solved with a block coordinate descent method, alternating inCandA.This results in a sequence of minimizers(Cm,Am){\displaystyle (C_{m},A_{m})}inSthat converges to the solution inRasδm→0{\displaystyle \delta _{m}\rightarrow 0}, and hence gives the solution toQ. Spectral penalties- Dinnuzoet al[43]suggested settingFas the Frobenius normtr(A⊤A){\displaystyle {\sqrt {tr(A^{\top }A)}}}. 
They optimizedQdirectly using block coordinate descent, not accounting for difficulties at the boundary ofRn×T×S+T{\displaystyle \mathbb {R} ^{n\times T}\times S_{+}^{T}}. Clustered tasks learning- Jacobet al[44]suggested to learnAin the setting whereTtasks are organized inRdisjoint clusters. In this case letE∈{0,1}T×R{\displaystyle E\in \{0,1\}^{T\times R}}be the matrix withEt,r=I(taskt∈groupr){\displaystyle E_{t,r}=\mathbb {I} ({\text{task }}t\in {\text{group }}r)}. SettingM=I−E†ET{\displaystyle M=I-E^{\dagger }E^{T}}, andU=1T11⊤{\displaystyle U={\frac {1}{T}}\mathbf {11} ^{\top }}, the task matrixA†{\displaystyle A^{\dagger }}can be parameterized as a function ofM{\displaystyle M}:A†(M)=ϵMU+ϵB(M−U)+ϵ(I−M){\displaystyle A^{\dagger }(M)=\epsilon _{M}U+\epsilon _{B}(M-U)+\epsilon (I-M)}, with terms that penalize the average, between clusters variance and within clusters variance respectively of the task predictions. M is not convex, but there is a convex relaxationSc={M∈S+T:I−M∈S+T∧tr(M)=r}{\displaystyle {\mathcal {S}}_{c}=\{M\in S_{+}^{T}:I-M\in S_{+}^{T}\land tr(M)=r\}}. In this formulation,F(A)=I(A(M)∈{A:M∈SC}){\displaystyle F(A)=\mathbb {I} (A(M)\in \{A:M\in {\mathcal {S}}_{C}\})}. Non-convex penalties- Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases. Non-separable kernels- Separable kernels are limited, in particular they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels. 
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR)[45]implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning,[46][47]Multi-Task Learning with Joint Feature Selection,[48]Robust Multi-Task Feature Learning,[49]Trace-Norm Regularized Multi-Task Learning,[50]Alternating Structural Optimization,[51][52]Incoherent Low-Rank and Sparse Learning,[53]Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning,[54][55]Multi-Task Learning with Graph Structures.
https://en.wikipedia.org/wiki/Multi-task_learning
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.[1][2][3] Inherently, multi-task learning is a multi-objective optimization problem having trade-offs between different tasks.[4] Early versions of MTL were called "hints".[5][6] In a widely cited 1997 paper, Rich Caruana gave the following characterization: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.[3] In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer.
Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance.[citation needed] Further examples of settings for MTL include multiclass classification and multi-label classification.[7] Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly undersampled.[8] However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.[8][9] The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge: Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality.
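The idea that a related task can act as a regularizer is easy to see in the classic mean-regularized formulation (one of the methods listed in the software section below). The following is a minimal illustrative NumPy sketch, not any library's implementation: each task's least-squares weight vector is shrunk toward the across-task mean, so the tasks borrow strength from each other.

```python
import numpy as np

def mean_regularized_mtl(Xs, ys, lam=1.0, iters=50):
    """Alternately solve each ridge-like task problem with the current
    mean as the shrinkage target, then refresh the mean.

    Approximately minimizes  sum_t ||X_t w_t - y_t||^2 + lam * ||w_t - w_bar||^2.
    """
    d = Xs[0].shape[1]
    ws = [np.zeros(d) for _ in Xs]
    for _ in range(iters):
        w_bar = np.mean(ws, axis=0)
        ws = [
            np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_bar)
            for X, y in zip(Xs, ys)
        ]
    return ws

# Toy data: three related tasks whose true weights differ slightly.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
Xs = [rng.normal(size=(20, 4)) for _ in range(3)]
ys = [X @ (w_true + 0.1 * rng.normal(size=4)) for X in Xs]

joint = mean_regularized_mtl(Xs, ys, lam=10.0)
separate = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)]
spread = lambda ws: sum(np.linalg.norm(w - np.mean(ws, axis=0)) for w in ws)
# The coupling term shrinks the cross-task spread of the weights.
print(spread(joint) < spread(separate))
```

The coupling penalty is exactly the kind of structured regularization the text describes: it does not penalize all complexity uniformly, only disagreement between tasks.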
A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.[10] Task relatedness can be imposed a priori or learned from the data.[7][11] Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly.[8][12] For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.[8] One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.[9] Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large scale machine learning projects such as the deep convolutional neural network GoogLeNet,[13] an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks.
For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.[14] Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL).[15] Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time-series, through content recommendation systems, to visual understanding for adaptive autonomous agents. Multi-task optimization focuses on solving multiple related optimization tasks simultaneously.[16][17] The paradigm has been inspired by the well-established concepts of transfer learning[18] and multi-task learning in predictive analytics.[19] The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,[20] the search progress on one task can be transferred to substantially accelerate the search on the others. The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks.
In practice, one may deliberately attempt a more difficult task, the solution of which incidentally solves several smaller problems.[21] There is a direct relationship between multitask optimization and multi-objective optimization.[22] In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models.[23] Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representations, i.e., the gradients of different tasks point in opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics. There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on game theory.[16] Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[24] The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem.[25] The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in respective search spaces. Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks.
By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks crossover.[17][26] Recently, modes of knowledge transfer that are different from direct solution crossover have been explored.[27][28] Game-theoretic approaches to multi-task optimization propose to view the optimization problem as a game, where each task is a player. All players compete through the reward matrix of the game, and try to reach a solution that satisfies all players (all tasks). This view provides insight about how to build efficient algorithms based on gradient descent optimization (GD), which is particularly important for training deep neural networks.[29] In GD for MTL, the problem is that each task provides its own loss, and it is not clear how to combine all losses and create a single unified gradient, leading to several different aggregation strategies.[30][31][32] This aggregation problem can be solved by defining a game matrix where the reward of each player is the agreement of its own gradient with the common gradient, and then setting the common gradient to be the Nash cooperative bargaining solution[33] of that system. Algorithms for multi-task optimization span a wide array of real-world applications.
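The gradient-aggregation problem described above can be made concrete with a small sketch. This is not the bargaining scheme of [33] but a simpler, well-known conflict-resolution heuristic (in the spirit of gradient-projection methods): whenever two task gradients conflict, the conflicting component is projected away before summing.

```python
import numpy as np

def project_conflicts(grads):
    """Combine per-task gradients: whenever two gradients conflict
    (negative inner product), project one onto the normal plane of
    the other before summing."""
    adjusted = [g.astype(float).copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i != j and gi @ gj < 0:
                gi -= (gi @ gj) / (gj @ gj) * gj  # remove conflicting part
    return sum(adjusted)

# Two conflicting task gradients: naive averaging moves against
# task 2's descent direction, the projected combination does not.
g1 = np.array([10.0, 0.0])
g2 = np.array([-1.0, 1.0])
avg = (g1 + g2) / 2
combined = project_conflicts([g1, g2])
print(avg @ g2)                      # negative: the average harms task 2
print(combined @ g1, combined @ g2)  # both non-negative
```

The example shows the failure mode the text calls negative transfer: the per-task gradients disagree both in direction and in magnitude, so the plain average is dominated by the larger gradient.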
Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.[26] In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[34][35] In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning.[36][37] Applications have also been reported in cloud computing,[38] with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.[17][39] Recent work has additionally shown applications in chemistry.[40] In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.[41][42] The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.[7] Suppose the training data set is St={(xit,yit)}i=1nt{\displaystyle {\mathcal {S}}_{t}=\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{n_{t}}}, with xit∈X{\displaystyle x_{i}^{t}\in {\mathcal {X}}}, yit∈Y{\displaystyle y_{i}^{t}\in {\mathcal {Y}}}, where t indexes task, and t∈1,...,T{\displaystyle t\in 1,...,T}. Let n=∑t=1Tnt{\displaystyle n=\sum _{t=1}^{T}n_{t}}. In this setting there is a consistent input and output space and the same loss function L:R×R→R+{\displaystyle {\mathcal {L}}:\mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R} _{+}} for each task.
This results in the regularized machine learning problem: whereH{\displaystyle {\mathcal {H}}}is a vector valued reproducing kernel Hilbert space with functionsf:X→YT{\displaystyle f:{\mathcal {X}}\rightarrow {\mathcal {Y}}^{T}}having componentsft:X→Y{\displaystyle f_{t}:{\mathcal {X}}\rightarrow {\mathcal {Y}}}. The reproducing kernel for the spaceH{\displaystyle {\mathcal {H}}}of functionsf:X→RT{\displaystyle f:{\mathcal {X}}\rightarrow \mathbb {R} ^{T}}is a symmetric matrix-valued functionΓ:X×X→RT×T{\displaystyle \Gamma :{\mathcal {X}}\times {\mathcal {X}}\rightarrow \mathbb {R} ^{T\times T}}, such thatΓ(⋅,x)c∈H{\displaystyle \Gamma (\cdot ,x)c\in {\mathcal {H}}}and the following reproducing property holds: The reproducing kernel gives rise to a representer theorem showing that any solution to equation1has the form: The form of the kernelΓinduces both the representation of thefeature spaceand structures the output across tasks. A natural simplification is to choose aseparable kernel,which factors into separate kernels on the input spaceXand on the tasks{1,...,T}{\displaystyle \{1,...,T\}}. In this case the kernel relating scalar componentsft{\displaystyle f_{t}}andfs{\displaystyle f_{s}}is given byγ((xi,t),(xj,s))=k(xi,xj)kT(s,t)=k(xi,xj)As,t{\textstyle \gamma ((x_{i},t),(x_{j},s))=k(x_{i},x_{j})k_{T}(s,t)=k(x_{i},x_{j})A_{s,t}}. For vector valued functionsf∈H{\displaystyle f\in {\mathcal {H}}}we can writeΓ(xi,xj)=k(xi,xj)A{\displaystyle \Gamma (x_{i},x_{j})=k(x_{i},x_{j})A}, wherekis a scalar reproducing kernel, andAis a symmetric positive semi-definiteT×T{\displaystyle T\times T}matrix. Henceforth denoteS+T={PSD matrices}⊂RT×T{\displaystyle S_{+}^{T}=\{{\text{PSD matrices}}\}\subset \mathbb {R} ^{T\times T}}. This factorization property, separability, implies the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely byA. 
Methods for non-separable kernels Γ are a current field of research. For the separable case, the representer theorem reduces to f(x)=∑i=1Nk(x,xi)Aci{\textstyle f(x)=\sum _{i=1}^{N}k(x,x_{i})Ac_{i}}. The model output on the training data is then KCA, where K is the n×n{\displaystyle n\times n} empirical kernel matrix with entries Ki,j=k(xi,xj){\textstyle K_{i,j}=k(x_{i},x_{j})}, and C is the n×T{\displaystyle n\times T} matrix of rows ci{\displaystyle c_{i}}. With the separable kernel, equation 1 can be rewritten as where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if Yit{\displaystyle Y_{i}^{t}} is a missing observation.) Note the second term in P can be derived as follows: There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, and through an output mapping. Regularizer—With the separable kernel, it can be shown (below) that ||f||H2=∑s,t=1TAt,s†⟨fs,ft⟩Hk{\textstyle ||f||_{\mathcal {H}}^{2}=\sum _{s,t=1}^{T}A_{t,s}^{\dagger }\langle f_{s},f_{t}\rangle _{{\mathcal {H}}_{k}}}, where At,s†{\displaystyle A_{t,s}^{\dagger }} is the t,s{\displaystyle t,s} element of the pseudoinverse of A{\displaystyle A}, Hk{\displaystyle {\mathcal {H}}_{k}} is the RKHS based on the scalar kernel k{\displaystyle k}, and ft(x)=∑i=1nk(x,xi)At⊤ci{\textstyle f_{t}(x)=\sum _{i=1}^{n}k(x,x_{i})A_{t}^{\top }c_{i}}. This formulation shows that At,s†{\displaystyle A_{t,s}^{\dagger }} controls the weight of the penalty associated with ⟨fs,ft⟩Hk{\textstyle \langle f_{s},f_{t}\rangle _{{\mathcal {H}}_{k}}}. (Note that ⟨fs,ft⟩Hk{\textstyle \langle f_{s},f_{t}\rangle _{{\mathcal {H}}_{k}}} arises from ||ft||Hk2=⟨ft,ft⟩Hk{\textstyle ||f_{t}||_{{\mathcal {H}}_{k}}^{2}=\langle f_{t},f_{t}\rangle _{{\mathcal {H}}_{k}}}.)
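For the squared loss, the first-order condition for the coefficient matrix C takes the linear form K C A + λ C = Y (up to a constant rescaling of λ), which can be solved in closed form by jointly diagonalizing K and A. A minimal NumPy sketch, using an RBF kernel and an arbitrary PSD task matrix A chosen purely for illustration:

```python
import numpy as np

def solve_separable_mtl(K, A, Y, lam):
    """Solve K C A + lam * C = Y for C via eigendecomposition of the
    symmetric PSD matrices K (inputs) and A (tasks)."""
    dk, Uk = np.linalg.eigh(K)
    da, Ua = np.linalg.eigh(A)
    Yt = Uk.T @ Y @ Ua                   # rotate into the two eigenbases
    Ct = Yt / (np.outer(dk, da) + lam)   # elementwise solve, denominators > 0
    return Uk @ Ct @ Ua.T

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))                        # n = 30 inputs
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                                     # RBF kernel matrix
B = rng.normal(size=(4, 4))
A = B @ B.T + np.eye(4)                             # PSD task matrix, T = 4
Y = rng.normal(size=(30, 4))
C = solve_separable_mtl(K, A, Y, lam=0.5)
print(np.allclose(K @ C @ A + 0.5 * C, Y))  # True: C satisfies the condition
```

Because K and A enter only through their eigenvalues, the solve costs one eigendecomposition of each matrix rather than a linear system of size nT.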
‖f‖H2=⟨∑i=1nγ((xi,ti),⋅)citi,∑j=1nγ((xj,tj),⋅)cjtj⟩H=∑i,j=1nciticjtjγ((xi,ti),(xj,tj))=∑i,j=1n∑s,t=1Tcitcjsk(xi,xj)As,t=∑i,j=1nk(xi,xj)⟨ci,Acj⟩RT=∑i,j=1nk(xi,xj)⟨ci,AA†Acj⟩RT=∑i,j=1nk(xi,xj)⟨Aci,A†Acj⟩RT=∑i,j=1n∑s,t=1T(Aci)t(Acj)sk(xi,xj)As,t†=∑s,t=1TAs,t†⟨∑i=1nk(xi,⋅)(Aci)t,∑j=1nk(xj,⋅)(Acj)s⟩Hk=∑s,t=1TAs,t†⟨ft,fs⟩Hk{\displaystyle {\begin{aligned}\|f\|_{\mathcal {H}}^{2}&=\left\langle \sum _{i=1}^{n}\gamma ((x_{i},t_{i}),\cdot )c_{i}^{t_{i}},\sum _{j=1}^{n}\gamma ((x_{j},t_{j}),\cdot )c_{j}^{t_{j}}\right\rangle _{\mathcal {H}}\\&=\sum _{i,j=1}^{n}c_{i}^{t_{i}}c_{j}^{t_{j}}\gamma ((x_{i},t_{i}),(x_{j},t_{j}))\\&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}c_{i}^{t}c_{j}^{s}k(x_{i},x_{j})A_{s,t}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},AA^{\dagger }Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle Ac_{i},A^{\dagger }Ac_{j}\rangle _{\mathbb {R} ^{T}}\\&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}(Ac_{i})^{t}(Ac_{j})^{s}k(x_{i},x_{j})A_{s,t}^{\dagger }\\&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\langle \sum _{i=1}^{n}k(x_{i},\cdot )(Ac_{i})^{t},\sum _{j=1}^{n}k(x_{j},\cdot )(Ac_{j})^{s}\rangle _{{\mathcal {H}}_{k}}\\&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\langle f_{t},f_{s}\rangle _{{\mathcal {H}}_{k}}\end{aligned}}} Output metric—an alternative output metric onYT{\displaystyle {\mathcal {Y}}^{T}}can be induced by the inner product⟨y1,y2⟩Θ=⟨y1,Θy2⟩RT{\displaystyle \langle y_{1},y_{2}\rangle _{\Theta }=\langle y_{1},\Theta y_{2}\rangle _{\mathbb {R} ^{T}}}. With the squared loss there is an equivalence between the separable kernelsk(⋅,⋅)IT{\displaystyle k(\cdot ,\cdot )I_{T}}under the alternative metric, andk(⋅,⋅)Θ{\displaystyle k(\cdot ,\cdot )\Theta }, under the canonical metric. 
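The identity just derived can be checked numerically. From the fourth line of the derivation, the left side equals tr(A C⊤ K C); and since the scalar-kernel coefficient vector of f_t is the t-th column of C A, the Gram matrix of inner products ⟨f_t, f_s⟩ is (C A)⊤ K (C A), so the right side is tr(A† (C A)⊤ K (C A)). A small NumPy check, deliberately using a rank-deficient A to exercise the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 12, 5
Z = rng.normal(size=(n, n))
K = Z @ Z.T                      # PSD scalar kernel Gram matrix
B = rng.normal(size=(T, 3))
A = B @ B.T                      # rank-3 (singular) PSD task matrix
C = rng.normal(size=(n, T))

lhs = np.trace(A @ C.T @ K @ C)          # ||f||_H^2
G = (C @ A).T @ K @ (C @ A)              # G[t, s] = <f_t, f_s>_{H_k}
rhs = np.trace(np.linalg.pinv(A) @ G)    # sum_{s,t} A†_{s,t} <f_t, f_s>
print(np.allclose(lhs, rhs))
```

The two sides agree even though A is singular, because the derivation only uses A A† A = A, which holds for any pseudoinverse.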
Output mapping—Outputs can be mapped as L:YT→Y~{\displaystyle L:{\mathcal {Y}}^{T}\rightarrow {\mathcal {\tilde {Y}}}} to a higher dimensional space to encode complex structures such as trees, graphs and strings. For linear maps L, with an appropriate choice of separable kernel, it can be shown that A=L⊤L{\displaystyle A=L^{\top }L}. Via the regularizer formulation, one can represent a variety of task structures easily. Learning problem P can be generalized to admit learning the task matrix A as follows: The choice of F:S+T→R+{\displaystyle F:S_{+}^{T}\rightarrow \mathbb {R} _{+}} must be designed to learn matrices A of a given type. See "Special cases" below. Restricting to the case of convex losses and coercive penalties, Ciliberto et al. have shown that although Q is not convex jointly in C and A, a related problem is jointly convex. Specifically, on the convex set C={(C,A)∈Rn×T×S+T|Range(C⊤KC)⊆Range(A)}{\displaystyle {\mathcal {C}}=\{(C,A)\in \mathbb {R} ^{n\times T}\times S_{+}^{T}|Range(C^{\top }KC)\subseteq Range(A)\}}, the equivalent problem is convex with the same minimum value, and if (CR,AR){\displaystyle (C_{R},A_{R})} is a minimizer for R then (CRAR†,AR){\displaystyle (C_{R}A_{R}^{\dagger },A_{R})} is a minimizer for Q. R may be solved by a barrier method on a closed set by introducing the following perturbation: The perturbation via the barrier δ2tr(A†){\displaystyle \delta ^{2}tr(A^{\dagger })} forces the objective function to be equal to +∞{\displaystyle +\infty } on the boundary of Rn×T×S+T{\displaystyle R^{n\times T}\times S_{+}^{T}}. S can be solved with a block coordinate descent method, alternating in C and A. This results in a sequence of minimizers (Cm,Am){\displaystyle (C_{m},A_{m})} in S that converges to the solution in R as δm→0{\displaystyle \delta _{m}\rightarrow 0}, and hence gives the solution to Q. Spectral penalties - Dinuzzo et al.[43] suggested setting F as the Frobenius norm tr(A⊤A){\displaystyle {\sqrt {tr(A^{\top }A)}}}.
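An alternating scheme of the kind described above can be sketched for the squared loss with a squared-Frobenius penalty on A. This is an illustrative sketch, not the exact barrier algorithm of Ciliberto et al.: the C-step is the closed-form solve, while the A-step here is a plain projected gradient step onto the PSD cone, with step size chosen ad hoc.

```python
import numpy as np

def c_step(K, A, Y, lam):
    # Exact minimizer in C of ||Y - K C A||_F^2 + lam * tr(C^T K C A):
    # solves K C A + lam * C = Y via joint eigendecomposition.
    dk, Uk = np.linalg.eigh(K)
    da, Ua = np.linalg.eigh(A)
    Ct = (Uk.T @ Y @ Ua) / (np.outer(dk, da) + lam)
    return Uk @ Ct @ Ua.T

def psd_project(A):
    # Euclidean projection onto the PSD cone: clip negative eigenvalues.
    d, U = np.linalg.eigh((A + A.T) / 2)
    return U @ np.diag(np.maximum(d, 0)) @ U.T

def objective(K, A, C, Y, lam, mu):
    R = Y - K @ C @ A
    return (R * R).sum() + lam * np.trace(C.T @ K @ C @ A) + mu * (A * A).sum()

rng = np.random.default_rng(3)
n, T, lam, mu, eta = 20, 3, 0.5, 0.1, 1e-5
Z = rng.normal(size=(n, n))
K = Z @ Z.T / n
Y = rng.normal(size=(n, T))
A = np.eye(T)
obj = []
for _ in range(20):
    C = c_step(K, A, Y, lam)                   # exact C-step
    B = K @ C
    G = 2 * B.T @ (B @ A - Y) + lam * C.T @ B + 2 * mu * A
    A = psd_project(A - eta * (G + G.T) / 2)   # projected gradient A-step
    obj.append(objective(K, A, C, Y, lam, mu))
print(obj[-1] < (Y * Y).sum() + mu * T)  # fell below the starting value
```

The first exact C-step already guarantees a drop below the objective at C = 0; the small A-steps then adapt the task coupling without leaving the PSD cone.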
They optimized Q directly using block coordinate descent, not accounting for difficulties at the boundary of Rn×T×S+T{\displaystyle \mathbb {R} ^{n\times T}\times S_{+}^{T}}. Clustered task learning - Jacob et al.[44] suggested learning A in the setting where T tasks are organized in R disjoint clusters. In this case let E∈{0,1}T×R{\displaystyle E\in \{0,1\}^{T\times R}} be the matrix with Et,r=I(taskt∈groupr){\displaystyle E_{t,r}=\mathbb {I} ({\text{task }}t\in {\text{group }}r)}. Setting M=EE†{\displaystyle M=EE^{\dagger }}, the orthogonal projection onto the column space of E, and U=1T11⊤{\displaystyle U={\frac {1}{T}}\mathbf {11} ^{\top }}, the task matrix A†{\displaystyle A^{\dagger }} can be parameterized as a function of M{\displaystyle M}: A†(M)=ϵMU+ϵB(M−U)+ϵ(I−M){\displaystyle A^{\dagger }(M)=\epsilon _{M}U+\epsilon _{B}(M-U)+\epsilon (I-M)}, with terms that penalize, respectively, the mean, the between-cluster variance, and the within-cluster variance of the task predictions. The set of admissible M is not convex, but there is a convex relaxation Sc={M∈S+T:I−M∈S+T∧tr(M)=R}{\displaystyle {\mathcal {S}}_{c}=\{M\in S_{+}^{T}:I-M\in S_{+}^{T}\land tr(M)=R\}}. In this formulation, F(A)=I(A(M)∈{A:M∈SC}){\displaystyle F(A)=\mathbb {I} (A(M)\in \{A:M\in {\mathcal {S}}_{C}\})}. Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or such that A has a low-rank factorization. However, these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases. Non-separable kernels - Separable kernels are limited; in particular, they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels.
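The clustered parameterization is easy to instantiate: given a hard assignment of T tasks to R clusters, build E, the cluster projection M, and A†(M). A small NumPy sketch (the ε values are arbitrary illustrative choices):

```python
import numpy as np

def clustered_A_dagger(assignment, R, eps_mean, eps_between, eps_within):
    """Build A† = eps_mean*U + eps_between*(M-U) + eps_within*(I-M)
    from a hard task-to-cluster assignment."""
    T = len(assignment)
    E = np.zeros((T, R))
    E[np.arange(T), assignment] = 1.0  # E[t, r] = 1 iff task t in cluster r
    M = E @ np.linalg.pinv(E)          # projection onto the cluster indicators
    U = np.full((T, T), 1.0 / T)       # projection onto the all-ones vector
    return eps_mean * U + eps_between * (M - U) + eps_within * (np.eye(T) - M)

# 5 tasks in 2 clusters: {0, 1, 2} and {3, 4}.
A_dag = clustered_A_dagger([0, 0, 0, 1, 1], R=2,
                           eps_mean=0.1, eps_between=1.0, eps_within=5.0)
ones = np.ones(5)
# The all-ones direction is hit only by the mean penalty, since both the
# between-cluster and within-cluster terms annihilate it.
print(np.allclose(A_dag @ ones, 0.1 * ones))
```

The three projections U, M − U, and I − M are mutually orthogonal, so the ε values independently price the mean, the between-cluster spread, and the within-cluster spread of the task predictions.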
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR)[45] implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning,[46][47] Multi-Task Learning with Joint Feature Selection,[48] Robust Multi-Task Feature Learning,[49] Trace-Norm Regularized Multi-Task Learning,[50] Alternating Structural Optimization,[51][52] Incoherent Low-Rank and Sparse Learning,[53] Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning,[54][55] Multi-Task Learning with Graph Structures.
https://en.wikipedia.org/wiki/Multitask_optimization
Transfer of learning occurs when people apply information, strategies, and skills they have learned to a new situation or context. Transfer is not a discrete activity, but is rather an integral part of the learning process. Researchers attempt to identify when and how transfer occurs and to offer strategies to improve transfer. The formal discipline (or mental discipline) approach to learning held that specific mental faculties could be strengthened by particular courses of training and that these strengthened faculties transferred to other situations. It was based on faculty psychology, which viewed the mind as a collection of separate modules or faculties assigned to various mental tasks. This approach resulted in school curricula that required students to study subjects such as mathematics and Latin in order to strengthen reasoning and memory faculties.[1] Disputing formal discipline, Edward Thorndike and Robert S. Woodworth in 1901 postulated that the transfer of learning was restricted or assisted by the elements in common between the original context and the next context.[1] The notion was originally introduced as transfer of practice. They explored how individuals would transfer learning in one context to another similar context and how "improvement in one mental function" could influence a related one. Their theory implied that transfer of learning depends on how similar the learning task and transfer tasks are, or where "identical elements are concerned in the influencing and influenced function", now known as the identical element theory.[2] Thorndike urged schools to design curricula with tasks similar to those students would encounter outside of school to facilitate the transfer of learning.[1] In contrast to Thorndike, Edwin Ray Guthrie's law of contiguity expected little transfer of learning.
Guthrie recommended studying in the exact conditions in which one would be tested, because of his view that "we learn what we do in the presence of specific stimuli".[1] The expectation is that training in conditions as similar as possible to those in which learners will have to perform will facilitate transfer.[3] The argument is also made that transfer is not distinct from learning, as people do not encounter situations as blank slates.[4] Perkins and Salomon considered it more a continuum, with no bright line between learning and transfer.[5] Transfer may also be referred to as generalization, B. F. Skinner's concept of a response learned with one stimulus occurring in response to other stimuli.[3] Today, transfer of learning is usually described as the process and the effective extent to which past experiences (also referred to as the transfer source) affect learning and performance in a new situation (the transfer target).[6] However, there remains controversy as to how transfer of learning should be conceptualized and explained, what its prevalence is, what its relation is to learning in general, and whether it exists at all.[4] People store propositions, or basic units of knowledge, in long-term memory. When new information enters working memory, long-term memory is searched for associations, which combine with the new information in working memory. The associations reinforce the new information and help assign meaning to it.[7] Learning that takes place in varying contexts can create more links and encourage generalization of the skill or knowledge.[3] Connections between past learning and new learning can provide a context or framework for the new information, helping students to determine sense and meaning, and encouraging retention of the new information.
These connections can build up a framework of associative networks that students can call upon for future problem-solving.[7]Information stored in memory is "flexible, interpretive, generically altered, and its recall and transfer are largely context-dependent".[4] When Thorndike refers to similarity of elements between learning and transfer, the elements can be conditions or procedures. Conditions can be environmental, physical, mental, or emotional, and the possible combinations of conditions are countless. Procedures include sequences of events or information.[1]Although the theory is that the similarity of elements facilitates transfer, there is a challenge in identifying which specific elements had an effect on the learner at the time of learning.[4] Factors that can affect transfer include:[7] Learners can increase transfer through effective practice and by mindfully abstracting knowledge. Abstraction is the process of examining our experiences for similarities. Methods for abstracting knowledge include seeking the underlying principles in what is learned, creating models, and identifying analogies and metaphors, all of which assist with creating associations and encouraging transfer.[5] Transfer of learning can be cognitive, socio-emotional, or motor.[4]The following table presents different types of transfer.[3] Transfer is less a deliberate activity by the learner than it is a result of the environment at the time of learning. 
Teachers, being part of the learning environment, can be an instrument of transfer (both positive and negative).[7] Recommendations for teaching for transfer include the hugging and bridging strategies; providing an authentic environment and activities within a conceptual framework; encouraging problem-based learning; communities of practice; cognitive apprenticeship; and game-based learning.[5] Hugging and bridging as techniques for positive transfer were suggested by the research of Perkins and Salomon.[7] Hugging is when the teacher encourages transfer by incorporating similarities between the learning situation and the future situations in which the learning might be used. Some methods for hugging include simulation games, mental practice, and contingency learning.[7] Bridging is when the teacher encourages transfer by helping students to find connections between learning and to abstract their existing knowledge to new concepts. Some methods for bridging include brainstorming, developing analogies, and metacognition.[7]
https://en.wikipedia.org/wiki/Transfer_of_learning
Educational psychology is the branch of psychology concerned with the scientific study of human learning. The study of learning processes, from both cognitive and behavioral perspectives, allows researchers to understand individual differences in intelligence, cognitive development, affect, motivation, self-regulation, and self-concept, as well as their role in learning. The field of educational psychology relies heavily on quantitative methods, including testing and measurement, to enhance educational activities related to instructional design, classroom management, and assessment, which serve to facilitate learning processes in various educational settings across the lifespan.[1] Educational psychology can in part be understood through its relationship with other disciplines. It is informed primarily by psychology, bearing a relationship to that discipline analogous to the relationship between medicine and biology. It is also informed by neuroscience. Educational psychology in turn informs a wide range of specialties within educational studies, including instructional design, educational technology, curriculum development, organizational learning, special education, classroom management, and student motivation. Educational psychology both draws from and contributes to cognitive science and learning theory. In universities, departments of educational psychology are usually housed within faculties of education, possibly accounting for the lack of representation of educational psychology content in introductory psychology textbooks.[2] The field of educational psychology involves the study of memory, conceptual processes, and individual differences (via cognitive psychology) in conceptualizing new strategies for learning processes in humans.
Educational psychology has been built upon theories of operant conditioning, functionalism, structuralism, constructivism, humanistic psychology, Gestalt psychology, and information processing.[1] Educational psychology has seen rapid growth and development as a profession in the last twenty years.[3] School psychology began with the concept of intelligence testing, leading to provisions for special education students who could not follow the regular classroom curriculum in the early part of the 20th century.[3] Another main focus of school psychology was to help close the gap for children of colour during the early to mid-1900s, when the fight against racial inequality and segregation was still very prominent. However, school psychology itself is a fairly new profession built upon the practices and theories of several psychologists across many different fields. Educational psychologists work side by side with psychiatrists, social workers, teachers, speech and language therapists, and counselors in an attempt to understand the questions being raised when combining behavioral, cognitive, and social psychology in the classroom setting.[3] As a field of study, educational psychology is fairly new and was not considered a specific practice until the 20th century. Reflections on everyday teaching and learning allowed some individuals throughout history to elaborate on developmental differences in cognition, the nature of instruction, and the transfer of knowledge and learning. These topics are important to education and, as a result, they are important in understanding human cognition, learning, and social perception.[4] Some of the ideas and issues pertaining to educational psychology date back to the time of Plato and Aristotle. Philosophers as well as sophists discussed the purpose of education, training of the body and the cultivation of psycho-motor skills, the formation of good character, and the possibilities and limits of moral education.
Some other educational topics they spoke about were the effects of music, poetry, and the other arts on the development of the individual, the role of the teacher, and the relations between teacher and student.[4] Plato saw knowledge acquisition as an innate ability, which evolves through experience and understanding of the world. This conception of human cognition has evolved into the continuing nature vs. nurture argument in understanding conditioning and learning today. Aristotle, on the other hand, subscribed to the idea of knowledge by association, or schema. His four laws of association included succession, contiguity, similarity, and contrast. His studies examined recall and facilitated learning processes.[5] John Locke is considered one of the most influential philosophers in post-Renaissance Europe, a time period that began around the mid-1600s. Locke is considered the "Father of English Psychology". One of Locke's most important works, An Essay Concerning Human Understanding, was written in 1690. In this essay, he introduced the term "tabula rasa", meaning "blank slate". Locke explained that learning was attained through experience only and that we are all born without knowledge.[6] This contrasted with Plato's theory of innate learning processes. Locke believed the mind was formed by experiences, not innate ideas. Locke introduced this idea as "empiricism", the understanding that knowledge is built only on experience.[7] In the late 1600s, John Locke advanced the hypothesis that people learn primarily from external forces. He believed that the mind was like a blank tablet (tabula rasa), and that successions of simple impressions give rise to complex ideas through association and reflection.
Locke is credited with establishing "empiricism" as a criterion for testing the validity of knowledge, thus providing a conceptual framework for the later development of experimental methodology in the natural and social sciences.[8] In the 18th century the philosopher Jean-Jacques Rousseau espoused a set of theories which would become highly influential in the field of education, particularly through his philosophical novel Emile, or On Education. Despite stating that the book should not be used as a practical guide to nurturing children, the pedagogical approach outlined in it was lauded by Enlightenment contemporaries including Immanuel Kant and Johann Wolfgang von Goethe. Rousseau advocated a child-centered approach to education, holding that the age of the child should be accounted for in choosing what and how to teach them. In particular he insisted on the primacy of experiential education, in order to develop the child's ability to reason autonomously. Rousseau's philosophy influenced educational reformers including Johann Bernhard Basedow, whose practice in his model school the Philanthropinum drew upon his ideas, as well as Johann Heinrich Pestalozzi. More generally Rousseau's thinking had significant direct and indirect influence on the development of pedagogy in Germany, Switzerland and the Netherlands. In addition, Jean Piaget's stage-based approach to child development has been observed to have parallels to Rousseau's theories.[9] Philosophers of education such as Juan Vives, Johann Pestalozzi, Friedrich Fröbel, and Johann Herbart had examined, classified and judged the methods of education centuries before the beginnings of psychology in the late 1800s. Juan Vives (1493–1540) proposed induction as the method of study and believed in the direct observation and investigation of nature.
His studies focused on humanistic learning, which opposed scholasticism and was influenced by a variety of sources including philosophy, psychology, politics, religion, and history.[10] He was one of the first prominent thinkers to emphasize that the location of a school is important to learning.[11] He suggested that a school should be located away from disturbing noises; the air quality should be good and there should be plenty of food for the students and teachers.[11] Vives emphasized the importance of understanding the individual differences of students and suggested practice as an important tool for learning.[11] Vives introduced his educational ideas in his 1538 work De anima et vita. In this publication, Vives explores moral philosophy as a setting for his educational ideals; with this, he explains that the different parts of the soul (similar to Aristotle's ideas) are each responsible for different operations, which function distinctively. The first book covers the different "souls": the vegetative soul, the soul of nutrition, growth, and reproduction; the sensitive soul, which involves the five external senses; and the cogitative soul, which includes internal senses and cognitive faculties. The second book involves the functions of the rational soul: mind, will, and memory. Lastly, the third book explains the analysis of emotions.[12] Johann Pestalozzi (1746–1827), a Swiss educational reformer, emphasized the child rather than the content of the school.[13] Pestalozzi fostered an educational reform backed by the idea that early education was crucial for children and could be manageable for mothers. Eventually, this experience with early education would lead to a "wholesome person characterized by morality."[14] Pestalozzi has been acknowledged for opening institutions for education, writing books for mothers teaching home education, and writing elementary books for students, mostly focusing on the kindergarten level.
In his later years, he published teaching manuals and methods of teaching.[14] During the time of the Enlightenment, Pestalozzi's ideals introduced "educationalization". This created a bridge between social issues and education by introducing the idea that social issues could be solved through education. Horlacher describes the most prominent example of this during the Enlightenment to be "improving agricultural production methods."[14] Johann Herbart (1776–1841) is considered the father of educational psychology.[15] He believed that learning was influenced by interest in the subject and the teacher.[15] He thought that teachers should consider the students' existing mental sets—what they already know—when presenting new information or material.[15] Herbart came up with what are now known as the formal steps, a five-step sequence that teachers should follow when presenting new material. There were three major figures in educational psychology in this period: William James, G. Stanley Hall, and John Dewey. These three men distinguished themselves in general psychology and educational psychology, which overlapped significantly at the end of the 19th century.[4] The period of 1890–1920 is considered the golden era of educational psychology, when aspirations of the new discipline rested on the application of the scientific methods of observation and experimentation to educational problems. From 1840 to 1920, 37 million people immigrated to the United States.[10] This created an expansion of elementary schools and secondary schools. The increase in immigration also provided educational psychologists the opportunity to use intelligence testing to screen immigrants at Ellis Island.[10] Darwinism influenced the beliefs of the prominent educational psychologists.[10] Even in the earliest years of the discipline, educational psychologists recognized the limitations of this new approach.
The pioneering American psychologist William James commented that: "Psychology is a science, and teaching is an art; and sciences never generate arts directly out of themselves. An intermediate inventive mind must make that application, by using its originality."[16] James is the father of psychology in America, but he also made contributions to educational psychology. In his famous series of lectures Talks to Teachers on Psychology, published in 1899, James defines education as "the organization of acquired habits of conduct and tendencies to behavior".[16] He states that teachers should "train the pupil to behavior"[16] so that he fits into the social and physical world. Teachers should also realize the importance of habit and instinct. They should present information that is clear and interesting and relate this new information and material to things the student already knows about.[16] He also addresses important issues such as attention, memory, and association of ideas. Alfred Binet published Mental Fatigue in 1898, in which he attempted to apply the experimental method to educational psychology.[10] In this experimental method he advocated for two types of experiments: experiments done in the lab and experiments done in the classroom.
In 1904 he was appointed to a commission by the French Ministry of Public Education.[10] This is when he began to look for a way to distinguish children with developmental disabilities.[10] Binet strongly supported special education programs because he believed that "abnormality" could be cured.[10] The Binet-Simon test was the first intelligence test and was the first to distinguish between "normal children" and those with developmental disabilities.[10] Binet believed that it was important to study individual differences between age groups and children of the same age.[10] He also believed that it was important for teachers to take into account individual students' strengths as well as the needs of the classroom as a whole when teaching and creating a good learning environment.[10] He also believed that it was important to train teachers in observation so that they would be able to see individual differences among children and adjust the curriculum to the students.[10] Binet also emphasized that practice of material was important. In 1916 Lewis Terman revised the Binet-Simon so that the average score was always 100.[15] The test became known as the Stanford-Binet and was one of the most widely used tests of intelligence. Terman, unlike Binet, was interested in using intelligence tests to identify gifted children who had high intelligence.[10] In his longitudinal study of gifted children, who became known as the Termites, Terman found that gifted children become gifted adults.[15]
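Terman's revision pinned the average at 100 by expressing scores as a ratio IQ: mental age divided by chronological age, scaled by 100, so that a child performing exactly at the average for their age scores 100. A minimal sketch of that convention (the function name is illustrative, and the actual test derived mental age from item performance rather than taking it as a direct input):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic ratio IQ: mental age over chronological age, times 100.

    A child performing exactly at the average for their own age has
    mental_age == chronological_age and therefore scores 100.
    """
    return 100.0 * mental_age / chronological_age

# An 8-year-old performing at the level of an average 10-year-old:
print(ratio_iq(mental_age=10, chronological_age=8))  # → 125.0
```

By construction, the population average lands at 100 regardless of age group, which is what made scores comparable across ages.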
He also found that learning is done a little at a time or in increments, that learning is an automatic process, and that its principles apply to all mammals. Thorndike's research with Robert Woodworth on the theory of transfer found that learning one subject will only influence your ability to learn another subject if the subjects are similar.[10] This discovery led to less emphasis on learning the classics, because studying the classics was found not to contribute to overall general intelligence.[10] Thorndike was one of the first to say that individual differences in cognitive tasks were due to how many stimulus-response patterns a person had rather than to general intellectual ability.[10] He contributed word dictionaries that were scientifically based to determine the words and definitions used.[10] The dictionaries were the first to take into consideration the users' maturity level.[10] He also integrated pictures and an easier pronunciation guide into each of the definitions.[10] Thorndike contributed arithmetic books based on learning theory. He made all the problems more realistic and relevant to what was being studied, not just to improve general intelligence.[10] He developed tests that were standardized to measure performance in school-related subjects.[10] His biggest contribution to testing was the CAVD intelligence test, which used a multidimensional approach to intelligence and was the first to use a ratio scale.[10] His later work was on programmed instruction, mastery learning, and computer-based learning: If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print.[17] John Dewey (1859–1952) had a major influence on the development of progressive education in the United States.
He believed that the classroom should prepare children to be good citizens and facilitate creative intelligence.[10] He pushed for the creation of practical classes that could be applied outside of a school setting.[10] He also thought that education should be student-oriented, not subject-oriented. For Dewey, education was a social experience that helped bring together generations of people. He stated that students learn by doing. He believed in an active mind that was able to be educated through observation, problem-solving, and enquiry. In his 1910 book How We Think, he emphasizes that material should be provided in a way that is stimulating and interesting to the student, since it encourages original thought and problem-solving.[18] He also stated that material should be relative to the student's own experience.[18] "The material furnished by way of information should be relevant to a question that is vital in the students own experience"[18] Jean Piaget (1896–1980) was one of the most influential researchers in developmental psychology during the 20th century. He developed the theory of cognitive development.[10] The theory stated that intelligence developed in four different stages: the sensorimotor stage from birth to 2 years old, the preoperational stage from 2 to 7 years old, the concrete operational stage from 7 to 11 years old, and the formal operational stage from about 12 years old and up.[10] He also believed that learning was constrained by the child's cognitive development. Piaget influenced educational psychology because he was the first to believe that cognitive development was important and something that should be paid attention to in education.[10] Most of the research on Piagetian theory was carried out by American educational psychologists.
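The four stages amount to a simple age-to-stage lookup. A minimal sketch (the exact cutoff ages are an illustrative simplification; sources vary on where the concrete and formal operational stages begin and end):

```python
# Piaget's four stages, each keyed by the approximate lower age bound.
# Cutoffs are illustrative; Piaget's stage boundaries are approximate.
PIAGET_STAGES = [
    (0, "sensorimotor"),
    (2, "preoperational"),
    (7, "concrete operational"),
    (12, "formal operational"),
]

def stage_for_age(age_years: float) -> str:
    """Return the stage whose lower bound is the largest not exceeding the age."""
    if age_years < 0:
        raise ValueError("age must be non-negative")
    current = PIAGET_STAGES[0][1]
    for lower_bound, name in PIAGET_STAGES:
        if age_years >= lower_bound:
            current = name
    return current

print(stage_for_age(5))   # → preoperational
print(stage_for_age(14))  # → formal operational
```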
The number of people receiving a high school and college education increased dramatically from 1920 to 1960.[10] Because very few jobs were available to teens coming out of eighth grade, there was an increase in high school attendance in the 1930s.[10] The progressive movement in the United States took off at this time and led to the idea of progressive education. John Flanagan, an educational psychologist, developed tests for combat trainees and instructions in combat training.[10] In 1954 the work of Kenneth Clark and his wife on the effects of segregation on black and white children was influential in the Supreme Court case Brown v. Board of Education.[15] From the 1960s to the present day, educational psychology has switched from a behaviorist perspective to a more cognitive-based perspective because of the influence and development of cognitive psychology during this time.[10] Jerome Bruner is notable for integrating Piaget's cognitive approaches into educational psychology.[10] He advocated for discovery learning, where teachers create a problem-solving environment that allows the student to question, explore and experiment.[10] In his book The Process of Education, Bruner stated that the structure of the material and the cognitive abilities of the person are important in learning.[10] He emphasized the importance of the subject matter. He also believed that how the subject was structured was important for the student's understanding of the subject and that it was the goal of the teacher to structure the subject in a way that was easy for the student to understand.[10] In the early 1960s, Bruner went to Africa to teach math and science to school children, which influenced his view of schooling as a cultural institution. Bruner was also influential in the development of MACOS, Man: a Course of Study, which was an educational program that combined anthropology and science.[10] The program explored human evolution and social behavior. He also helped with the development of the Head Start program.
He was interested in the influence of culture on education and looked at the impact of poverty on educational development.[10] Benjamin Bloom (1913–1999) spent over 50 years at the University of Chicago, where he worked in the department of education.[10] He believed that all students can learn. He developed the taxonomy of educational objectives.[10] The objectives were divided into three domains: cognitive, affective, and psychomotor. The cognitive domain deals with how we think.[19] It is divided into categories that lie on a continuum from easiest to most complex.[19] The categories are knowledge or recall, comprehension, application, analysis, synthesis, and evaluation.[19] The affective domain deals with emotions and has 5 categories.[19] The categories are receiving phenomenon, responding to that phenomenon, valuing, organization, and internalizing values.[19] The psychomotor domain deals with the development of motor skills, movement, and coordination and has 7 categories that also go from simplest to most complex.[19] The 7 categories of the psychomotor domain are perception, set, guided response, mechanism, complex overt response, adaptation, and origination.[19] The taxonomy provided broad educational objectives that could be used to help expand the curriculum to match the ideas in the taxonomy.[10] The taxonomy is considered to have a greater influence internationally than in the United States. Internationally, the taxonomy is used in every aspect of education, from the training of teachers to the development of testing material.[10] Bloom believed in communicating clear learning goals and promoting an active student. He thought that teachers should provide feedback to students on their strengths and weaknesses.[10] Bloom also did research on college students and their problem-solving processes. He found that they differ in understanding the basis of the problem and the ideas in the problem.
He also found that students differ in their problem-solving process, in their approach to and attitude toward the problem.[10] Nathaniel Gage (1917–2008) is an important figure in educational psychology, as his research focused on improving teaching and understanding the processes involved in teaching.[10] He edited the book Handbook of Research on Teaching (1963), which helped develop early research in teaching and educational psychology.[10] Gage founded the Stanford Center for Research and Development in Teaching, which contributed research on teaching as well as influencing the education of important educational psychologists.[10] Applied behavior analysis, a research-based science utilizing behavioral principles of operant conditioning, is effective in a range of educational settings.[20] For example, teachers can alter student behavior by systematically rewarding students who follow classroom rules with praise, stars, or tokens exchangeable for sundry items.[21][22] Despite the demonstrated efficacy of awards in changing behavior, their use in education has been criticized by proponents of self-determination theory, who claim that praise and other rewards undermine intrinsic motivation.
There is evidence that tangible rewards decrease intrinsic motivation in specific situations, such as when the student already has a high level of intrinsic motivation to perform the goal behavior.[23] But the results showing detrimental effects are counterbalanced by evidence that, in other situations, such as when rewards are given for attaining a gradually increasing standard of performance, rewards enhance intrinsic motivation.[24][25] Many effective therapies have been based on the principles of applied behavior analysis, including pivotal response therapy, which is used to treat autism spectrum disorders.[citation needed] Among current educational psychologists, the cognitive perspective is more widely held than the behavioral perspective, perhaps because it admits causally related mental constructs such as traits, beliefs, memories, motivations, and emotions.[26] Cognitive theories claim that memory structures determine how information is perceived, processed, stored, retrieved and forgotten. Among the memory structures theorized by cognitive psychologists are the separate but linked visual and verbal systems described by Allan Paivio's dual coding theory. Educational psychologists have used dual coding theory and cognitive load theory to explain how people learn from multimedia presentations.[27] The spaced learning effect, a cognitive phenomenon strongly supported by psychological research, has broad applicability within education.[29] For example, students have been found to perform better on a test of knowledge about a text passage when a second reading of the passage is delayed rather than immediate (see figure).[28] Educational psychology research has confirmed the applicability to education of other findings from cognitive psychology, such as the benefits of using mnemonics for immediate and delayed retention of information.[30] Problem solving, according to prominent cognitive psychologists, is fundamental to learning.
It is an important research topic in educational psychology. A student is thought to interpret a problem by assigning it to a schema retrieved from long-term memory. A problem students run into while reading is called "activation": the student's representations of the text remain active in working memory, so the student reads through the material without absorbing the information and being able to retain it. When those representations are cleared from working memory, the reader experiences "deactivation": the student has an understanding of the material and is able to retain the information. If deactivation occurs during the first reading, the reader does not need to undergo deactivation in the second reading; the reader need only reread to get the "gist" of the text to spark their memory. When the problem is assigned to the wrong schema, the student's attention is subsequently directed away from features of the problem that are inconsistent with the assigned schema.[31] The critical step of finding a mapping between the problem and a pre-existing schema is often cited as supporting the centrality of analogical thinking to problem-solving. Each person has an individual profile of characteristics, abilities, and challenges that result from predisposition, learning, and development. These manifest as individual differences in intelligence, creativity, cognitive style, motivation, and the capacity to process information, communicate, and relate to others. The most prevalent disabilities found among school-age children are attention deficit hyperactivity disorder (ADHD), learning disability, dyslexia, and speech disorder.
Less common disabilities include intellectual disability, hearing impairment, cerebral palsy, epilepsy, and blindness.[32] Although theories of intelligence have been discussed by philosophers since Plato, intelligence testing is an invention of educational psychology and is coincident with the development of that discipline. Continuing debates about the nature of intelligence revolve around whether it can be characterized by a single factor known as general intelligence,[33] by multiple factors (e.g., Gardner's theory of multiple intelligences[34]), or whether it can be measured at all. In practice, standardized instruments such as the Stanford-Binet IQ test and the WISC[35] are widely used in economically developed countries to identify children in need of individualized educational treatment. Children classified as gifted are often provided with accelerated or enriched programs. Children with identified deficits may be provided with enhanced education in specific skills such as phonological awareness. In addition to basic abilities, the individual's personality traits are also important, with people higher in conscientiousness and hope attaining superior academic achievements, even after controlling for intelligence and past performance.[36] Developmental psychology, and especially the psychology of cognitive development, opens a special perspective for educational psychology. This is because education and the psychology of cognitive development converge on a number of crucial assumptions. First, the psychology of cognitive development defines human cognitive competence at successive phases of development. Education aims to help students acquire knowledge and develop skills that are compatible with their understanding and problem-solving capabilities at different ages.
Thus, knowing the students' level on a developmental sequence provides information on the kind and level of knowledge they can assimilate, which, in turn, can be used as a frame for organizing the subject matter to be taught at different school grades. This is the reason why Piaget's theory of cognitive development was so influential for education, especially mathematics and science education.[37] In the same direction, the neo-Piagetian theories of cognitive development suggest that in addition to the concerns above, sequencing of concepts and skills in teaching must take account of the processing and working memory capacities that characterize successive age levels.[38][39] Second, the psychology of cognitive development involves understanding how cognitive change takes place and recognizing the factors and processes which enable cognitive competence to develop. Education also capitalizes on cognitive change, because the construction of knowledge presupposes effective teaching methods that would move the student from a lower to a higher level of understanding. Mechanisms such as reflection on actual or mental actions vis-à-vis alternative solutions to problems, and tagging new concepts or solutions to symbols that help one recall and mentally manipulate them, are just a few examples of how mechanisms of cognitive development may be used to facilitate learning.[39][40] Finally, the psychology of cognitive development is concerned with individual differences in the organization of cognitive processes and abilities, in their rate of change, and in their mechanisms of change.
The principles underlying intra- and inter-individual differences could be educationally useful, because knowing how students differ in regard to the various dimensions of cognitive development, such as processing and representational capacity, self-understanding and self-regulation, and the various domains of understanding, such as mathematical, scientific, or verbal abilities, would enable the teacher to cater for the needs of the different students so that no one is left behind.[39][41] Constructivism is a category of learning theory in which emphasis is placed on the agency and prior "knowing" and experience of the learner, and often on the social and cultural determinants of the learning process. Educational psychologists distinguish individual (or psychological) constructivism, identified with Piaget's theory of cognitive development, from social constructivism. The social constructivist paradigm views the context in which the learning occurs as central to the learning itself.[42] It regards learning as a process of enculturation. People learn by exposure to the culture of practitioners. They observe and practice the behavior of practitioners and 'pick up relevant jargon, imitate behavior, and gradually start to act in accordance with the norms of the practice'.[43] So, a student learns to become a mathematician through exposure to mathematicians using tools to solve mathematical problems. In order to master a particular domain of knowledge, it is therefore not enough for students to learn the concepts of the domain; they should be exposed to the use of the concepts in authentic activities by the practitioners of the domain.[43] A dominant influence on the social constructivist paradigm is Lev Vygotsky's work on sociocultural learning, describing how interactions with adults, more capable peers, and cognitive tools are internalized to form mental constructs. The "zone of proximal development" (ZPD) is a term Vygotsky used to characterize an individual's mental development.
He believed that tasks individuals can do on their own do not give a complete understanding of their mental development. He originally defined the ZPD as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.”[44] He cited a famous example to make his case. Two children in school who originally can solve problems at an eight-year-old developmental level (that is, typical for children who were age 8) might be at different developmental levels. If each child received assistance from an adult, one was able to perform at a nine-year-old level and one was able to perform at a twelve-year-old level. He said, “This difference between twelve and eight, or between nine and eight, is what we call the zone of proximal development.”[44] He further said that the ZPD “defines those functions that have not yet matured but are in the process of maturation, functions that will mature tomorrow but are currently in an embryonic state.”[44] The zone is bracketed by the learner's current ability and the ability they can achieve with the aid of an instructor of some capacity. Vygotsky viewed the ZPD as a better way to explain the relation between children's learning and cognitive development. Prior to the ZPD, the relation between learning and development could be boiled down to the following three major positions: 1) development always precedes learning (e.g., constructivism): children first need to meet a particular maturation level before learning can occur; 2) learning and development cannot be separated, but instead occur simultaneously (e.g., behaviorism): essentially, learning is development; and 3) learning and development are separate but interactive processes (e.g., gestaltism): one process always prepares the other process, and vice versa.
Vygotsky rejected these three major theories because he believed that learning should always precede development in the ZPD. According to Vygotsky, through the assistance of a more knowledgeable other, a child can learn skills or aspects of a skill that go beyond the child's actual developmental or maturational level. The lower limit of the ZPD is the level of skill reached by the child working independently (also referred to as the child's developmental level). The upper limit is the level of potential skill that the child can reach with the assistance of a more capable instructor. In this sense, the ZPD provides a prospective view of cognitive development, as opposed to a retrospective view that characterizes development in terms of a child's independent capabilities. The advancement through, and attainment of, the upper limit of the ZPD is limited by the instructional and scaffolding-related capabilities of the more knowledgeable other (MKO). The MKO is typically assumed to be an older, more experienced teacher or parent, but often can be a learner's peer or someone their junior. The MKO need not even be a person; it can be a machine or book, or another source of visual and/or audio input.[45] Elaborating on Vygotsky's theory, Jerome Bruner and other educational psychologists developed the important concept of instructional scaffolding, in which the social or information environment offers supports for learning that are gradually withdrawn as they become internalized.[46] Jean Piaget was interested in how an organism adapts to its environment. Piaget hypothesized that infants are born with a schema operating at birth that he called "reflexes". Piaget identified four stages in cognitive development.
The four stages are the sensorimotor stage, pre-operational stage, concrete operational stage, and formal operational stage.[47] To understand the characteristics of learners in childhood, adolescence, adulthood, and old age, educational psychology develops and applies theories of human development.[48] Often represented as stages through which people pass as they mature, developmental theories describe changes in mental abilities (cognition), social roles, moral reasoning, and beliefs about the nature of knowledge. For example, educational psychologists have conducted research on the instructional applicability of Jean Piaget's theory of development, according to which children mature through four stages of cognitive capability. Piaget hypothesized that children are not capable of abstract logical thought until they are older than about 11 years, and therefore younger children need to be taught using concrete objects and examples. Researchers have found that transitions, such as from concrete to abstract logical thought, do not occur at the same time in all domains. A child may be able to think abstractly about mathematics but remain limited to concrete thought when reasoning about human relationships. Perhaps Piaget's most enduring contribution is his insight that people actively construct their understanding through a self-regulatory process.[32] Piaget proposed a developmental theory of moral reasoning in which children progress from a naïve understanding of morality based on behavior and outcomes to a more advanced understanding based on intentions. Piaget's views of moral development were elaborated by Lawrence Kohlberg into a stage theory of moral development. There is evidence that the moral reasoning described in stage theories is not sufficient to account for moral behavior. For example, other factors such as modeling (as described by the social cognitive theory of morality) are required to explain bullying.
Rudolf Steiner's model of child development interrelates physical, emotional, cognitive, and moral development[49] in developmental stages similar to those later described by Piaget.[50] Developmental theories are sometimes presented not as shifts between qualitatively different stages, but as gradual increments on separate dimensions. Development of epistemological beliefs (beliefs about knowledge) has been described in terms of gradual changes in people's belief in the certainty and permanence of knowledge, the fixedness of ability, and the credibility of authorities such as teachers and experts. People develop more sophisticated beliefs about knowledge as they gain in education and maturity.[51] Motivation is an internal state that activates, guides, and sustains behavior. Motivation can have several effects on how students learn and how they behave towards subject matter.[52] Educational psychology research on motivation is concerned with the volition or will that students bring to a task, their level of interest and intrinsic motivation, the personally held goals that guide their behavior, and their beliefs about the causes of their success or failure. Whereas intrinsic motivation deals with activities that act as their own rewards, extrinsic motivation deals with motivations that are brought on by consequences or punishments. A form of attribution theory developed by Bernard Weiner[53] describes how students' beliefs about the causes of academic success or failure affect their emotions and motivations. For example, when students attribute failure to lack of ability, and ability is perceived as uncontrollable, they experience the emotions of shame and embarrassment and consequently decrease effort and show poorer performance.
In contrast, when students attribute failure to lack of effort, and effort is perceived as controllable, they experience the emotion of guilt and consequently increase effort and show improved performance.[53] The self-determination theory (SDT) was developed by psychologists Edward Deci and Richard Ryan. SDT focuses on the importance of intrinsic and extrinsic motivation in driving human behavior and posits inherent growth and development tendencies. It emphasizes the degree to which an individual's behavior is self-motivated and self-determined. When applied to the realm of education, self-determination theory is concerned primarily with promoting in students an interest in learning, a value of education, and a confidence in their own capacities and attributes.[54] Motivational theories also explain how learners' goals affect the way they engage with academic tasks.[55] Those who have mastery goals strive to increase their ability and knowledge. Those who have performance approach goals strive for high grades and seek opportunities to demonstrate their abilities. Those who have performance avoidance goals are driven by fear of failure and avoid situations where their abilities are exposed. Research has found that mastery goals are associated with many positive outcomes, such as persistence in the face of failure, preference for challenging tasks, creativity, and intrinsic motivation. Performance avoidance goals are associated with negative outcomes such as poor concentration while studying, disorganized studying, less self-regulation, shallow information processing, and test anxiety. Performance approach goals are associated with positive outcomes, and some negative outcomes such as an unwillingness to seek help and shallow information processing.[55] Locus of control is a salient factor in the successful academic performance of students. During the 1970s and '80s, Cassandra B.
Whyte did significant educational research studying locus of control as related to the academic achievement of students pursuing higher education coursework. Much of her educational research and publications focused upon the theories of Julian B. Rotter in regard to the importance of internal control and successful academic performance.[56] Whyte reported that individuals who perceive and believe that their hard work may lead to more successful academic outcomes, instead of depending on luck or fate, persist and achieve academically at a higher level. Therefore, it is important to provide education and counseling in this regard.[57] Instructional design, the systematic design of materials, activities, and interactive environments for learning, is broadly informed by educational psychology theories and research. For example, in defining learning goals or objectives, instructional designers often use a taxonomy of educational objectives created by Benjamin Bloom and colleagues.[58] Bloom also researched mastery learning, an instructional strategy in which learners only advance to a new learning objective after they have mastered its prerequisite objectives. Bloom[59] discovered that a combination of mastery learning with one-to-one tutoring is highly effective, producing learning outcomes far exceeding those normally achieved in classroom instruction. Gagné, another psychologist, had earlier developed an influential method of task analysis in which a terminal learning goal is expanded into a hierarchy of learning objectives[60] connected by prerequisite relationships. Various technological resources incorporate computer-aided instruction and intelligence for educational psychologists and their students. Technology is essential to the field of educational psychology, not only for psychologists themselves in terms of testing, organization, and resources, but also for students.
Educational psychologists who reside in the K-12 setting focus most of their time on special education students. It has been found that students with disabilities learning through technology such as iPad applications and videos are more engaged and motivated to learn in the classroom setting. Liu et al. explain that learning-based technology allows students to be more focused, and that learning is more efficient with learning technologies. The authors explain that learning technology also allows students with social-emotional disabilities to participate in distance learning.[61] Research on classroom management and pedagogy is conducted to guide teaching practice and form a foundation for teacher education programs. The goals of classroom management are to create an environment conducive to learning and to develop students' self-management skills. More specifically, classroom management strives to create positive teacher-student and peer relationships, manage student groups to sustain on-task behavior, and use counseling and other psychological methods to aid students who present persistent psychosocial problems.[63] Introductory educational psychology is a commonly required area of study in most North American teacher education programs. When taught in that context, its content varies, but it typically emphasizes learning theories (especially cognitively oriented ones), issues about motivation, assessment of students' learning, and classroom management. A developing Wikibook about educational psychology gives more detail about the educational psychology topics that are typically presented in preservice teacher education. In order to become an educational psychologist, students can complete an undergraduate degree of their choice. They then must go to graduate school to study educational psychology, counseling psychology, or school counseling. Most students today are also receiving their doctoral degrees in order to hold the "psychologist" title.
Educational psychologists work in a variety of settings. Some work in university settings where they carry out research on the cognitive and social processes of human development, learning, and education. Educational psychologists may also work as consultants in designing and creating educational materials, classroom programs, and online courses. Educational psychologists who work in K–12 school settings (closely related are school psychologists in the US and Canada) are trained at the master's and doctoral levels. In addition to conducting assessments, school psychologists provide services such as academic and behavioral intervention, counseling, teacher consultation, and crisis intervention. However, school psychologists are generally more individually oriented towards students.[64] Many high schools and colleges are increasingly offering educational psychology courses, with some colleges offering it as a general education requirement. Similarly, colleges offer students opportunities to obtain a Ph.D. in educational psychology. Within the UK, students must hold a degree that is accredited by the British Psychological Society (either undergraduate or at the master's level) before applying for a three-year doctoral course that involves further education, placement, and a research thesis. In recent years, many university training programs in the US have included curricula that focus on issues of race, gender, disability, trauma, and poverty, and how those issues affect learning and academic outcomes. A growing number of universities offer specialized certificates that allow professionals to work and study in these fields (e.g., autism specialists, trauma specialists). Employment for psychologists in the United States, anticipated to grow by 18–26%, was expected as of 2014 to grow faster than most occupations. One in four psychologists is employed in educational settings.
In the United States, the median salary for psychologists in primary and secondary schools was US$58,360 as of May 2004.[65] In recent decades, the participation of women as professional researchers in North American educational psychology has risen dramatically.[66] As opposed to some other fields of educational research, quantitative methods are the predominant mode of inquiry in educational psychology, but qualitative and mixed-methods studies are also common.[67] Educational psychology, as much as any other field of psychology, relies on a balance of observational, correlational, and experimental study designs. Given the complexities of modeling dependent data and psychological variables in school settings, educational psychologists have been at the forefront of the development of several common statistical tools, including psychometric methods, meta-analysis, regression discontinuity, and latent variable modeling.
https://en.wikipedia.org/wiki/Educational_psychology
Zero-shot learning (ZSL) is a problem setup in deep learning where, at test time, a learner observes samples from classes which were not observed during training, and needs to predict the class that they belong to. The name is a play on words based on the earlier concept of one-shot learning, in which classification can be learned from only one, or a few, examples. Zero-shot methods generally work by associating observed and non-observed classes through some form of auxiliary information, which encodes observable distinguishing properties of objects.[1] For example, given a set of images of animals to be classified, along with auxiliary textual descriptions of what animals look like, an artificial intelligence model which has been trained to recognize horses, but has never been given a zebra, can still recognize a zebra when it also knows that zebras look like striped horses. This problem is widely studied in computer vision, natural language processing, and machine perception.[2] The first paper on zero-shot learning in natural language processing appeared in a 2008 paper by Chang, Ratinov, Roth, and Srikumar at AAAI’08, but the name given to the learning paradigm there was dataless classification.[3] The first paper on zero-shot learning in computer vision appeared at the same conference, under the name zero-data learning.[4] The term zero-shot learning itself first appeared in the literature in a 2009 paper from Palatucci, Hinton, Pomerleau, and Mitchell at NIPS’09.[5] This terminology was repeated later in another computer vision paper,[6] and the term zero-shot learning caught on, as a take-off on one-shot learning, which had been introduced in computer vision years earlier.[7] In computer vision, zero-shot learning models learn parameters for seen classes along with their class representations and rely on the representational similarity among class labels so that, during inference, instances can be classified into new classes.
In natural language processing, the key technical direction developed builds on the ability to "understand the labels"—representing the labels in the same semantic space as that of the documents to be classified. This supports the classification of a single example without observing any annotated data, the purest form of zero-shot classification. The original paper[3] made use of the Explicit Semantic Analysis (ESA) representation, but later papers made use of other representations, including dense representations. This approach was also extended to multilingual domains,[8][9] fine entity typing,[10] and other problems. Moreover, beyond relying solely on representations, the computational approach has been extended to depend on transfer from other tasks, such as textual entailment[11] and question answering.[12] The original paper[3] also points out that, beyond the ability to classify a single example, when a collection of examples is given, with the assumption that they come from the same distribution, it is possible to bootstrap the performance in a semi-supervised-like manner (or transductive learning). Unlike standard generalization in machine learning, where classifiers are expected to correctly classify new samples to classes they have already observed during training, in ZSL no samples from the classes have been given during training of the classifier. ZSL can therefore be viewed as an extreme case of domain adaptation. Naturally, some form of auxiliary information has to be given about these zero-shot classes, and this information can be of several types. The above ZSL setup assumes that at test time only zero-shot samples are given, namely, samples from new, unseen classes. In generalized zero-shot learning, samples from both new and known classes may appear at test time. This poses new challenges for classifiers at test time, because it is very challenging to estimate whether a given sample is new or known.
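The attribute-based mechanism described above can be illustrated with a minimal sketch. The animal classes, attribute vectors, and predicted embedding below are all hypothetical; a real system would learn the mapping from images (or documents) into the shared attribute space, rather than hard-coding it.

```python
import numpy as np

# Hypothetical attribute vectors describing classes: [has_stripes, horse_like, has_mane].
# "zebra" is never seen during training; only its attribute description is known.
class_attributes = {
    "horse": np.array([0.0, 1.0, 1.0]),
    "tiger": np.array([1.0, 0.0, 0.0]),
    "zebra": np.array([1.0, 1.0, 1.0]),  # unseen class, described as a striped horse
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(predicted_attributes):
    """Assign the class whose attribute vector is most similar to the
    attribute embedding predicted for the input sample."""
    return max(class_attributes,
               key=lambda c: cosine(predicted_attributes, class_attributes[c]))

# Suppose a trained attribute predictor maps a zebra photo to roughly these attributes:
predicted = np.array([0.9, 0.8, 0.9])
print(zero_shot_classify(predicted))  # "zebra", despite no zebra training images
```

The same nearest-class-embedding rule underlies both the computer vision (attribute vectors) and NLP (label embeddings in document space) variants described above.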
Several approaches have been proposed to handle this, and zero-shot learning has been applied across a range of fields.
https://en.wikipedia.org/wiki/Zero-shot_learning
External validity is the validity of applying the conclusions of a scientific study outside the context of that study.[1] In other words, it is the extent to which the results of a study can generalize or transport to other situations, people, stimuli, and times.[2][3] Generalizability refers to the applicability of a predefined sample to a broader population, while transportability refers to the applicability of one sample to another target population.[2] In contrast, internal validity is the validity of conclusions drawn within the context of a particular study. Mathematical analysis of external validity concerns a determination of whether generalization across heterogeneous populations is feasible, and the devising of statistical and computational methods that produce valid generalizations.[4] In establishing external validity, scholars tend to identify the "scope" of the study, which refers to the applicability or limitations of the theory or argument of the study.[2] This entails defining the sample of the study and the broader population that the sample represents.[2] "A threat to external validity is an explanation of how you might be wrong in making a generalization from the findings of a particular study."[5] In most cases, generalizability is limited when the effect of one factor (i.e., the independent variable) depends on other factors. Therefore, all threats to external validity can be described as statistical interactions.[6] Note that a study's external validity is limited by its internal validity: if a causal inference made within a study is invalid, then generalizations of that inference to other contexts will also be invalid. Cook and Campbell[7] made the crucial distinction between generalizing to some population and generalizing across subpopulations defined by different levels of some background factor.
Lynch has argued that it is almost never possible to generalize to meaningful populations except as a snapshot of history, but that it is possible to test the degree to which the effect of some cause on some dependent variable generalizes across subpopulations that vary in some background factor. That requires a test of whether the treatment effect being investigated is moderated by interactions with one or more background factors.[6][8] Whereas enumerating threats to validity may help researchers avoid unwarranted generalizations, many of those threats can be disarmed, or neutralized in a systematic way, so as to enable a valid generalization. Specifically, experimental findings from one population can be "re-processed", or "re-calibrated", so as to circumvent population differences and produce valid generalizations in a second population, where experiments cannot be performed. Pearl and Bareinboim[4] classified generalization problems into two categories: (1) those that lend themselves to valid re-calibration, and (2) those where external validity is theoretically impossible. Using graph-based causal inference calculus,[9] they derived a necessary and sufficient condition for a problem instance to enable a valid generalization, and devised algorithms that automatically produce the needed re-calibration, whenever such exists.[10] This reduces the external validity problem to an exercise in graph theory, and has led some philosophers to conclude that the problem is now solved.[11] An important variant of the external validity problem deals with selection bias, also known as sampling bias—that is, bias created when studies are conducted on non-representative samples of the intended population. For example, if a clinical trial is conducted on college students, an investigator may wish to know whether the results generalize to the entire population, where attributes such as age, education, and income differ substantially from those of a typical student.
The graph-based method of Bareinboim and Pearl identifies conditions under which sample selection bias can be circumvented and, when these conditions are met, the method constructs an unbiased estimator of the average causal effect in the entire population. The main difference between generalization from improperly sampled studies and generalization across disparate populations lies in the fact that disparities among populations are usually caused by preexisting factors, such as age or ethnicity, whereas selection bias is often caused by post-treatment conditions, for example, patients dropping out of the study, or patients selected by severity of injury. When selection is governed by post-treatment factors, unconventional re-calibration methods are required to ensure bias-free estimation, and these methods are readily obtained from the problem's graph.[12][13] If age is judged to be a major factor causing the treatment effect to vary from individual to individual, then age differences between the sampled students and the general population would lead to a biased estimate of the average treatment effect in that population. Such bias can be corrected, though, by a simple re-weighting procedure: we take the age-specific effect in the student subpopulation and compute its average using the age distribution in the general population. This gives an unbiased estimate of the average treatment effect in the population. If, on the other hand, the relevant factor that distinguishes the study sample from the general population is itself affected by the treatment, then a different re-weighting scheme needs to be invoked. Calling this factor Z, we again average the z-specific effect of X on Y in the experimental sample, but now we weigh it by the "causal effect" of X on Z. In other words, the new weight is the proportion of units attaining level Z = z had treatment X = x been administered to the entire population.
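The age re-weighting just described can be sketched numerically. All numbers below (stratum-specific effects and age shares) are hypothetical, chosen only to show how the same age-specific effects yield different averages under the study's age distribution versus the target population's.

```python
# Hypothetical age-specific treatment effects estimated in a student-heavy sample,
# and the age distributions of the study sample and the general population.
age_specific_effect  = {"18-25": 4.0, "26-40": 2.0, "41-65": 1.0}
study_age_share      = {"18-25": 0.8, "26-40": 0.15, "41-65": 0.05}
population_age_share = {"18-25": 0.2, "26-40": 0.4,  "41-65": 0.4}

# Naive average: weights reflect the (non-representative) study sample.
naive = sum(age_specific_effect[a] * study_age_share[a]
            for a in age_specific_effect)

# Re-weighted (post-stratified) average: same stratum effects,
# weighted by the target population's age distribution.
transported = sum(age_specific_effect[a] * population_age_share[a]
                  for a in age_specific_effect)

print(round(naive, 2))        # biased toward the student-heavy sample
print(round(transported, 2))  # estimate re-calibrated to the general population
```

With these illustrative numbers, the naive average (3.55) overstates the population-level effect (2.0) because young respondents, for whom the hypothetical effect is largest, are over-represented in the study.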
This interventional probability, often written using do-calculus[14] as P(Z = z | do(X = x)), can sometimes be estimated from observational studies in the general population. A typical example of this nature occurs when Z is a mediator between the treatment and outcome. For instance, the treatment may be a cholesterol-reducing drug, Z may be cholesterol level, and Y life expectancy. Here, Z is both affected by the treatment and a major factor in determining the outcome, Y. Suppose that subjects selected for the experimental study tend to have higher cholesterol levels than is typical in the general population. To estimate the average effect of the drug on survival in the entire population, we first compute the z-specific treatment effect in the experimental study, and then average it using P(Z = z | do(X = x)) as a weighting function. The estimate obtained will be bias-free even when Z and Y are confounded—that is, when there is an unmeasured common factor that affects both Z and Y.[15] The precise conditions ensuring the validity of this and other weighting schemes are formulated in Bareinboim and Pearl (2016)[15] and Bareinboim et al. (2014).[13] In many studies and research designs, there may be a trade-off between internal validity and external validity:[16][17][18] attempts to increase internal validity may also limit the generalizability of the findings, and vice versa. This situation has led many researchers to call for "ecologically valid" experiments. By that they mean that experimental procedures should resemble "real-world" conditions. They criticize the lack of ecological validity in many laboratory-based studies with a focus on artificially controlled and constricted environments.
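The mediator-based weighting can be sketched in the same way. The z-specific effects and the interventional distribution P(Z = z | do(X = x)) below are hypothetical numbers; in practice the latter might be estimated from observational data in the general population, as noted above.

```python
# Hypothetical z-specific effects of the drug (X) on survival (Y)
# at each level of the mediator Z (cholesterol level).
z_specific_effect = {"low": 10.0, "medium": 6.0, "high": 2.0}

# Hypothetical interventional distribution P(Z=z | do(X=x)) in the population:
# the share of people who would attain each cholesterol level under treatment.
p_z_do_x = {"low": 0.5, "medium": 0.3, "high": 0.2}

# Average the z-specific effects with the do-probabilities as weights.
avg_effect = sum(z_specific_effect[z] * p_z_do_x[z] for z in z_specific_effect)
print(round(avg_effect, 2))  # population-level average effect
```

Note the contrast with simple covariate re-weighting: because Z is affected by the treatment, the weights come from the interventional distribution rather than from Z's observed distribution in either sample.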
Some researchers think external validity and ecological validity are closely related, in the sense that causal inferences based on ecologically valid research designs often allow for higher degrees of generalizability than those obtained in an artificially produced lab environment. However, this again relates to the distinction between generalizing to some population (closely related to concerns about ecological validity) and generalizing across subpopulations that differ on some background factor. Some findings produced in ecologically valid research settings may hardly be generalizable, and some findings produced in highly controlled settings may claim near-universal external validity. Thus, external and ecological validity are independent—a study may possess external validity but not ecological validity, and vice versa. Within the qualitative research paradigm, external validity is replaced by the concept of transferability. Transferability is the ability of research results to transfer to situations with similar parameters, populations, and characteristics.[19] It is common for researchers to claim that experiments are by their nature low in external validity. Some claim that many drawbacks can occur when following the experimental method. By virtue of gaining enough control over the situation so as to randomly assign people to conditions and rule out the effects of extraneous variables, the situation can become somewhat artificial and distant from real life.
There are two kinds of generalizability at issue. However, both of these considerations pertain to Cook and Campbell's concept of generalizing to some target population, rather than the arguably more central task of assessing the generalizability of findings from an experiment across subpopulations that differ from the specific situation studied, and to people who differ from the respondents studied in some meaningful way.[7] Critics of experiments suggest that external validity could be improved by the use of field settings (or, at a minimum, realistic laboratory settings) and by the use of true probability samples of respondents. However, if one's goal is to understand generalizability across subpopulations that differ in situational or personal background factors, these remedies do not have the efficacy in increasing external validity that is commonly ascribed to them. If background factor × treatment interactions exist of which the researcher is unaware (as seems likely), these research practices can mask a substantial lack of external validity. Dipboye and Flanagan, writing about industrial and organizational psychology, note that the evidence is that findings from one field setting and from one lab setting are equally unlikely to generalize to a second field setting.[20] Thus, field studies are not by their nature high in external validity, and laboratory studies are not by their nature low in external validity. It depends in both cases on whether the particular treatment effect studied would change with changes in background factors that are held constant in that study. If one's study is "unrealistic" on the level of some background factor that does not interact with the treatments, it has no effect on external validity.
It is only if an experiment holds some background factor constant at an unrealistic level, and if varying that background factor would have revealed a strong treatment × background factor interaction, that external validity is threatened.[6] Research in psychology experiments attempted in universities is often criticized for being conducted in artificial situations and for not generalizing to real life.[21][22] To solve this problem, social psychologists attempt to increase the generalizability of their results by making their studies as realistic as possible. As noted above, this is in the hope of generalizing to some specific population. Realism per se does not help make statements about whether the results would change if the setting were somehow more realistic, or if study participants were placed in a different realistic setting. If only one setting is tested, it is not possible to make statements about generalizability across settings.[6][8] However, many authors conflate external validity and realism. There is more than one way that an experiment can be realistic. The extent to which an experiment is similar to real-life situations is referred to as the experiment's mundane realism.[21] It is more important to ensure that a study is high in psychological realism—how similar the psychological processes triggered in an experiment are to psychological processes that occur in everyday life.[23] Psychological realism is heightened if people find themselves engrossed in a real event. To accomplish this, researchers sometimes tell the participants a cover story—a false description of the study's purpose. If, however, the experimenters were to tell the participants the purpose of the experiment, such a procedure would be low in psychological realism. In everyday life, no one knows when emergencies are going to occur, and people do not have time to plan responses to them.
This means that the kinds of psychological processes triggered would differ widely from those of a real emergency, reducing the psychological realism of the study.[3] People do not always know why they do what they do, or even what they will do until it happens. Therefore, describing an experimental situation to participants and then asking them to respond normally will produce responses that may not match the behavior of people who are actually in the same situation. We cannot depend on people's predictions about what they would do in a hypothetical situation; we can only find out what people will really do when we construct a situation that triggers the same psychological processes as occur in the real world. Social psychologists study the way in which people, in general, are susceptible to social influence. Several experiments have documented an interesting, unexpected example of social influence, whereby the mere knowledge that others were present reduced the likelihood that people helped. The only way to be certain that the results of an experiment represent the behaviour of a particular population is to ensure that participants are randomly selected from that population. Unlike in surveys, samples in experiments cannot be randomly selected, because it is impractical and expensive to select random samples for social psychology experiments. It is difficult enough to convince a random sample of people to agree to answer a few questions over the telephone as part of a political poll, and such polls can cost thousands of dollars to conduct. Moreover, even if one somehow was able to recruit a truly random sample, there can be unobserved heterogeneity in the effects of the experimental treatments: a treatment can have a positive effect on some subgroups but a negative effect on others.
The effects shown in the treatment averages may not generalize to any subgroup.[6][24] Many researchers address this problem by studying basic psychological processes that make people susceptible to social influence, assuming that these processes are so fundamental that they are universally shared. Some social psychological processes do vary in different cultures, and in those cases, diverse samples of people have to be studied.[25] The ultimate test of an experiment's external validity is replication: conducting the study over again, generally with different subject populations or in different settings. Researchers will often use different methods to see if they still get the same results. When many studies of one problem are conducted, the results can vary: several studies might find an effect of the number of bystanders on helping behaviour, whereas a few do not. To make sense of this, there is a statistical technique called meta-analysis that averages the results of two or more studies to see if the effect of an independent variable is reliable. A meta-analysis essentially tells us the probability that the findings across the results of many studies are attributable to chance or to the independent variable. If an independent variable is found to have an effect in only one of 20 studies, the meta-analysis will tell us that that one study was an exception and that, on average, the independent variable is not influencing the dependent variable. If an independent variable is having an effect in most of the studies, the meta-analysis is likely to tell us that, on average, it does influence the dependent variable. There can be reliable phenomena that are not limited to the laboratory.
For example, increasing the number of bystanders has been found to inhibit helping behaviour with many kinds of people, including children, university students, and future ministers;[25] in Israel;[26] in small towns and large cities in the U.S.;[27] in a variety of settings, such as psychology laboratories, city streets, and subway trains;[28] and with a variety of types of emergencies, such as seizures, potential fires, fights, and accidents,[29] as well as with less serious events, such as having a flat tire.[30] Many of these replications have been conducted in real-life settings where people could not possibly have known that an experiment was being conducted. When conducting experiments in psychology, some believe that there is always a trade-off between internal and external validity. Some researchers believe that a good way to increase external validity is by conducting field experiments. In a field experiment, people's behavior is studied outside the laboratory, in its natural setting. A field experiment is identical in design to a laboratory experiment, except that it is conducted in a real-life setting. The participants in a field experiment are unaware that the events they experience are in fact an experiment. Some claim that the external validity of such an experiment is high because it is taking place in the real world, with real people who are more diverse than a typical university student sample. However, as real-world settings differ dramatically, findings in one real-world setting may or may not generalize to another real-world setting.[20] Neither internal nor external validity is captured in a single experiment. Social psychologists opt first for internal validity, conducting laboratory experiments in which people are randomly assigned to different conditions and all extraneous variables are controlled. Other social psychologists prefer external validity to control, conducting most of their research in field studies, and many do both.
Taken together, both types of studies meet the requirements of the perfect experiment. Through replication, researchers can study a given research question with maximal internal and external validity.[31]
https://en.wikipedia.org/wiki/External_validity
Attention is a machine learning method that determines the importance of each component in a sequence relative to the other components in that sequence. In natural language processing, importance is represented by "soft" weights assigned to each word in a sentence. More generally, attention encodes vectors called token embeddings across a fixed-width sequence that can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serial recurrent neural network (RNN) language translation system, but a more recent design, namely the transformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas about attention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. Academic reviews of the history of the attention mechanism are provided in Niu et al.[1] and Soydaner.[2] Fig 1: seq2seq with RNN + attention.[13] The attention mechanism was added onto the RNN encoder-decoder architecture to improve language translation of long sentences; see the Overview section. The modern era of machine attention was revitalized by grafting an attention mechanism (Fig 1, orange) onto an encoder-decoder. Figure 2 shows the internal step-by-step operation of the attention block (A) in Fig 1. This attention scheme has been compared to the query-key analogy of relational databases.
That comparison suggests an asymmetric role for the Query and Key vectors, where one item of interest (the Query vector "that") is matched against all possible items (the Key vectors of each word in the sentence). However, the parallel calculations of both self- and cross-attention match all tokens of the K matrix with all tokens of the Q matrix; therefore the roles of these vectors are symmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention mechanisms further by studying their roles in focused settings, such as in-context learning,[20] masked language tasks,[21] stripped-down transformers,[22] bigram statistics,[23] N-gram statistics,[24] pairwise convolutions,[25] and arithmetic factoring.[26] In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. Consider an example of translating "I love you" to French. On the first pass through the decoder, 94% of the attention weight is on the first English word "I", so the network offers the word "je". On the second pass of the decoder, 88% of the attention weight is on the third English word "you", so it offers "t'". On the last pass, 95% of the attention weight is on the second English word "love", so it offers "aime". In the "I love you" example, the second word "love" is aligned with the third word "aime". Stacking soft row vectors together for "je", "t'", and "aime" yields an alignment matrix. Sometimes, alignment can be multiple-to-multiple. For example, the English phrase "look it up" corresponds to "cherchez-le".
Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1 and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. Many variants of attention implement soft weights. For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention,[30] channel attention,[31] or combinations.[32][33] These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in the Core Calculations section above. The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot of GPU memory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency.[38] Flex Attention[39] is an attention kernel developed by Meta that allows users to modify attention scores prior to softmax and dynamically chooses the optimal attention algorithm. The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the transformer architecture, which replaced recurrence entirely with attention mechanisms.
As a result, transformers became the foundation for models like BERT, GPT, and T5 (Vaswani et al., 2017). Attention is widely used in natural language processing, computer vision, and speech recognition. In NLP, it improves context understanding in tasks like question answering and summarization. In vision, visual attention helps models focus on relevant image regions, enhancing object detection and image captioning. For matrices Q ∈ ℝ^(m×d_k), K ∈ ℝ^(n×d_k), and V ∈ ℝ^(n×d_v), the scaled dot-product, or QKV attention, is defined as:

Attention(Q, K, V) = softmax(QKᵀ / √d_k) V ∈ ℝ^(m×d_v)

where ᵀ denotes transpose and the softmax function is applied independently to every row of its argument. The matrix Q contains m queries, while matrices K and V jointly contain an unordered set of n key-value pairs. Value vectors in matrix V are weighted using the weights resulting from the softmax operation, so that the rows of the m-by-d_v output matrix are confined to the convex hull of the points in ℝ^(d_v) given by the rows of V. To understand the permutation invariance and permutation equivariance properties of QKV attention,[40] let A ∈ ℝ^(m×m) and B ∈ ℝ^(n×n) be permutation matrices, and D ∈ ℝ^(m×n) an arbitrary matrix.
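The scaled dot-product formula above can be sketched directly in NumPy. This is a minimal illustration, not a production implementation; the shapes and random inputs are arbitrary choices for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product (QKV) attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (m, n) similarity matrix
    weights = softmax(scores, axis=-1)   # each row is non-negative and sums to 1
    return weights @ V                   # (m, d_v): convex combinations of rows of V

rng = np.random.default_rng(0)
m, n, d_k, d_v = 4, 6, 8, 5
Q = rng.normal(size=(m, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
out = attention(Q, K, V)
print(out.shape)  # (4, 5)
```

Because each softmax row sums to 1 with non-negative entries, every output row is a convex combination of the rows of V, matching the convex-hull remark above.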
The softmax function is permutation equivariant in the sense that:

softmax(A D B) = A softmax(D) B

By noting that the transpose of a permutation matrix is also its inverse, it follows that:

Attention(A Q, B K, B V) = A Attention(Q, K, V)

which shows that QKV attention is equivariant with respect to re-ordering the queries (rows of Q), and invariant to re-ordering of the key-value pairs in K and V. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simple self-attention function defined as:

X ↦ Attention(X W_Q, X W_K, X W_V)

is permutation equivariant with respect to re-ordering the rows of the input matrix X in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold for multi-head attention, which is defined below. When QKV attention is used as a building block for an autoregressive decoder, and when at training time all input and output matrices have n rows, a masked attention variant is used:

Attention(Q, K, V) = softmax(QKᵀ / √d_k + M) V

where the mask M ∈ ℝ^(n×n) is a strictly upper triangular matrix, with zeros on and below the diagonal and −∞ in every element above the diagonal. The softmax output, also in ℝ^(n×n), is then lower triangular, with zeros in all elements above the diagonal. The masking ensures that for all 1 ≤ i < j ≤ n, row i of the attention output is independent of row j of any of the three input matrices. The permutation invariance and equivariance properties of standard QKV attention do not hold for the masked variant.
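The masked variant can be sketched in NumPy as well (an illustrative toy, with arbitrary shapes): the −∞ entries above the diagonal force the softmax output to be lower triangular, so row i of the output ignores rows j > i of the inputs.

```python
import numpy as np

def causal_mask(n):
    # Strictly upper-triangular -inf mask: position i may not attend to j > i.
    M = np.zeros((n, n))
    M[np.triu_indices(n, k=1)] = -np.inf
    return M

def masked_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + causal_mask(Q.shape[0])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # exp(-inf) -> 0
    weights = e / e.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(1)
n, d = 5, 4
X = rng.normal(size=(n, d))
out, w = masked_attention(X, X, X)
# The weight matrix is lower triangular: no token attends to a later one.
print(np.allclose(np.triu(w, k=1), 0))  # True
```

Perturbing a later row of K or V leaves the earlier output rows unchanged, which is exactly the independence property stated above.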
Multi-head attention is defined by:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O

where each head is computed with QKV attention as:

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

and W_i^Q, W_i^K, W_i^V, and W^O are parameter matrices. The permutation properties of (standard, unmasked) QKV attention apply here also. For permutation matrices A, B:

MultiHead(A Q, B K, B V) = A MultiHead(Q, K, V)

from which we also see that multi-head self-attention:

X ↦ MultiHead(X, X, X)

is equivariant with respect to re-ordering of the rows of input matrix X. One variant uses additive scoring:

Attention(Q, K, V) = softmax(tanh(W_Q Q + W_K K)) V

where W_Q and W_K are learnable weight matrices.[13] Another uses a bilinear form:

Attention(Q, K, V) = softmax(Q W Kᵀ) V

where W is a learnable weight matrix.[27] Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both encoder and decoder can use self-attention, but with subtle differences. For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixed lookup table. This gives a sequence of hidden vectors h_0, h_1, ….
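The multi-head definition above amounts to running several independent QKV attentions in projected subspaces and mixing the concatenation with W^O. A minimal NumPy sketch (shapes and random parameters are arbitrary for the example):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Standard (unmasked) scaled dot-product attention for one head.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def multi_head(Q, K, V, heads, W_O):
    """heads: list of (W_Q, W_K, W_V) per-head projection triples.
    Each head attends in its own subspace; W_O mixes the concatenation."""
    outs = [attention(Q @ WQ, K @ WK, V @ WV) for WQ, WK, WV in heads]
    return np.concatenate(outs, axis=-1) @ W_O

rng = np.random.default_rng(2)
d_model, d_head, h, m, n = 16, 4, 4, 3, 5
heads = [tuple(rng.normal(size=(d_model, d_head)) for _ in range(3)) for _ in range(h)]
W_O = rng.normal(size=(h * d_head, d_model))
Q = rng.normal(size=(m, d_model))
K = rng.normal(size=(n, d_model))
V = rng.normal(size=(n, d_model))
print(multi_head(Q, K, V, heads, W_O).shape)  # (3, 16)
```

Permuting the rows of Q permutes the output rows in the same way, which demonstrates the equivariance property stated above.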
These can then be applied to a dot-product attention mechanism, to obtain:

h_0′ = Attention(h_0 W^Q, H W^K, H W^V)
h_1′ = Attention(h_1 W^Q, H W^K, H W^V)
⋯

or more succinctly, H′ = Attention(H W^Q, H W^K, H W^V). This can be applied repeatedly to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called "all-to-all attention", as the vector at every position can attend to every other. For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that have yet to be decoded. This can be solved by forcing the attention weights w_ij = 0 for all i < j, called "causal masking". This attention mechanism is the "causally masked self-attention".
https://en.wikipedia.org/wiki/Attention_(machine_learning)
In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions f and g that produces a third function f∗g, as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The term convolution refers to both the resulting function and to the process of computing it. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other. Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution f∗g differs from cross-correlation f⋆g only in that either f(x) or g(x) is reflected about the y-axis in convolution; thus it is a cross-correlation of g(−x) and f(x), or f(−x) and g(x).[A] For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations.[1] The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures).[citation needed] For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at DTFT § Properties.) A discrete convolution can be defined for functions on the set of integers.
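The relation between convolution and cross-correlation can be checked numerically for short discrete sequences (a small illustrative example): reflecting one input turns cross-correlation into convolution.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

def conv_direct(f, g):
    """Direct definition: (f*g)[k] = sum_n f[n] g[k-n]."""
    out = np.zeros(len(f) + len(g) - 1)
    for n, fn in enumerate(f):
        for m, gm in enumerate(g):
            out[n + m] += fn * gm
    return out

# Convolution agrees with NumPy's built-in...
print(np.allclose(conv_direct(f, g), np.convolve(f, g)))  # True
# ...and equals cross-correlation with the reflected kernel g[::-1].
print(np.allclose(np.convolve(f, g), np.correlate(f, g[::-1], mode='full')))  # True
```

For real inputs the only difference between the two operations is the reflection of one argument, exactly as stated above.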
Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.[citation needed] Computing the inverse of the convolution operation is known as deconvolution. The convolution of f and g is written f∗g, denoting the operator with the symbol ∗.[B] It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform:

(f∗g)(t) = ∫_{−∞}^{∞} f(τ) g(t−τ) dτ

An equivalent definition is (see commutativity):

(f∗g)(t) = ∫_{−∞}^{∞} f(t−τ) g(τ) dτ

While the symbol t is used above, it need not represent the time domain. At each t, the convolution formula can be described as the area under the function f(τ) weighted by the function g(−τ) shifted by the amount t. As t changes, the weighting function g(t−τ) emphasizes different parts of the input function f(τ); if t is a positive value, then g(t−τ) is equal to g(−τ) slid along the τ-axis toward the right (toward +∞) by the amount of t, while if t is a negative value, then g(t−τ) is equal to g(−τ) slid toward the left (toward −∞) by the amount of |t|. For functions f, g supported on only [0, ∞) (i.e., zero for negative arguments), the integration limits can be truncated, resulting in:

(f∗g)(t) = ∫_{0}^{t} f(τ) g(t−τ) dτ

For the multi-dimensional formulation of convolution, see domain of definition (below).
A common engineering notational convention is:[2]

f(t) ∗ g(t) := ∫_{−∞}^{∞} f(τ) g(t−τ) dτ

which has to be interpreted carefully to avoid confusion. For instance, f(t)∗g(t−t₀) is equivalent to (f∗g)(t−t₀), but f(t−t₀)∗g(t−t₀) is in fact equivalent to (f∗g)(t−2t₀).[3] Given two functions f(t) and g(t) with bilateral Laplace transforms (two-sided Laplace transforms)

F(s) = ∫_{−∞}^{∞} e^{−su} f(u) du and G(s) = ∫_{−∞}^{∞} e^{−sv} g(v) dv

respectively, the convolution operation (f∗g)(t) can be defined as the inverse Laplace transform of the product of F(s) and G(s).[4][5] More precisely,

F(s) · G(s) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−s(u+v)} f(u) g(v) du dv

Let t = u + v, then

F(s) · G(s) = ∫_{−∞}^{∞} e^{−st} ( ∫_{−∞}^{∞} f(u) g(t−u) du ) dt

Note that F(s)·G(s) is the bilateral Laplace transform of (f∗g)(t). A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. The resulting waveform (not shown here) is the convolution of functions f and g. If f(t) is a unit impulse, the result of this process is simply g(t).
Formally:

∫_{−∞}^{∞} δ(τ) g(t−τ) dτ = g(t)

One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754.[6] Also, an expression of the type:

∫ f(u) g(x−u) du

is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of 3 volumes of the encyclopedic series: Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.[7] Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral.[8] Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.[9][10] The operation:

∫_{0}^{t} φ(s) ψ(t−s) ds, 0 ≤ t < ∞

is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913.[11] When a function g_T is periodic, with period T, then for functions f such that f∗g_T exists, the convolution is also periodic and identical to:

(f∗g_T)(t) ≡ ∫_{t₀}^{t₀+T} [ Σ_{k=−∞}^{∞} f(τ + kT) ] g_T(t − τ) dτ

where t₀ is an arbitrary choice. The summation is called a periodic summation of the function f. When g_T is a periodic summation of another function g, then f∗g_T is known as a circular or cyclic convolution of f and g. And if the periodic summation above is replaced by f_T, the operation is called a periodic convolution of f_T and g_T.
For complex-valued functions f and g defined on the set ℤ of integers, the discrete convolution of f and g is given by:[12]

(f∗g)[n] = Σ_{m=−∞}^{∞} f[m] g[n−m]

or equivalently (see commutativity) by:

(f∗g)[n] = Σ_{m=−∞}^{∞} f[n−m] g[m]

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. Thus when g has finite support in the set {−M, −M+1, …, M−1, M} (representing, for instance, a finite impulse response), a finite summation may be used:[13]

(f∗g)[n] = Σ_{m=−M}^{M} f[n−m] g[m]

When a function g_N is periodic, with period N, then for functions f such that f∗g_N exists, the convolution is also periodic and identical to:

(f∗g_N)[n] ≡ Σ_{m=0}^{N−1} ( Σ_{k=−∞}^{∞} f[m + kN] ) g_N[n − m]

The summation on k is called a periodic summation of the function f. If g_N is a periodic summation of another function g, then f∗g_N is known as a circular convolution of f and g. When the non-zero durations of both f and g are limited to the interval [0, N−1], f∗g_N reduces to this common form:

(f ∗_N g)[n] = Σ_{m=0}^{N−1} f[m] g[(n − m) mod N]

The notation f ∗_N g for cyclic convolution denotes convolution over the cyclic group of integers modulo N. Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation.
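The cyclic form can be computed directly from the mod-N definition; a small worked example (arbitrary inputs, N = 4):

```python
import numpy as np

def circular_convolve(f, g):
    """Cyclic convolution over Z_N: (f *_N g)[k] = sum_n f[n] g[(k - n) mod N]."""
    N = len(f)
    return np.array([sum(f[n] * g[(k - n) % N] for n in range(N))
                     for k in range(N)])

f = np.array([1.0, 2.0, 0.0, 1.0])
g = np.array([0.5, 1.0, 0.0, 0.0])
print(circular_convolve(f, g).tolist())  # [1.5, 2.0, 2.0, 0.5]
```

Note how the index wraps around: the first output value picks up f[3]·g[1], a term that would not appear in ordinary (linear) convolution.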
For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2). Eq. 1 requires N arithmetic operations per output value and N² operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform,[14] use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT.[15] It significantly speeds up 1D,[16] 2D,[17] and 3D[18] convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[19] Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.[20] A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.[21] The convolution of two complex-valued functions on ℝᵈ is itself a complex-valued function on ℝᵈ, defined by:

(f∗g)(x) = ∫_{ℝᵈ} f(y) g(x−y) dy

and is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist.
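The FFT route described above (zero-extend, transform, multiply pointwise, inverse transform) can be sketched in a few lines of NumPy; this is an illustration of the circular convolution theorem, not an optimized kernel:

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution via the circular convolution theorem:
    zero-pad both inputs to length len(f)+len(g)-1 so the circular
    wrap-around never touches nonzero samples, then multiply FFTs pointwise."""
    n = len(f) + len(g) - 1
    return np.real(np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)))

rng = np.random.default_rng(0)
f, g = rng.normal(size=200), rng.normal(size=50)
print(np.allclose(fft_convolve(f, g), np.convolve(f, g)))  # True
```

The padding to length len(f)+len(g)−1 is the "zero-extension" mentioned in the text: it converts the circular convolution computed by the FFT into the desired linear convolution.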
Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g. If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution f∗g is well-defined and continuous. Convolution of f and g is also well defined when both functions are locally square integrable on ℝ and supported on an interval of the form [a, +∞) (or both supported on (−∞, a]). The convolution of f and g exists if f and g are both Lebesgue integrable functions in L¹(ℝᵈ), and in this case f∗g is also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence of Tonelli's theorem. This is also true for functions in L¹, under the discrete convolution, or more generally for the convolution on any group. Likewise, if f ∈ L¹(ℝᵈ) and g ∈ Lᵖ(ℝᵈ) where 1 ≤ p ≤ ∞, then f∗g ∈ Lᵖ(ℝᵈ), and

‖f∗g‖_p ≤ ‖f‖₁ ‖g‖_p

In the particular case p = 1, this shows that L¹ is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable Lᵖ spaces. Specifically, if 1 ≤ p, q, r ≤ ∞ satisfy:

1/p + 1/q = 1/r + 1

then

‖f∗g‖_r ≤ ‖f‖_p ‖g‖_q,  f ∈ Lᵖ, g ∈ L^q

so that the convolution is a continuous bilinear mapping from Lᵖ×L^q to L^r. The Young inequality for convolution is also true in other contexts (circle group, convolution on ℤ). The preceding inequality is not sharp on the real line: when 1 < p, q, r < ∞, there exists a constant B_{p,q} < 1 such that:

‖f∗g‖_r ≤ B_{p,q} ‖f‖_p ‖g‖_q

The optimal value of B_{p,q} was discovered in 1975[22] and independently in 1976,[23] see Brascamp–Lieb inequality. A stronger estimate is true provided 1 < p, q, r < ∞:

‖f∗g‖_r ≤ C_{p,q} ‖f‖_p ‖g‖_{q,w}

where ‖g‖_{q,w} is the weak L^q norm.
Convolution also defines a bilinear continuous map L^{p,w} × L^{q,w} → L^{r,w} for 1 < p, q, r < ∞, owing to the weak Young inequality:[24]

‖f∗g‖_{r,w} ≤ C_{p,q} ‖f‖_{p,w} ‖g‖_{q,w}

In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f∗g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f∗g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution (Stein & Weiss 1971, Theorem 3.3). If f is a smooth function that is compactly supported and g is a distribution, then f∗g is a smooth function defined by

(f∗g)(x) = ⟨g, f(x − ·)⟩

More generally, it is possible to extend the definition of the convolution in a unique way, with φ the same as f above, so that the associative law

f∗(g∗φ) = (f∗g)∗φ

remains valid in the case where f is a distribution, and g a compactly supported distribution (Hörmander 1983, §4.2). The convolution of any two Borel measures μ and ν of bounded variation is the measure μ∗ν defined by (Rudin 1962)

∫_{ℝᵈ} f d(μ∗ν) = ∫_{ℝᵈ} ∫_{ℝᵈ} f(x + y) dμ(x) dν(y)

In particular,

(μ∗ν)(A) = ∫∫ 1_A(x + y) dμ(x) dν(y)

where A ⊂ ℝᵈ is a measurable set and 1_A is the indicator function of A. This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as with the convolution of L¹ functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality:

‖μ∗ν‖ ≤ ‖μ‖ ‖ν‖

where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.
The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity (Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras.

Proof (using the convolution theorem):

q(t) ⟺ Q(f) = R(f) S(f)

q(−t) ⟺ Q(−f) = R(−f) S(−f)

q(−t) = F⁻¹{R(−f) S(−f)} = F⁻¹{R(−f)} ∗ F⁻¹{S(−f)} = r(−t) ∗ s(−t)

where ⟺ denotes a Fourier transform pair.

If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:[25] ∫Rd (f∗g)(x) dx = (∫Rd f(x) dx)(∫Rd g(x) dx). This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem.

In the one-variable case, d/dx (f∗g) = (df/dx) ∗ g = f ∗ (dg/dx), where d/dx is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative: ∂/∂xi (f∗g) = (∂f/∂xi) ∗ g = f ∗ (∂g/∂xi).

A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total. These identities hold for example under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality.
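The commutativity of the product and the Fubini identity for the integral of a convolution both have exact discrete analogues (sums in place of integrals), which can be sanity-checked with NumPy on arbitrary arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=6)
g = rng.normal(size=4)

# Commutativity of the convolution product: f*g == g*f.
assert np.allclose(np.convolve(f, g), np.convolve(g, f))

# Discrete analogue of the Fubini identity: the "integral" (sum) of the
# convolution equals the product of the "integrals" of the factors.
assert np.isclose(np.convolve(f, g).sum(), f.sum() * g.sum())
```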
For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function, d/dx (f∗g) = (df/dx) ∗ g. These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.

In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship: D(f∗g) = (Df)∗g = f∗(Dg).

The convolution theorem states that[26] F{f∗g} = F{f} · F{g}, where F{f} denotes the Fourier transform of f. Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform.

If W is the Fourier transform matrix, then a corresponding identity holds, where ∙ is the face-splitting product,[27][28][29][30][31] ⊗ denotes the Kronecker product, and ∘ denotes the Hadamard product (this result is an evolving of count sketch properties[32]). This can be generalized for appropriate matrices A, B from the properties of the face-splitting product.

The convolution commutes with translations, meaning that τx(f∗g) = (τxf)∗g = f∗(τxg), where τxf is the translation of the function f by x defined by (τxf)(y) = f(y − x). If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function, τxf = f∗τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.

Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds: suppose that S is a bounded linear operator acting on functions which commutes with translations, S(τxf) = τx(Sf) for all x; then S is given as convolution with a function (or distribution) gS, that is, Sf = gS∗f. Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory.
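A quick numerical illustration of the convolution theorem, here in its circular (discrete Fourier transform) form rather than on Rd; the length-8 signals are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
f = rng.normal(size=n)
g = rng.normal(size=n)

# Circular convolution computed directly from the definition...
direct = np.array([sum(f[m] * g[(k - m) % n] for m in range(n)) for k in range(n)])

# ...equals the inverse FFT of the pointwise product of the FFTs.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(direct, via_fft)
```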
The representing function gS is the impulse response of the transformation S. A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.

If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by (f∗g)(x) = ∫G f(y) g(y⁻¹x) dλ(y). It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as ∫ f(xy⁻¹) g(y) dλ(y). The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group. Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former.

On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T): T f(x) = (1/2π) ∫T f(y) g(x − y) dy. The operator T is compact.
A direct calculation shows that its adjoint T* is convolution with the function ḡ(−y). By the commutativity property cited above, T is normal: T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have hk(x) = e^{ikx}, k ∈ Z, which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.

A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform.

A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.

Let G be a (multiplicatively written) topological group. If μ and ν are Radon measures on G, then their convolution μ∗ν is defined as the pushforward measure of the group action and can be written as[33] (μ∗ν)(E) = ∫∫ 1E(xy) dμ(x) dν(y) for each measurable subset E of G. The convolution is also a Radon measure, whose total variation satisfies ‖μ∗ν‖ ≤ ‖μ‖ ‖ν‖.

In the case when G is locally compact with (left-) Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions.
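The diagonalization of circulant convolution operators by the discrete Fourier transform can be checked directly in NumPy; the vector g and size n = 5 are arbitrary choices:

```python
import numpy as np

n = 5
g = np.arange(1.0, n + 1)

# Circulant matrix C[i, j] = g[(i - j) mod n]: applying C to a vector
# is circular convolution with g.
C = np.array([[g[(i - j) % n] for j in range(n)] for i in range(n)])

# The DFT matrix diagonalizes every circulant matrix; the diagonal is the
# DFT of the first column of C (here, g itself).
F = np.fft.fft(np.eye(n))          # DFT matrix: F @ x == np.fft.fft(x)
D = F @ C @ np.linalg.inv(F)
assert np.allclose(D, np.diag(np.fft.fft(g)))
```

The eigenvalues of C are thus exactly the DFT coefficients of g, a finite-group version of the convolution theorem.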
In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution.[34]

If μ and ν are probability measures on the topological group (R, +), then the convolution μ∗ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.

In convex analysis, the infimal convolution of proper (not identically +∞) convex functions f1, …, fm on Rn is defined by:[35] (f1∗⋯∗fm)(x) = inf{ f1(x1) + ⋯ + fm(xm) | x1 + ⋯ + xm = x }. It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform: φ*(x) = sup_y (x·y − φ(y)). We have: (f1∗⋯∗fm)*(x) = f1*(x) + ⋯ + fm*(x).

Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X; then the convolution φ∗ψ is defined as the composition ∇ ∘ (φ ⊗ ψ) ∘ Δ: X → X ⊗ X → X ⊗ X → X. The convolution appears notably in the definition of Hopf algebras (Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that S ∗ idX = idX ∗ S = η ∘ ε.

Convolution and related operations are found in many applications in science, engineering and mathematics.
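The probabilistic interpretation above has a simple discrete illustration (not from the article): the distribution of the sum of two independent dice is the convolution of their probability mass functions:

```python
import numpy as np

# PMF of one fair six-sided die, on values 1..6.
die = np.full(6, 1 / 6)

# The sum of two independent dice has PMF equal to the convolution of the
# two PMFs; the result is supported on values 2..12 (indices 0..10).
two_dice = np.convolve(die, die)

assert np.isclose(two_dice.sum(), 1.0)        # still a probability distribution
assert np.isclose(two_dice[5], 6 / 36)        # P(sum = 7): 6 ways out of 36
assert two_dice.argmax() == 5                 # 7 is the most likely total
```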
https://en.wikipedia.org/wiki/Convolution
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Major tasks in natural language processing are speech recognition, text classification, natural-language understanding, and natural-language generation.

Natural language processing has its roots in the 1950s.[1] Already in 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.

The premise of symbolic NLP is well summarized by John Searle's Chinese room experiment: given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts. Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing.
This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing.[8]

The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular,[18][19] such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach. Rule-based systems nevertheless remain in common use.

In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches.[20][21] The earliest decision trees, producing systems of hard if–then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. A major drawback of statistical methods is that they require elaborate feature engineering.

Since 2015,[22] the statistical approach has been replaced by the neural networks approach, using semantic networks[23] and word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are not needed anymore. Neural machine translation, based on the then-newly invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation.

The following is a list of some of the most commonly researched tasks in natural language processing.
Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.

Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed.[46]

Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above).

Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses."[47] Cognitive science is the interdisciplinary, scientific study of the mind and its processes.[48] Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics.[49] Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics,[50] with two defining aspects.

Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s.
Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., cognitive grammar,[53] functional grammar,[54] construction grammar,[55] computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), however with limited uptake in mainstream NLP (as measured by presence at major conferences[56] of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e.g., under the notion of "cognitive AI".[57] Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit)[58] and to developments in artificial intelligence, specifically tools and technologies using large language model approaches[59] and new directions in artificial general intelligence based on the free energy principle[60] by British neuroscientist and theoretician at University College London Karl J. Friston.
https://en.wikipedia.org/wiki/Natural-language_processing
The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979.[1][2] It has been used for Japanese handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks.[3]

Previously, in 1969, Fukushima published a similar architecture, but with hand-designed kernels inspired by convolutions in mammalian vision.[4] In 1975 he improved it to the Cognitron,[5][6] and in 1979 he improved it to the neocognitron, which learns all convolutional kernels by unsupervised learning (in his terminology, "self-organized by 'learning without a teacher'").[2]

The neocognitron was inspired by the model proposed by Hubel and Wiesel in 1959. They found two types of cells in the primary visual cortex, called simple cells and complex cells, and also proposed a cascading model of these two types of cells for use in pattern recognition tasks.[7][8] The neocognitron is a natural extension of these cascading models.

The neocognitron consists of multiple types of cells, the most important of which are called S-cells and C-cells.[9] The local features are extracted by S-cells, and these features' deformations, such as local shifts, are tolerated by C-cells. Local features in the input are integrated gradually and classified in the higher layers.[10] The idea of local feature integration is found in several other models, such as the convolutional neural network model, the SIFT method, and the HoG method.

There are various kinds of neocognitron.[11] For example, some types of neocognitron can detect multiple patterns in the same input by using backward signals to achieve selective attention.[12]
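A minimal NumPy sketch of the S-cell/C-cell division of labor (an illustration assumed for this article, not Fukushima's actual architecture): a convolution-like S-layer extracts a local feature, and a max-pooling C-layer makes the response tolerant to small shifts:

```python
import numpy as np

def s_layer(img, kernel):
    """S-cells: extract a local feature by valid 2-D cross-correlation, rectified."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)

def c_layer(resp, pool=2):
    """C-cells: tolerate small shifts by max-pooling over local windows."""
    h, w = resp.shape
    return np.array([[resp[i:i + pool, j:j + pool].max()
                      for j in range(0, w - pool + 1, pool)]
                     for i in range(0, h - pool + 1, pool)])

# The same vertical edge, shifted by one pixel, gives identical C-cell output:
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])   # responds to a 0 -> 1 vertical edge
img_a = np.zeros((6, 6)); img_a[:, 1:] = 1.0
img_b = np.zeros((6, 6)); img_b[:, 2:] = 1.0    # edge shifted right by one pixel
pooled_a = c_layer(s_layer(img_a, kernel))
pooled_b = c_layer(s_layer(img_b, kernel))
assert np.allclose(pooled_a, pooled_b)          # C-cells absorb the small shift
```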
https://en.wikipedia.org/wiki/Neocognitron
The scale-invariant feature transform (SIFT) is a computer vision algorithm to detect, describe, and match local features in images, invented by David Lowe in 1999.[1] Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife, and match moving.

SIFT keypoints of objects are first extracted from a set of reference images[1] and stored in a database. An object is recognized in a new image by individually comparing each feature from the new image to this database and finding candidate matching features based on the Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale, and orientation in the new image are identified to filter out good matches. The determination of consistent clusters is performed rapidly by using an efficient hash table implementation of the generalised Hough transform. Each cluster of 3 or more features that agree on an object and its pose is then subject to further detailed model verification, and subsequently outliers are discarded. Finally, the probability that a particular set of features indicates the presence of an object is computed, given the accuracy of fit and the number of probable false matches. Object matches that pass all these tests can be identified as correct with high confidence.[2]

Although the SIFT algorithm was previously protected by a patent, its patent expired in 2020.[3]

For any object in an image, we can extract important points in the image to provide a "feature description" of the object. This description, extracted from a training image, can then be used to locate the object in a new (previously unseen) image containing other objects. In order to do this reliably, the features should be detectable even if the image is scaled, or if it has noise and different illumination.
Such points usually lie on high-contrast regions of the image, such as object edges. Another important characteristic of these features is that the relative positions between them in the original scene should not change between images. For example, if only the four corners of a door were used as features, they would work regardless of the door's position; but if points in the frame were also used, the recognition would fail if the door is opened or closed. Similarly, features located in articulated or flexible objects would typically not work if any change in their internal geometry happens between two images in the set being processed. In practice, SIFT detects and uses a much larger number of features from the images, which reduces the contribution of the errors caused by these local variations to the average error of all feature matching errors.

SIFT[3] can robustly identify objects even among clutter and under partial occlusion, because the SIFT feature descriptor is invariant to uniform scaling, orientation, and illumination changes, and partially invariant to affine distortion.[1] This section summarizes the original SIFT algorithm and mentions a few competing techniques available for object recognition under clutter and partial occlusion.

The SIFT descriptor is based on image measurements in terms of receptive fields[4][5][6][7] over which local scale invariant reference frames[8][9] are established by local scale selection.[10][11][9] A general theoretical explanation about this is given in the Scholarpedia article on SIFT.[12]

The detection and description of local image features can help in object recognition. The SIFT features are local and based on the appearance of the object at particular interest points, and are invariant to image scale and rotation. They are also robust to changes in illumination, noise, and minor changes in viewpoint.
In addition to these properties, they are highly distinctive, relatively easy to extract, and allow for correct object identification with low probability of mismatch. They are relatively easy to match against a (large) database of local features, although the high dimensionality can be an issue, so probabilistic algorithms such as k-d trees with best bin first search are generally used. Object description by a set of SIFT features is also robust to partial occlusion: as few as 3 SIFT features from an object are enough to compute its location and pose. Recognition can be performed in close-to-real time, at least for small databases and on modern computer hardware.[citation needed]

Lowe's method for image feature generation transforms an image into a large collection of feature vectors, each of which is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. These features share similar properties with neurons in the primary visual cortex that encode basic forms, color, and movement for object detection in primate vision.[13] Key locations are defined as maxima and minima of the result of a difference of Gaussians function applied in scale space to a series of smoothed and resampled images. Low-contrast candidate points and edge response points along an edge are discarded. Dominant orientations are assigned to localized keypoints. These steps ensure that the keypoints are more stable for matching and recognition. SIFT descriptors robust to local affine distortion are then obtained by considering pixels around a radius of the key location, blurring, and resampling local image orientation planes.

Indexing consists of storing SIFT keys and identifying matching keys from the new image. Lowe used a modification of the k-d tree algorithm called the best-bin-first search (BBF) method[14] that can identify the nearest neighbors with high probability using only a limited amount of computation.
The BBF algorithm uses a modified search ordering for the k-d tree algorithm so that bins in feature space are searched in the order of their closest distance from the query location. This search order requires the use of a heap-based priority queue for efficient determination of the search order. We obtain a candidate for each keypoint by identifying its nearest neighbor in the database of keypoints from training images. The nearest neighbors are defined as the keypoints with minimum Euclidean distance from the given descriptor vector.

To decide whether a given candidate match should be kept or discarded, Lowe[2] checks the ratio between the distance to this nearest neighbor and the distance to the closest keypoint that is not of the same object class as the candidate. The idea is that a match can be trusted only when feature vectors from other object classes do not lie at a comparably small distance, which is a natural consequence of using Euclidean distance as the nearest-neighbor measure. A candidate is rejected whenever this ratio exceeds 0.8. This method eliminated 90% of false matches while discarding less than 5% of correct matches. To further improve efficiency, the best-bin-first search was cut off after checking the first 200 nearest-neighbor candidates. For a database of 100,000 keypoints, this provides a speedup over exact nearest-neighbor search by about 2 orders of magnitude, yet results in less than a 5% loss in the number of correct matches.

The Hough transform is used to cluster reliable model hypotheses to search for keys that agree upon a particular model pose. The Hough transform identifies clusters of features with a consistent interpretation by using each feature to vote for all object poses that are consistent with the feature.
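A sketch of a ratio test in this spirit, using the common nearest-vs-second-nearest formulation rather than Lowe's exact per-class construction (the descriptors and database below are toy values):

```python
import numpy as np

def ratio_test_match(desc, db, thresh=0.8):
    """Accept a match only if the nearest database descriptor is sufficiently
    closer than the second-nearest one; return its index, or None if rejected."""
    d = np.linalg.norm(db - desc, axis=1)   # Euclidean distances to all keys
    i, j = np.argsort(d)[:2]                # nearest and second-nearest
    return int(i) if d[i] < thresh * d[j] else None

db = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
assert ratio_test_match(np.array([0.5, 0.0]), db) == 0      # unambiguous: accepted
assert ratio_test_match(np.array([5.0, 0.0]), db) is None   # ambiguous: rejected
```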
When clusters of features are found to vote for the same pose of an object, the probability of the interpretation being correct is much higher than for any single feature. An entry in a hash table is created predicting the model location, orientation, and scale from the match hypothesis. The hash table is searched to identify all clusters of at least 3 entries in a bin, and the bins are sorted into decreasing order of size.

Each of the SIFT keypoints specifies 2D location, scale, and orientation, and each matched keypoint in the database has a record of its parameters relative to the training image in which it was found. The similarity transform implied by these 4 parameters is only an approximation to the full 6 degree-of-freedom pose space for a 3D object and also does not account for any non-rigid deformations. Therefore, Lowe[2] used broad bin sizes of 30 degrees for orientation, a factor of 2 for scale, and 0.25 times the maximum projected training image dimension (using the predicted scale) for location. The SIFT key samples generated at the larger scale are given twice the weight of those at the smaller scale. This means that the larger scale is in effect able to filter the most likely neighbors for checking at the smaller scale. This also improves recognition performance by giving more weight to the least-noisy scale. To avoid the problem of boundary effects in bin assignment, each keypoint match votes for the 2 closest bins in each dimension, giving a total of 16 entries for each hypothesis and further broadening the pose range.

Each identified cluster is then subject to a verification procedure in which a linear least squares solution is performed for the parameters of the affine transformation relating the model to the image. The affine transformation of a model point [x y]^T to an image point [u v]^T can be written as [u v]^T = [[m1, m2], [m3, m4]] [x y]^T + [tx ty]^T, where the model translation is [tx ty]^T and the affine rotation, scale, and stretch are represented by the parameters m1, m2, m3 and m4.
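The broad-bin voting scheme might be sketched as follows; the bin quantization and the hypothesis tuples are illustrative assumptions, not Lowe's implementation:

```python
import math
from collections import defaultdict

def pose_bins(x, y, scale, orientation, max_dim):
    """Quantize a match hypothesis into broad Hough bins (30 deg orientation,
    factor of 2 in scale, 0.25 * max_dim in location), voting for the 2
    closest bins in each dimension -> 16 entries per hypothesis."""
    coords = [x / (0.25 * max_dim), y / (0.25 * max_dim),
              math.log2(scale), orientation / 30.0]
    options = [(math.floor(c), math.floor(c) + 1) for c in coords]
    return [(i, j, k, l) for i in options[0] for j in options[1]
            for k in options[2] for l in options[3]]

votes = defaultdict(list)
hypotheses = [(10, 12, 1.1, 40, 100), (11, 13, 1.0, 42, 100)]  # two nearby poses
for idx, hyp in enumerate(hypotheses):
    for b in pose_bins(*hyp):
        votes[b].append(idx)

# Nearby hypotheses share bins, so they end up clustered together.
assert any(len(v) == 2 for v in votes.values())
```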
To solve for the transformation parameters, the equation above can be rewritten to gather the unknowns into a column vector. This equation shows a single match, but any number of further matches can be added, with each match contributing two more rows to the first and last matrix. At least 3 matches are needed to provide a solution. We can write this linear system as Ax ≈ b, where A is a known m-by-n matrix (usually with m > n), x is an unknown n-dimensional parameter vector, and b is a known m-dimensional measurement vector. Therefore, the minimizing vector x̂ is a solution of the normal equation A^T A x̂ = A^T b. The solution of the system of linear equations is given in terms of the matrix (A^T A)^(-1) A^T, called the pseudoinverse of A, by x̂ = (A^T A)^(-1) A^T b, which minimizes the sum of the squares of the distances from the projected model locations to the corresponding image locations.

Outliers can now be removed by checking for agreement between each image feature and the model, given the parameter solution. Given the linear least squares solution, each match is required to agree within half the error range that was used for the parameters in the Hough transform bins. As outliers are discarded, the linear least squares solution is re-solved with the remaining points, and the process iterated. If fewer than 3 points remain after discarding outliers, then the match is rejected. In addition, a top-down matching phase is used to add any further matches that agree with the projected model position, which may have been missed from the Hough transform bin due to the similarity transform approximation or other errors.

The final decision to accept or reject a model hypothesis is based on a detailed probabilistic model.[15] This method first computes the expected number of false matches to the model pose, given the projected size of the model, the number of features within the region, and the accuracy of the fit.
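The normal-equation solution can be illustrated on synthetic matches generated from a known affine transform (all values below are made up for the check):

```python
import numpy as np

# Known affine parameters [m1, m2, m3, m4, tx, ty] used to synthesize matches.
true_p = np.array([1.2, -0.3, 0.4, 0.9, 5.0, -2.0])
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

rows, b = [], []
for x, y in model:
    u = true_p[0] * x + true_p[1] * y + true_p[4]
    v = true_p[2] * x + true_p[3] * y + true_p[5]
    rows += [[x, y, 0, 0, 1, 0],            # u-equation: m1 x + m2 y + tx
             [0, 0, x, y, 0, 1]]            # v-equation: m3 x + m4 y + ty
    b += [u, v]

A = np.array(rows, dtype=float)             # two rows per match, 6 unknowns
b = np.array(b)

# Normal equation A^T A x = A^T b (the pseudoinverse solution).
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x_hat, true_p)
```

With noisy matches the same x̂ is the least-squares minimizer; in practice `np.linalg.lstsq(A, b)` is the numerically preferred way to compute it.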
A Bayesian probability analysis then gives the probability that the object is present based on the actual number of matching features found. A model is accepted if the final probability for a correct interpretation is greater than 0.98. Lowe's SIFT-based object recognition gives excellent results except under wide illumination variations and under non-rigid transformations.

We begin by detecting points of interest, which are termed keypoints in the SIFT framework. The image is convolved with Gaussian filters at different scales, and then the differences of successive Gaussian-blurred images are taken. Keypoints are then taken as maxima/minima of the difference of Gaussians (DoG) that occur at multiple scales. Specifically, a DoG image D(x, y, σ) is given by D(x, y, σ) = L(x, y, k_iσ) − L(x, y, k_jσ), where L(x, y, kσ) denotes the original image convolved with a Gaussian blur of width kσ. Hence a DoG image between scales k_iσ and k_jσ is just the difference of the Gaussian-blurred images at scales k_iσ and k_jσ.

For scale-space extrema detection in the SIFT algorithm, the image is first convolved with Gaussian blurs at different scales. The convolved images are grouped by octave (an octave corresponds to doubling the value of σ), and the value of k_i is selected so that we obtain a fixed number of convolved images per octave. Then the difference-of-Gaussian images are taken from adjacent Gaussian-blurred images per octave.

Once DoG images have been obtained, keypoints are identified as local minima/maxima of the DoG images across scales. This is done by comparing each pixel in the DoG images to its eight neighbors at the same scale and nine corresponding neighboring pixels in each of the neighboring scales. If the pixel value is the maximum or minimum among all compared pixels, it is selected as a candidate keypoint.
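A rough sketch of one octave of DoG construction and the 26-neighbor extremum test (the blur implementation, the random test image, and the σ schedule are illustrative assumptions):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D 'same'-mode convolutions (simplistic borders)."""
    r = int(3 * sigma) + 1
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, rows)

rng = np.random.default_rng(2)
img = rng.normal(size=(32, 32))

# One octave: blurs at scales 1.6 * 2^(i/3), then successive differences (DoG).
sigmas = [1.6 * 2 ** (i / 3) for i in range(5)]
gauss = [gaussian_blur(img, s) for s in sigmas]
dog = np.stack([b - a for a, b in zip(gauss, gauss[1:])])   # shape (4, 32, 32)

# Candidate keypoints: pixels that are the max or min of the 3x3x3 cube of
# 26 neighbours across (scale, y, x).
candidates = []
for s in range(1, dog.shape[0] - 1):
    for i in range(1, dog.shape[1] - 1):
        for j in range(1, dog.shape[2] - 1):
            cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
            if dog[s, i, j] in (cube.max(), cube.min()):
                candidates.append((s, i, j))
```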
This keypoint detection step is a variation of one of the blob detection methods developed by Lindeberg by detecting scale-space extrema of the scale-normalized Laplacian;[10][11] that is, detecting points that are local extrema with respect to both space and scale, in the discrete case by comparisons with the nearest 26 neighbors in a discretized scale-space volume. The difference of Gaussians operator can be seen as an approximation to the Laplacian, with the implicit normalization in the pyramid also constituting a discrete approximation of the scale-normalized Laplacian.[12] Another real-time implementation of scale-space extrema of the Laplacian operator has been presented by Lindeberg and Bretzner based on a hybrid pyramid representation,[16] which was used for human-computer interaction by real-time gesture recognition in Bretzner et al. (2002).[17]

Scale-space extrema detection produces too many keypoint candidates, some of which are unstable. The next step in the algorithm is to perform a detailed fit to the nearby data for accurate location, scale, and ratio of principal curvatures. This information allows the rejection of points which have low contrast (and are therefore sensitive to noise) or are poorly localized along an edge.

First, for each candidate keypoint, interpolation of nearby data is used to accurately determine its position. The initial approach was to just locate each keypoint at the location and scale of the candidate keypoint.[1] The new approach calculates the interpolated location of the extremum, which substantially improves matching and stability.[2] The interpolation is done using the quadratic Taylor expansion of the difference-of-Gaussian scale-space function, D(x, y, σ), with the candidate keypoint as the origin.
This Taylor expansion is given by:

D(x) = D + (∂D/∂x)ᵀ x + ½ xᵀ (∂²D/∂x²) x,

where D and its derivatives are evaluated at the candidate keypoint and x = (x, y, σ)ᵀ is the offset from this point. The location of the extremum, x̂, is determined by taking the derivative of this function with respect to x and setting it to zero, giving x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x). If the offset x̂ is larger than 0.5 in any dimension, that is an indication that the extremum lies closer to another candidate keypoint. In this case, the candidate keypoint is changed and the interpolation is performed instead about that point. Otherwise the offset is added to its candidate keypoint to get the interpolated estimate for the location of the extremum. A similar subpixel determination of the locations of scale-space extrema is performed in the real-time implementation based on hybrid pyramids developed by Lindeberg and his co-workers.[16]

To discard the keypoints with low contrast, the value of the second-order Taylor expansion D(x) is computed at the offset x̂. If this value is less than 0.03, the candidate keypoint is discarded. Otherwise it is kept, with final scale-space location y + x̂, where y is the original location of the keypoint.

The DoG function will have strong responses along edges, even if the candidate keypoint is not robust to small amounts of noise. Therefore, in order to increase stability, we need to eliminate the keypoints that have poorly determined locations but high edge responses. For poorly defined peaks in the DoG function, the principal curvature across the edge would be much larger than the principal curvature along it.
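Setting the derivative of the quadratic fit to zero reduces the refinement to a 3×3 linear solve, x̂ = −H⁻¹g, where g and H are the gradient and Hessian of D estimated by finite differences at the sample point. A sketch under those assumptions (the helper names are illustrative):

```python
import numpy as np

def refine_extremum(dog, s, y, x):
    """Offset of the interpolated extremum from sample (s, y, x),
    from the quadratic Taylor expansion of the DoG function.
    Returns the offset in the order (x, y, sigma)."""
    c = dog[s, y, x]
    # First derivatives by central differences.
    g = 0.5 * np.array([
        dog[s, y, x + 1] - dog[s, y, x - 1],
        dog[s, y + 1, x] - dog[s, y - 1, x],
        dog[s + 1, y, x] - dog[s - 1, y, x],
    ])
    # Second derivatives and mixed terms of the 3x3 Hessian.
    dxx = dog[s, y, x + 1] - 2 * c + dog[s, y, x - 1]
    dyy = dog[s, y + 1, x] - 2 * c + dog[s, y - 1, x]
    dss = dog[s + 1, y, x] - 2 * c + dog[s - 1, y, x]
    dxy = 0.25 * (dog[s, y + 1, x + 1] - dog[s, y + 1, x - 1]
                  - dog[s, y - 1, x + 1] + dog[s, y - 1, x - 1])
    dxs = 0.25 * (dog[s + 1, y, x + 1] - dog[s + 1, y, x - 1]
                  - dog[s - 1, y, x + 1] + dog[s - 1, y, x - 1])
    dys = 0.25 * (dog[s + 1, y + 1, x] - dog[s + 1, y - 1, x]
                  - dog[s - 1, y + 1, x] + dog[s - 1, y - 1, x])
    H = np.array([[dxx, dxy, dxs], [dxy, dyy, dys], [dxs, dys, dss]])
    # An offset > 0.5 in any dimension means the extremum belongs to a
    # neighboring sample; the caller should re-run the fit there.
    return -np.linalg.solve(H, g)

def contrast_at_extremum(dog, s, y, x, offset):
    """Value of the fitted quadratic at the extremum, D + (1/2) g . x_hat,
    used for the low-contrast rejection test."""
    g = 0.5 * np.array([
        dog[s, y, x + 1] - dog[s, y, x - 1],
        dog[s, y + 1, x] - dog[s, y - 1, x],
        dog[s + 1, y, x] - dog[s - 1, y, x],
    ])
    return dog[s, y, x] + 0.5 * g.dot(offset)
```

Because central differences are exact for quadratics, running this on an exactly quadratic DoG patch recovers the true sub-sample extremum.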
Finding these principal curvatures amounts to solving for the eigenvalues of the second-order Hessian matrix, H:

H = | Dxx  Dxy |
    | Dxy  Dyy |

The eigenvalues of H are proportional to the principal curvatures of D. It turns out that the ratio of the two eigenvalues, say α the larger one and β the smaller one, with ratio r = α/β, is sufficient for SIFT's purposes. The trace of H, i.e., Dxx + Dyy, gives us the sum of the two eigenvalues, while its determinant, i.e., DxxDyy − Dxy², yields the product. The ratio R = Tr(H)²/Det(H) can be shown to be equal to (r + 1)²/r, which depends only on the ratio of the eigenvalues rather than on their individual values. R is minimal when the eigenvalues are equal to each other. Therefore, the higher the absolute difference between the two eigenvalues, which is equivalent to a higher absolute difference between the two principal curvatures of D, the higher the value of R. It follows that, for some threshold eigenvalue ratio r_th, if R for a candidate keypoint is larger than (r_th + 1)²/r_th, that keypoint is poorly localized and hence rejected. The new approach uses r_th = 10.[2]

This processing step for suppressing responses at edges is a transfer of a corresponding approach in the Harris operator for corner detection. The difference is that the measure for thresholding is computed from the Hessian matrix instead of a second-moment matrix.

In this step, each keypoint is assigned one or more orientations based on local image gradient directions.
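The edge test needs only the trace and determinant of the 2×2 Hessian, never the eigenvalues themselves. A sketch with r_th = 10 as in the text (the finite-difference Hessian estimate is one reasonable choice, not prescribed by the source):

```python
import numpy as np

def passes_edge_test(dog_level, y, x, r_th=10.0):
    """Keep keypoint (y, x) only if Tr(H)^2 / Det(H) < (r_th + 1)^2 / r_th,
    i.e. the two principal curvatures are not too different (not edge-like)."""
    c = dog_level[y, x]
    dxx = dog_level[y, x + 1] - 2 * c + dog_level[y, x - 1]
    dyy = dog_level[y + 1, x] - 2 * c + dog_level[y - 1, x]
    dxy = 0.25 * (dog_level[y + 1, x + 1] - dog_level[y + 1, x - 1]
                  - dog_level[y - 1, x + 1] + dog_level[y - 1, x - 1])
    tr = dxx + dyy               # sum of the eigenvalues
    det = dxx * dyy - dxy ** 2   # product of the eigenvalues
    if det <= 0:                 # curvatures of opposite sign: reject outright
        return False
    return tr ** 2 / det < (r_th + 1) ** 2 / r_th
```

An isotropic blob (equal curvatures, R = 4) passes, while an elongated ridge (one curvature much larger than the other) fails, matching the behavior described above.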
This is the key step in achieving invariance to rotation, as the keypoint descriptor can be represented relative to this orientation and therefore achieves invariance to image rotation. First, the Gaussian-smoothed image L(x, y, σ) at the keypoint's scale σ is taken so that all computations are performed in a scale-invariant manner. For an image sample L(x, y) at scale σ, the gradient magnitude, m(x, y), and orientation, θ(x, y), are precomputed using pixel differences:

m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = atan2(L(x, y+1) − L(x, y−1), L(x+1, y) − L(x−1, y))

The magnitude and direction calculations for the gradient are done for every pixel in a neighboring region around the keypoint in the Gaussian-blurred image L. An orientation histogram with 36 bins is formed, with each bin covering 10 degrees. Each sample in the neighboring window added to a histogram bin is weighted by its gradient magnitude and by a Gaussian-weighted circular window with a σ that is 1.5 times the scale of the keypoint. The peaks in this histogram correspond to dominant orientations. Once the histogram is filled, the orientations corresponding to the highest peak and to local peaks that are within 80% of the highest peak are assigned to the keypoint. In the case of multiple orientations being assigned, an additional keypoint is created, having the same location and scale as the original keypoint, for each additional orientation.

Previous steps found keypoint locations at particular scales and assigned orientations to them. This ensured invariance to image location, scale and rotation. Now we want to compute a descriptor vector for each keypoint such that the descriptor is highly distinctive and partially invariant to the remaining variations, such as illumination and 3D viewpoint. This step is performed on the image closest in scale to the keypoint's scale.
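The 36-bin orientation histogram and peak selection can be sketched as follows. This is an illustrative reading of the steps above, not reference code; the window radius and weighting σ are parameters the caller supplies:

```python
import numpy as np

def orientation_histogram(L, ky, kx, radius, sigma_w):
    """36-bin gradient orientation histogram (10 degrees per bin) around
    keypoint (ky, kx); each sample is weighted by its gradient magnitude
    and by a Gaussian window of width sigma_w."""
    hist = np.zeros(36)
    for y in range(ky - radius, ky + radius + 1):
        for x in range(kx - radius, kx + radius + 1):
            dx = L[y, x + 1] - L[y, x - 1]   # gradient by pixel differences
            dy = L[y + 1, x] - L[y - 1, x]
            mag = np.hypot(dx, dy)
            theta = np.degrees(np.arctan2(dy, dx)) % 360.0
            weight = np.exp(-((y - ky) ** 2 + (x - kx) ** 2)
                            / (2.0 * sigma_w ** 2))
            hist[int(theta // 10) % 36] += weight * mag
    return hist

def dominant_orientations(hist):
    """Bins that are local peaks within 80% of the highest peak."""
    peak = hist.max()
    return [i for i in range(36)
            if hist[i] >= 0.8 * peak
            and hist[i] >= hist[i - 1] and hist[i] >= hist[(i + 1) % 36]]
```

On a pure horizontal intensity ramp every gradient points along +x, so the histogram concentrates in bin 0 and that is the single dominant orientation.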
First a set of orientation histograms is created on 4×4 pixel neighborhoods with 8 bins each. These histograms are computed from the magnitude and orientation values of samples in a 16×16 region around the keypoint, such that each histogram contains samples from a 4×4 subregion of the original neighborhood region. The image gradient magnitudes and orientations are sampled around the keypoint location, using the scale of the keypoint to select the level of Gaussian blur for the image. In order to achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation. The magnitudes are further weighted by a Gaussian function with σ equal to one half the width of the descriptor window. The descriptor then becomes a vector of all the values of these histograms. Since there are 4 × 4 = 16 histograms, each with 8 bins, the vector has 128 elements. This vector is then normalized to unit length in order to enhance invariance to affine changes in illumination. To reduce the effects of non-linear illumination, a threshold of 0.2 is applied and the vector is again normalized. The thresholding process, also referred to as clamping, can improve matching results even when non-linear illumination effects are not present.[18] The threshold of 0.2 was empirically chosen, and by replacing the fixed threshold with one systematically calculated, matching results can be improved.[18]

Although the dimension of the descriptor, i.e. 128, seems high, descriptors with lower dimension than this don't perform as well across the range of matching tasks,[2] and the computational cost remains low due to the approximate BBF (see below) method used for finding the nearest neighbor. Longer descriptors continue to do better, but not by much, and there is an additional danger of increased sensitivity to distortion and occlusion.
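The final normalize–clamp–renormalize step described above can be sketched as:

```python
import numpy as np

def normalize_descriptor(v, clamp=0.2):
    """Unit-normalize a SIFT descriptor, clamp large components to reduce
    non-linear illumination effects, then renormalize to unit length."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)      # invariance to affine illumination gain
    v = np.minimum(v, clamp)       # suppress dominant gradient magnitudes
    return v / np.linalg.norm(v)   # restore unit length
```

For example, the vector (3, 4) normalizes to (0.6, 0.8); both components are then clamped to 0.2, and renormalization yields (1/√2, 1/√2).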
It is also shown that feature matching accuracy is above 50% for viewpoint changes of up to 50 degrees. Therefore, SIFT descriptors are invariant to minor affine changes. To test the distinctiveness of the SIFT descriptors, matching accuracy is also measured against a varying number of keypoints in the testing database, and it is shown that matching accuracy decreases only very slightly for very large database sizes, thus indicating that SIFT features are highly distinctive.

There has been an extensive study on the performance evaluation of different local descriptors, including SIFT, using a range of detectors.[19] The evaluations carried out strongly suggest that SIFT-based descriptors, which are region-based, are the most robust and distinctive, and are therefore best suited for feature matching. However, more recent feature descriptors such as SURF were not evaluated in this study.

SURF has later been shown to have performance similar to SIFT, while at the same time being much faster.[20] Other studies conclude that when speed is not critical, SIFT outperforms SURF.[21][22] Specifically, disregarding discretization effects, the pure image descriptor in SIFT is significantly better than the pure image descriptor in SURF, whereas the scale-space extrema of the determinant of the Hessian underlying the pure interest point detector in SURF constitute significantly better interest points than the scale-space extrema of the Laplacian, to which the interest point detector in SIFT constitutes a numerical approximation.[21]

The performance of image matching by SIFT descriptors can be improved in the sense of achieving higher efficiency scores and lower 1-precision scores by replacing the scale-space extrema of the difference-of-Gaussians operator in original SIFT by scale-space extrema of the determinant of the Hessian, or more generally by considering a more general family of generalized scale-space interest points.[21]
Recently, a slight variation of the descriptor employing an irregular histogram grid has been proposed that significantly improves its performance.[23] Instead of using a 4×4 grid of histogram bins, all bins extend to the center of the feature. This improves the descriptor's robustness to scale changes.

The SIFT-Rank[24] descriptor was shown to improve the performance of the standard SIFT descriptor for affine feature matching. A SIFT-Rank descriptor is generated from a standard SIFT descriptor by setting each histogram bin to its rank in a sorted array of bins. The Euclidean distance between SIFT-Rank descriptors is invariant to arbitrary monotonic changes in histogram bin values, and is related to Spearman's rank correlation coefficient.

Given SIFT's ability to find distinctive keypoints that are invariant to location, scale and rotation, and robust to affine transformations (changes in scale, rotation, shear, and position) and changes in illumination, they are usable for object recognition. The steps are given below. SIFT features can essentially be applied to any task that requires identification of matching locations between images. Work has been done on applications such as recognition of particular object categories in 2D images, 3D reconstruction, motion tracking and segmentation, robot localization, image panorama stitching and epipolar calibration. Some of these are discussed in more detail below.

In this application,[26] a trinocular stereo system is used to determine 3D estimates for keypoint locations. Keypoints are used only when they appear in all 3 images with consistent disparities, resulting in very few outliers. As the robot moves, it localizes itself using feature matches to the existing 3D map, and then incrementally adds features to the map while updating their 3D positions using a Kalman filter. This provides a robust and accurate solution to the problem of robot localization in unknown environments.
Recent 3D solvers leverage the use of keypoint directions to solve trinocular geometry from three keypoints[27] and absolute pose from only two keypoints,[28] an often disregarded but useful measurement available in SIFT. These orientation measurements reduce the number of required correspondences, further increasing robustness exponentially.

SIFT feature matching can be used in image stitching for fully automated panorama reconstruction from non-panoramic images. The SIFT features extracted from the input images are matched against each other to find k nearest neighbors for each feature. These correspondences are then used to find m candidate matching images for each image. Homographies between pairs of images are then computed using RANSAC, and a probabilistic model is used for verification. Because there is no restriction on the input images, graph search is applied to find connected components of image matches such that each connected component will correspond to a panorama. Finally, for each connected component, bundle adjustment is performed to solve for joint camera parameters, and the panorama is rendered using multi-band blending. Because of the SIFT-inspired object recognition approach to panorama stitching, the resulting system is insensitive to the ordering, orientation, scale and illumination of the images. The input images can contain multiple panoramas and noise images (some of which may not even be part of the composite image), and panoramic sequences are recognized and rendered as output.[29]

This application uses SIFT features for 3D object recognition and 3D modeling in the context of augmented reality, in which synthetic objects with accurate pose are superimposed on real images. SIFT matching is done for a number of 2D images of a scene or object taken from different angles. This is used with bundle adjustment, initialized from an essential matrix or trifocal tensor, to build a sparse 3D model of the viewed scene and to simultaneously recover camera poses and calibration parameters.
Then the position, orientation and size of the virtual object are defined relative to the coordinate frame of the recovered model. For online match moving, SIFT features are again extracted from the current video frame and matched to the features already computed for the world model, resulting in a set of 2D-to-3D correspondences. These correspondences are then used to compute the current camera pose for the virtual projection and final rendering. A regularization technique is used to reduce the jitter in the virtual projection.[30] The use of SIFT directions has also been shown to increase the robustness of this process.[27][28] 3D extensions of SIFT have also been evaluated for true 3D object recognition and retrieval.[31][32]

Extensions of the SIFT descriptor to 2+1-dimensional spatio-temporal data in the context of human action recognition in video sequences have been studied.[31][33][34][35] The computation of local position-dependent histograms in the 2D SIFT algorithm is extended from two to three dimensions to describe SIFT features in a spatio-temporal domain. For application to human action recognition in a video sequence, sampling of the training videos is carried out either at spatio-temporal interest points or at randomly determined locations, times and scales. The spatio-temporal regions around these interest points are then described using the 3D SIFT descriptor. These descriptors are then clustered to form a spatio-temporal bag-of-words model. 3D SIFT descriptors extracted from the test videos are then matched against these words for human action classification. The authors report much better results with their 3D SIFT descriptor approach than with other approaches like simple 2D SIFT descriptors and gradient magnitude.[36]

The feature-based morphometry (FBM) technique[37] uses extrema in a difference of Gaussian scale-space to analyze and classify 3D magnetic resonance images (MRIs) of the human brain.
FBM models the image probabilistically as a collage of independent features, conditional on image geometry and group labels, e.g. healthy subjects and subjects with Alzheimer's disease (AD). Features are first extracted in individual images from a 4D difference of Gaussian scale-space, then modeled in terms of their appearance, geometry and group co-occurrence statistics across a set of images. FBM was validated in the analysis of AD using a set of ~200 volumetric MRIs of the human brain, automatically identifying established indicators of AD in the brain and classifying mild AD in new images with a rate of 80%.[37]

Alternative methods for scale-invariant object recognition under clutter / partial occlusion include the following.

RIFT[38] is a rotation-invariant generalization of SIFT. The RIFT descriptor is constructed using circular normalized patches divided into concentric rings of equal width, and within each ring a gradient orientation histogram is computed. To maintain rotation invariance, the orientation is measured at each point relative to the direction pointing outward from the center.

RootSIFT[39] is a variant of SIFT that modifies descriptor normalization. Because SIFT descriptors are histograms (and so are probability distributions), Euclidean distance is not an accurate way to measure their similarity. Better similarity metrics turn out to be ones tailored to probability distributions, such as the Bhattacharyya coefficient (also called the Hellinger kernel). For this purpose, the originally ℓ²-normalized descriptor is first ℓ¹-normalized and the square root of each element is computed, followed by ℓ²-renormalization. After these algebraic manipulations, RootSIFT descriptors can be compared normally using Euclidean distance, which is equivalent to using the Hellinger kernel on the original SIFT descriptors.
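The RootSIFT mapping just described is a few lines of NumPy (a sketch of the published recipe, not the authors' reference code; the epsilon guard against zero vectors is an addition made here):

```python
import numpy as np

def root_sift(descriptor, eps=1e-12):
    """Map a SIFT descriptor to its RootSIFT form: L1-normalize,
    take element-wise square roots, then L2-normalize. Euclidean
    distances between the results correspond to the Hellinger kernel
    on the original histograms."""
    d = np.asarray(descriptor, dtype=float)
    d = d / (np.abs(d).sum() + eps)        # L1 normalization
    d = np.sqrt(d)                         # element-wise square root
    return d / (np.linalg.norm(d) + eps)   # L2 renormalization
```

Note that after L1 normalization the squared components sum to 1 by construction, so the final ℓ² step is essentially a no-op kept for numerical safety.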
This normalization scheme, termed "L1-sqrt", was previously introduced for the block normalization of HOG features, whose rectangular block arrangement descriptor variant (R-HOG) is conceptually similar to the SIFT descriptor.

G-RIF:[40] Generalized Robust Invariant Feature is a general context descriptor which encodes edge orientation, edge density and hue information in a unified form, combining perceptual information with spatial encoding. The object recognition scheme uses neighboring context-based voting to estimate object models.

SURF:[41] "Speeded Up Robust Features" is a high-performance scale- and rotation-invariant interest point detector/descriptor claimed to approximate or even outperform previously proposed schemes with respect to repeatability, distinctiveness, and robustness. SURF relies on integral images for image convolutions to reduce computation time, and builds on the strengths of the leading existing detectors and descriptors (using a fast Hessian matrix-based measure for the detector and a distribution-based descriptor). It describes a distribution of Haar wavelet responses within the interest point neighborhood. Integral images are used for speed, and only 64 dimensions are used, reducing the time for feature computation and matching. The indexing step is based on the sign of the Laplacian, which increases the matching speed and the robustness of the descriptor.

PCA-SIFT[42] and GLOH[19] are variants of SIFT. The PCA-SIFT descriptor is a vector of image gradients in the x and y directions computed within the support region. The gradient region is sampled at 39×39 locations, so the vector is of dimension 3042. The dimension is reduced to 36 with PCA. Gradient location-orientation histogram (GLOH) is an extension of the SIFT descriptor designed to increase its robustness and distinctiveness.
The SIFT descriptor is computed for a log-polar location grid with three bins in the radial direction (the radii set to 6, 11, and 15) and 8 in the angular direction, which results in 17 location bins (the central bin is not divided in angular directions). The gradient orientations are quantized in 16 bins, resulting in a 272-bin histogram. The size of this descriptor is reduced with PCA. The covariance matrix for PCA is estimated on image patches collected from various images. The 128 largest eigenvectors are used for description.

Gauss-SIFT[21] is a pure image descriptor defined by performing all image measurements underlying the pure image descriptor in SIFT with Gaussian derivative responses, as opposed to the derivative approximations in an image pyramid used in regular SIFT. In this way, discretization effects over space and scale can be reduced to a minimum, allowing for potentially more accurate image descriptors. In Lindeberg (2015),[21] such pure Gauss-SIFT image descriptors were combined with a set of generalized scale-space interest points comprising the Laplacian of the Gaussian, the determinant of the Hessian, four new unsigned or signed Hessian feature strength measures, as well as Harris-Laplace and Shi-and-Tomasi interest points. In an extensive experimental evaluation on a poster dataset comprising multiple views of 12 posters over scaling transformations up to a factor of 6 and viewing direction variations up to a slant angle of 45 degrees, it was shown that a substantial increase in performance of image matching (higher efficiency scores and lower 1-precision scores) could be obtained by replacing Laplacian of Gaussian interest points by determinant of the Hessian interest points.
Since difference-of-Gaussians interest points constitute a numerical approximation of Laplacian of the Gaussian interest points, this shows that a substantial increase in matching performance is possible by replacing the difference-of-Gaussians interest points in SIFT by determinant of the Hessian interest points. An additional increase in performance can furthermore be obtained by considering the unsigned Hessian feature strength measure

D₁L = det HL − k trace² HL, if det HL − k trace² HL > 0, and 0 otherwise.

A quantitative comparison between the Gauss-SIFT descriptor and a corresponding Gauss-SURF descriptor also showed that Gauss-SIFT generally performs significantly better than Gauss-SURF for a large number of different scale-space interest point detectors. This study therefore shows that, disregarding discretization effects, the pure image descriptor in SIFT is significantly better than the pure image descriptor in SURF, whereas the underlying interest point detector in SURF, which can be seen as a numerical approximation to scale-space extrema of the determinant of the Hessian, is significantly better than the underlying interest point detector in SIFT.

Wagner et al. developed two object recognition algorithms especially designed with the limitations of current mobile phones in mind.[43] In contrast to the classic SIFT approach, Wagner et al. use the FAST corner detector for feature detection. The algorithm also distinguishes between the off-line preparation phase, where features are created at different scale levels, and the on-line phase, where features are only created at the current fixed scale level of the phone's camera image. In addition, features are created from a fixed patch size of 15×15 pixels and form a SIFT descriptor with only 36 dimensions.
The approach has been further extended by integrating a scalable vocabulary tree in the recognition pipeline.[44] This allows the efficient recognition of a larger number of objects on mobile phones. The approach is mainly restricted by the amount of available RAM.

KAZE and A-KAZE (KAZE Features and Accelerated-KAZE Features) is a 2D feature detection and description method that performs better than SIFT and SURF. It has gained popularity due to its open-source code. KAZE was originally made by Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison.[45]
https://en.wikipedia.org/wiki/Scale-invariant_feature_transform
Time delay neural network (TDNN)[1] is a multilayer artificial neural network architecture whose purpose is to 1) classify patterns with shift-invariance, and 2) model context at each layer of the network. Shift-invariant classification means that the classifier does not require explicit segmentation prior to classification. For the classification of a temporal pattern (such as speech), the TDNN thus avoids having to determine the beginning and end points of sounds before classifying them. For contextual modelling in a TDNN, each neural unit at each layer receives input not only from activations/features at the layer below, but from a pattern of unit output and its context. For time signals, each unit receives as input the activation patterns over time from units below. Applied to two-dimensional classification (images, time-frequency patterns), the TDNN can be trained with shift-invariance in the coordinate space and avoids precise segmentation in the coordinate space.

The TDNN was introduced in the late 1980s and applied to a task of phoneme classification for automatic speech recognition in speech signals, where the automatic determination of precise segments or feature boundaries was difficult or impossible. Because the TDNN recognizes phonemes and their underlying acoustic/phonetic features independent of position in time, it improved performance over static classification.[1][2] It was also applied to two-dimensional signals (time-frequency patterns in speech[3] and coordinate space patterns in OCR[4]).

Kunihiko Fukushima published the neocognitron in 1980.[5] Max pooling appears in a 1982 publication on the neocognitron[6] and was in the 1989 publication on LeNet-5.[7] In 1990, Yamaguchi et al. used max pooling in TDNNs in order to realize a speaker-independent isolated word recognition system.[8]

The Time Delay Neural Network, like other neural networks, operates with multiple interconnected layers of perceptrons, and is implemented as a feedforward neural network.
All neurons (at each layer) of a TDNN receive inputs from the outputs of neurons at the layer below, but with two differences: In the case of a speech signal, inputs are spectral coefficients over time.

In order to learn critical acoustic-phonetic features (for example formant transitions, bursts, frication, etc.) without first requiring precise localization, the TDNN is trained time-shift-invariantly. Time-shift invariance is achieved through weight sharing across time during training: time-shifted copies of the TDNN are made over the input range (from left to right in Fig. 1). Backpropagation is then performed from an overall classification target vector (see TDNN diagram; three phoneme class targets, /b/, /d/, /g/, are shown in the output layer), resulting in gradients that will generally vary for each of the time-shifted network copies. Since such time-shifted networks are only copies, however, the position dependence is removed by weight sharing. In this example, this is done by averaging the gradients from each time-shifted copy before performing the weight update. In speech, time-shift-invariant training was shown to learn weight matrices that are independent of the precise positioning of the input. The weight matrices could also be shown to detect features that are known to be important for human speech perception, such as formant transitions, bursts, etc.[1] TDNNs could also be combined or grown by way of pre-training.[9]

The precise architecture of TDNNs (time delays, number of layers) is mostly determined by the designer depending on the classification problem and the most useful context sizes. The delays or context windows are chosen specifically for each application. Work has also been done to create adaptable time-delay TDNNs[10] where this manual tuning is eliminated.
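The weight sharing across time shifts described above makes one TDNN layer equivalent to a 1-D convolution over time. A minimal NumPy sketch (the layer sizes, tanh activation, and function name are illustrative choices):

```python
import numpy as np

def tdnn_layer(x, W, b):
    """One TDNN layer. x has shape (T, F): T frames of F spectral
    coefficients. W has shape (delay, F, H): one shared weight tensor
    applied at every time shift. Output has shape (T - delay + 1, H)."""
    T, F = x.shape
    delay, _, H = W.shape
    out = np.empty((T - delay + 1, H))
    for t in range(T - delay + 1):
        window = x[t:t + delay]  # context window of `delay` frames
        # The SAME weights W are applied at every shift t (weight sharing).
        out[t] = np.tanh(np.tensordot(window, W, axes=([0, 1], [0, 1])) + b)
    return out
```

Because the weights do not depend on t, a constant input produces identical activations at every output position, which is the shift-invariance property the training procedure above enforces.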
TDNN-based phoneme recognizers compared favourably in early comparisons with HMM-based phone models.[1][9] Modern deep TDNN architectures include many more hidden layers and sub-sample or pool connections over broader contexts at higher layers. They achieve up to 50% word error reduction over GMM-based acoustic models.[11][12] While the different layers of TDNNs are intended to learn features of increasing context width, they do model local contexts. When longer-distance relationships and pattern sequences have to be processed, learning states and state sequences is important, and TDNNs can be combined with other modelling techniques.[13][3][4]

TDNNs were used to solve problems in speech recognition from their introduction in 1989,[2] initially focused on shift-invariant phoneme recognition. Speech lends itself nicely to TDNNs as spoken sounds are rarely of uniform length and precise segmentation is difficult or impossible. By scanning a sound over past and future, the TDNN is able to construct a model for the key elements of that sound in a time-shift-invariant manner. This is particularly useful as sounds are smeared out through reverberation.[11][12] Large phonetic TDNNs can be constructed modularly through pre-training and combining smaller networks.[9]

Large-vocabulary speech recognition requires recognizing sequences of phonemes that make up words, subject to the constraints of a large pronunciation vocabulary. Integration of TDNNs into large-vocabulary speech recognizers is possible by introducing state transitions and search between the phonemes that make up a word.
The resulting Multi-State Time-Delay Neural Network (MS-TDNN) can be trained discriminatively from the word level, thereby optimizing the entire arrangement toward word recognition instead of phoneme classification.[13][14][4]

Two-dimensional variants of the TDNNs were proposed for speaker independence.[3] Here, shift-invariance is applied to the time as well as to the frequency axis in order to learn hidden features that are independent of precise location in time and in frequency (the latter being due to speaker variability).

One of the persistent problems in speech recognition is recognizing speech when it is corrupted by echo and reverberation (as is the case in large rooms and with distant microphones). Reverberation can be viewed as corrupting speech with delayed versions of itself. In general, however, it is difficult to de-reverberate a signal, as the impulse response function (and thus the convolutional noise experienced by the signal) is not known for an arbitrary space. The TDNN was shown to recognize speech robustly despite different levels of reverberation.[11][12]

TDNNs were also successfully used in early demonstrations of audio-visual speech recognition, where the sounds of speech are complemented by visually reading lip movement.[14] Here, TDNN-based recognizers used visual and acoustic features jointly to achieve improved recognition accuracy, particularly in the presence of noise, where complementary information from an alternate modality could be fused nicely in a neural net.

TDNNs have been used effectively in compact and high-performance handwriting recognition systems. Shift-invariance was also adapted to spatial patterns (x/y axes) in offline handwriting recognition.[4]

Video has a temporal dimension that makes a TDNN an ideal solution for analysing motion patterns.
An example of this analysis is a combination of vehicle detection and pedestrian recognition.[15] When examining videos, subsequent images are fed into the TDNN as input, where each image is the next frame in the video. The strength of the TDNN comes from its ability to examine objects shifted in time, forward and backward, so that an object remains detectable as the time is altered. If an object can be recognized in this manner, an application can anticipate finding that object in the future and perform an optimal action.

Two-dimensional TDNNs were later applied to other image-recognition tasks under the name of "convolutional neural networks", where shift-invariant training is applied to the x/y axes of an image.[1][2][3][4]
https://en.wikipedia.org/wiki/Time_delay_neural_network
A vision processing unit (VPU) is (as of 2023) an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.[1][2]

Vision processing units are distinct from graphics processing units (which are specialised for video encoding and decoding) in their suitability for running machine vision algorithms such as CNNs (convolutional neural networks), SIFT (scale-invariant feature transform) and similar. They may include direct interfaces to take data from cameras (bypassing any off-chip buffers), and place a greater emphasis on on-chip dataflow between many parallel execution units with scratchpad memory, like a manycore DSP. But, like video processing units, they may focus on low-precision fixed-point arithmetic for image processing. They are distinct from GPUs, which contain specialised hardware for rasterization and texture mapping (for 3D graphics), and whose memory architecture is optimised for manipulating bitmap images in off-chip memory (reading textures and modifying frame buffers, with random access patterns). VPUs are optimized for performance per watt, while GPUs mainly focus on absolute performance.

Target markets are robotics, the internet of things (IoT), new classes of digital cameras for virtual reality and augmented reality, smart cameras, and integrating machine vision acceleration into smartphones and other mobile devices.

Some processors are not described as VPUs, but are equally applicable to machine vision tasks. These may form a broader category of AI accelerators (to which VPUs may also belong); however, as of 2016 there is no consensus on the name.
https://en.wikipedia.org/wiki/Vision_processing_unit
In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation σ{\displaystyle \sigma } to the mean μ{\displaystyle \mu } (or its absolute value, |μ|{\displaystyle |\mu |}), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R,[citation needed] by economists and investors in economic models, in epidemiology, and in psychology/neuroscience.

The coefficient of variation (CV) is defined as the ratio of the standard deviation σ{\displaystyle \sigma } to the mean μ{\displaystyle \mu }: CV=σμ.{\displaystyle CV={\frac {\sigma }{\mu }}.}[1] It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit, etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would differ depending on the scale used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 kelvin is twice as hot as 10 kelvin, but only on this scale with a true absolute zero. While a standard deviation (SD) can be measured in kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale.
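As a concrete illustration, the definition translates directly into code (a minimal sketch using Python's standard library; the sample measurements are invented for the example):

```python
import statistics

def coefficient_of_variation(data):
    """Ratio of the sample standard deviation to the sample mean."""
    return statistics.stdev(data) / statistics.fmean(data)

# Hypothetical assay measurements (ratio-scale data with a true zero).
measurements = [9.8, 10.1, 10.0, 10.3, 9.9]
cv = coefficient_of_variation(measurements)
print(f"CV = {cv:.4f} ({cv * 100:.2f}% RSD)")
```

Expressed as "%RSD", the result is simply the CV multiplied by 100.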
Only the Kelvin scale can be used to compute a valid coefficient of variation. Measurements that are log-normally distributed exhibit a stationary CV; in contrast, the SD varies depending upon the expected value of measurements. A more robust alternative is the quartile coefficient of dispersion: half the interquartile range (Q3−Q1)/2{\displaystyle {(Q_{3}-Q_{1})/2}} divided by the average of the quartiles (the midhinge), (Q1+Q3)/2{\displaystyle {(Q_{1}+Q_{3})/2}}. In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to a single CV calculation using a maximum-likelihood estimation approach.[3]

In some of the examples below, the values given are taken as randomly chosen from a larger population of values; in others, they are taken as the entire population of values. When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation s{\displaystyle s\,} to the sample mean x¯{\displaystyle {\bar {x}}}: But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator[4] for a sample of size n is: Many datasets follow an approximately log-normal distribution.[5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution,[6][7][8] is defined as: where sln{\displaystyle {s_{\ln }}\,} is the sample standard deviation of the data after a natural log transformation.
(In the event that measurements are recorded using any other logarithmic base, b, their standard deviation sb{\displaystyle s_{b}\,} is converted to base e using sln=sbln⁡(b){\displaystyle s_{\ln }=s_{b}\ln(b)\,}, and the formula for cv^raw{\displaystyle {\widehat {cv}}_{\rm {raw}}\,} remains the same.[9]) This estimate is sometimes referred to as the "geometric CV" (GCV)[10][11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood[12] as: This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of cv{\displaystyle c_{\rm {v}}\,} itself. For many practical purposes (such as sample size determination and calculation of confidence intervals) it is sln{\displaystyle s_{ln}\,} which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of cv{\displaystyle c_{\rm {v}}\,} or GCV by inverting the corresponding formula.

The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation. The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1.
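The estimators discussed above can be sketched as follows (a Python sketch; the small-sample correction factor (1 + 1/(4n)) and the log-normal formula √(exp(s_ln²) − 1) are as commonly stated for these estimators, and the sample data are invented):

```python
import math
import statistics

def cv_naive(sample):
    """Biased (low) estimate: sample SD over sample mean."""
    return statistics.stdev(sample) / statistics.fmean(sample)

def cv_unbiased_normal(sample):
    """Approximately unbiased estimate for normally distributed data."""
    n = len(sample)
    return (1 + 1 / (4 * n)) * cv_naive(sample)

def cv_lognormal(sample):
    """CV estimate for log-normally distributed data: sqrt(exp(s_ln^2) - 1)."""
    s_ln = statistics.stdev([math.log(x) for x in sample])
    return math.sqrt(math.exp(s_ln ** 2) - 1)

def gcv_kirkwood(sample):
    """Kirkwood's 'geometric CV': exp(s_ln) - 1 (no theoretical basis as a CV estimate)."""
    s_ln = statistics.stdev([math.log(x) for x in sample])
    return math.exp(s_ln) - 1

data = [1.2, 0.9, 1.5, 1.1, 0.8, 1.3]
print(cv_naive(data), cv_unbiased_normal(data), cv_lognormal(data), gcv_kirkwood(data))
```

Note that the corrected estimate is always slightly larger than the naive one, consistent with the naive estimator being biased low.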
Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance.[citation needed] Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the root mean square deviation (RMSD). While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.

In actuarial science, the CV is known as unitized risk.[13] In industrial solids processing, the CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached.[14]

In fluid dynamics, the CV, also referred to as Percent RMS, %RMS, %RMS Uniformity, or Velocity RMS, is a useful measure of flow uniformity for industrial processes. The term is used widely in the design of pollution control equipment, such as electrostatic precipitators (ESPs),[15] selective catalytic reduction (SCR), scrubbers, and similar devices. The Institute of Clean Air Companies (ICAC) references RMS deviation of velocity in the design of fabric filters (ICAC document F-7).[16] The guiding principle is that many of these pollution control devices require "uniform flow" entering and through the control zone. This can relate to uniformity of the velocity profile, temperature distribution, gas species (such as ammonia for an SCR, or activated carbon injection for mercury absorption), and other flow-related parameters.
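The claim that an exponential distribution has CV = 1 (its standard deviation equals its mean) can be checked empirically (a sketch; the rate parameter 2.0, seed, and sample size are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)

# Draw from an exponential distribution with rate lambda = 2,
# so the true mean and standard deviation are both 1/2, giving CV = 1.
sample = [random.expovariate(2.0) for _ in range(100_000)]

cv = statistics.stdev(sample) / statistics.fmean(sample)
print(f"empirical CV ≈ {cv:.3f}")
```

The squared coefficient of variation (SCV) used in queueing formulas is then just `cv ** 2`.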
The Percent RMS is also used to assess flow uniformity in combustion systems, HVAC systems, ductwork, inlets to fans and filters, air handling units, etc., where performance of the equipment is influenced by the incoming flow distribution.

CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculable by simply averaging CV values across multiple samples within one assay, or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.[17] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples; in this case the standard error in percent is suggested to be superior.[18] If measurements do not have a natural zero point then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended.[19]

The coefficient of variation fulfills the requirements for a measure of economic inequality.[20][21][22] If x (with entries xi) is a list of the values of an economic indicator (e.g. wealth), with xi being the wealth of agent i, then the following requirements are met: cv assumes its minimum value of zero for complete equality (all xi are equal).[22] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to lie within a fixed range (unlike, e.g., the Gini coefficient, which is constrained to be between 0 and 1).[22] It is, however, more mathematically tractable than the Gini coefficient.
Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts.[23][24] Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies.[25] Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation.[26] Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs.[27][28]

Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative units, where the kelvin and Rankine scales are their associated absolute counterparts):

Celsius: [0, 10, 20, 30, 40]
Fahrenheit: [32, 50, 68, 86, 104]

The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%. If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute one. Comparing the same data set, now in absolute units:

Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]
Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]

The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%. Mathematically speaking, the coefficient of variation is not entirely linear.
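The temperature example above can be reproduced directly (a small Python sketch; `cv` is simply the sample CV defined earlier in the article):

```python
import statistics

def cv(data):
    return statistics.stdev(data) / statistics.fmean(data)

celsius = [0, 10, 20, 30, 40]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]  # same temperatures, relative scale
kelvin = [c + 273.15 for c in celsius]          # absolute scale
rankine = [f + 459.67 for f in fahrenheit]      # absolute scale

print(f"CV Celsius:    {cv(celsius):.2%}")      # ~79%
print(f"CV Fahrenheit: {cv(fahrenheit):.2%}")   # ~42%, despite identical temperatures
print(f"CV Kelvin:     {cv(kelvin):.2%}")       # ~5.39%
print(f"CV Rankine:    {cv(rankine):.2%}")      # ~5.39%, absolute scales agree
```

The two relative scales disagree on the same data, while the two absolute scales give identical CVs, because Rankine is an exact constant multiple of kelvin.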
That is, for a random variableX{\displaystyle X}, the coefficient of variation ofaX+b{\displaystyle aX+b}is equal to the coefficient of variation ofX{\displaystyle X}only whenb=0{\displaystyle b=0}. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the formax+b{\displaystyle ax+b}withb≠0{\displaystyle b\neq 0}, whereas Kelvins can be converted to Rankines through a transformation of the formax{\displaystyle ax}. Provided that negative and small positive values of the sample mean occur with negligible frequency, theprobability distributionof the coefficient of variation for a sample of sizen{\displaystyle n}of i.i.d. normal random variables has been shown by Hendricks and Robey to be[29] dFcv=2π1/2Γ(n−12)exp⁡(−n2(σμ)2⋅cv21+cv2)cvn−2(1+cv2)n/2∑∑′i=0n−1⁡(n−1)!Γ(n−i2)(n−1−i)!i!⋅ni/22i/2⋅(σμ)i⋅1(1+cv2)i/2dcv,{\displaystyle \mathrm {d} F_{c_{\rm {v}}}={\frac {2}{\pi ^{1/2}\Gamma {\left({\frac {n-1}{2}}\right)}}}\exp \left(-{\frac {n}{2\left({\frac {\sigma }{\mu }}\right)^{2}}}\cdot {\frac {{c_{\rm {v}}}^{2}}{1+{c_{\rm {v}}}^{2}}}\right){\frac {{c_{\rm {v}}}^{n-2}}{(1+{c_{\rm {v}}}^{2})^{n/2}}}\sideset {}{^{\prime }}\sum _{i=0}^{n-1}{\frac {(n-1)!\,\Gamma \left({\frac {n-i}{2}}\right)}{(n-1-i)!\,i!\,}}\cdot {\frac {n^{i/2}}{2^{i/2}\cdot \left({\frac {\sigma }{\mu }}\right)^{i}}}\cdot {\frac {1}{(1+{c_{\rm {v}}}^{2})^{i/2}}}\,\mathrm {d} c_{\rm {v}},} where the symbol∑∑′{\textstyle \sideset {}{^{\prime }}\sum }indicates that the summation is over only even values ofn−1−i{\displaystyle n-1-i}, i.e., ifn{\displaystyle n}is odd, sum over even values ofi{\displaystyle i}and ifn{\displaystyle n}is even, sum only over odd values ofi{\displaystyle i}. This is useful, for instance, in the construction ofhypothesis testsorconfidence intervals. 
Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation.[30][31][32][33][34][35] Liu (2012) reviews methods for the construction of a confidence interval for the coefficient of variation.[36] Notably, Lehmann (1986) derived the sampling distribution for the coefficient of variation using a non-central t-distribution to give an exact method for the construction of the CI.[37]

Standardized moments are similar ratios, μk/σk{\displaystyle {\mu _{k}}/{\sigma ^{k}}} where μk{\displaystyle \mu _{k}} is the kth moment about the mean, which are also dimensionless and scale invariant. The variance-to-mean ratio, σ2/μ{\displaystyle \sigma ^{2}/\mu }, is another similar ratio, but it is not dimensionless and hence not scale invariant. See Normalization (statistics) for further ratios. In signal processing, particularly image processing, the reciprocal ratio μ/σ{\displaystyle \mu /\sigma } (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular. Other related ratios include:
https://en.wikipedia.org/wiki/Coefficient_of_variation
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a function erf:C→C{\displaystyle \mathrm {erf} :\mathbb {C} \to \mathbb {C} } defined as:[1] erf⁡z=2π∫0ze−t2dt.{\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\int _{0}^{z}e^{-t^{2}}\,\mathrm {d} t.} The integral here is a complex contour integral which is path-independent because exp⁡(−t2){\displaystyle \exp(-t^{2})} is holomorphic on the whole complex plane C{\displaystyle \mathbb {C} }. In many applications, the function argument is a real number, in which case the function value is also real. In some old texts,[2] the error function is defined without the factor of 2π{\displaystyle {\frac {2}{\sqrt {\pi }}}}. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In statistics, for non-negative real values of x, the error function has the following interpretation: for a real random variable Y that is normally distributed with mean 0 and standard deviation 12{\displaystyle {\frac {1}{\sqrt {2}}}}, erf x is the probability that Y falls in the range [−x, x].

Two closely related functions are the complementary error function erfc:C→C{\displaystyle \mathrm {erfc} :\mathbb {C} \to \mathbb {C} }, defined as erfc⁡z=1−erf⁡z,{\displaystyle \operatorname {erfc} z=1-\operatorname {erf} z,} and the imaginary error function erfi:C→C{\displaystyle \mathrm {erfi} :\mathbb {C} \to \mathbb {C} }, defined as erfi⁡z=−ierf⁡iz,{\displaystyle \operatorname {erfi} z=-i\operatorname {erf} iz,} where i is the imaginary unit. The name "error function" and its abbreviation erf were proposed by J. W. L.
Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."[3] The error function complement was also discussed by Glaisher in a separate publication in the same year.[4] For the "law of facility" of errors whose density is given by f(x)=(cπ)1/2e−cx2{\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{1/2}e^{-cx^{2}}} (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: (cπ)12∫pqe−cx2dx=12(erf⁡(qc)−erf⁡(pc)).{\displaystyle \left({\frac {c}{\pi }}\right)^{\frac {1}{2}}\int _{p}^{q}e^{-cx^{2}}\,\mathrm {d} x={\tfrac {1}{2}}\left(\operatorname {erf} \left(q{\sqrt {c}}\right)-\operatorname {erf} \left(p{\sqrt {c}}\right)\right).}

When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system. The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function. The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution: Pr[X≤L]=12+12erf⁡L−μ2σ≈Aexp⁡(−B(L−μσ)2){\displaystyle {\begin{aligned}\Pr[X\leq L]&={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} {\frac {L-\mu }{{\sqrt {2}}\sigma }}\\&\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)\end{aligned}}} where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then: Pr[X≤L]≤Aexp⁡(−Bln⁡k)=AkB{\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}}} so the probability goes to 0 as k → ∞.
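The exact tail formula Pr[X ≤ L] = 1/2 + 1/2 erf((L − μ)/(√2 σ)) is easy to evaluate with `math.erf` (a minimal sketch; the cutoff L = −2 is an arbitrary choice for illustration):

```python
import math

def normal_cdf(L, mu, sigma):
    """Pr[X <= L] for X ~ Norm[mu, sigma], via the error function."""
    return 0.5 + 0.5 * math.erf((L - mu) / (math.sqrt(2) * sigma))

# Probability that a standard normal variable falls below -2:
p = normal_cdf(-2.0, mu=0.0, sigma=1.0)
print(f"Pr[X <= -2] ≈ {p:.5f}")  # ≈ 0.02275
```

This is exactly the quantity bounded by the A·exp(−B((L − μ)/σ)²) estimate in the text.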
The probability forXbeing in the interval[La,Lb]can be derived asPr[La≤X≤Lb]=∫LaLb12πσexp⁡(−(x−μ)22σ2)dx=12(erf⁡Lb−μ2σ−erf⁡La−μ2σ).{\displaystyle {\begin{aligned}\Pr[L_{a}\leq X\leq L_{b}]&=\int _{L_{a}}^{L_{b}}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,\mathrm {d} x\\&={\frac {1}{2}}\left(\operatorname {erf} {\frac {L_{b}-\mu }{{\sqrt {2}}\sigma }}-\operatorname {erf} {\frac {L_{a}-\mu }{{\sqrt {2}}\sigma }}\right).\end{aligned}}} The propertyerf (−z) = −erfzmeans that the error function is anodd function. This directly results from the fact that the integrande−t2is aneven function(the antiderivative of an even function which is zero at the origin is an odd function and vice versa). Since the error function is anentire functionwhich takes real numbers to real numbers, for anycomplex numberz:erf⁡z¯=erf⁡z¯{\displaystyle \operatorname {erf} {\overline {z}}={\overline {\operatorname {erf} z}}}wherez¯{\displaystyle {\overline {z}}}denotes thecomplex conjugateofz{\displaystyle z}. The integrandf= exp(−z2)andf= erfzare shown in the complexz-plane in the figures at right withdomain coloring. The error function at+∞is exactly 1 (seeGaussian integral). At the real axis,erfzapproaches unity atz→ +∞and −1 atz→ −∞. At the imaginary axis, it tends to±i∞. The error function is anentire function; it has no singularities (except that at infinity) and itsTaylor expansionalways converges. Forx>> 1, however, cancellation of leading terms makes the Taylor expansion unpractical. 
The defining integral cannot be evaluated inclosed formin terms ofelementary functions(seeLiouville's theorem), but by expanding theintegrande−z2into itsMaclaurin seriesand integrating term by term, one obtains the error function's Maclaurin series as:erf⁡z=2π∑n=0∞(−1)nz2n+1n!(2n+1)=2π(z−z33+z510−z742+z9216−⋯){\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)\end{aligned}}}which holds for everycomplex numberz. The denominator terms are sequenceA007680in theOEIS. For iterative calculation of the above series, the following alternative formulation may be useful:erf⁡z=2π∑n=0∞(z∏k=1n−(2k−1)z2k(2k+1))=2π∑n=0∞z2n+1∏k=1n−z2k{\displaystyle {\begin{aligned}\operatorname {erf} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)\\[6pt]&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}\end{aligned}}}because⁠−(2k− 1)z2/k(2k+ 1)⁠expresses the multiplier to turn thekth term into the(k+ 1)th term (consideringzas the first term). The imaginary error function has a very similar Maclaurin series, which is:erfi⁡z=2π∑n=0∞z2n+1n!(2n+1)=2π(z+z33+z510+z742+z9216+⋯){\displaystyle {\begin{aligned}\operatorname {erfi} z&={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}\\[6pt]&={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)\end{aligned}}}which holds for everycomplex numberz. 
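The Maclaurin series can be summed iteratively using exactly the term-ratio trick described above, where −z²/(k) and the 1/(2n+1) factor turn each term into the next (a sketch; the 30-term cutoff is an arbitrary but ample choice for small |z|):

```python
import math

def erf_maclaurin(z, terms=30):
    """Partial sum of erf z = (2/sqrt(pi)) * sum_n (-1)^n z^(2n+1) / (n! (2n+1))."""
    total = 0.0
    u = z  # u_n = (-1)^n z^(2n+1) / n!, starting at n = 0
    for n in range(terms):
        total += u / (2 * n + 1)
        u *= -z * z / (n + 1)  # advance u_n to u_{n+1}
    return 2 / math.sqrt(math.pi) * total
```

For moderate arguments the partial sum agrees with `math.erf` to machine precision; as the text notes, for x ≫ 1 cancellation between large terms makes this expansion impractical.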
The derivative of the error function follows immediately from its definition:ddzerf⁡z=2πe−z2.{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {erf} z={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.}From this, the derivative of the imaginary error function is also immediate:ddzerfi⁡z=2πez2.{\displaystyle {\frac {d}{dz}}\operatorname {erfi} z={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.}Anantiderivativeof the error function, obtainable byintegration by parts, iszerf⁡z+e−z2π+C.{\displaystyle z\operatorname {erf} z+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}+C.}An antiderivative of the imaginary error function, also obtainable by integration by parts, iszerfi⁡z−ez2π+C.{\displaystyle z\operatorname {erfi} z-{\frac {e^{z^{2}}}{\sqrt {\pi }}}+C.}Higher order derivatives are given byerf(k)⁡z=2(−1)k−1πHk−1(z)e−z2=2πdk−1dzk−1(e−z2),k=1,2,…{\displaystyle \operatorname {erf} ^{(k)}z={\frac {2(-1)^{k-1}}{\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {\mathrm {d} ^{k-1}}{\mathrm {d} z^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots }whereHare the physicists'Hermite polynomials.[5] An expansion,[6]which converges more rapidly for all real values ofxthan a Taylor expansion, is obtained by usingHans Heinrich Bürmann's theorem:[7]erf⁡x=2πsgn⁡x⋅1−e−x2(1−112(1−e−x2)−7480(1−e−x2)2−5896(1−e−x2)3−787276480(1−e−x2)4−⋯)=2πsgn⁡x⋅1−e−x2(π2+∑k=1∞cke−kx2).{\displaystyle {\begin{aligned}\operatorname {erf} x&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}\left(1-e^{-x^{2}}\right)-{\frac {7}{480}}\left(1-e^{-x^{2}}\right)^{2}-{\frac {5}{896}}\left(1-e^{-x^{2}}\right)^{3}-{\frac {787}{276480}}\left(1-e^{-x^{2}}\right)^{4}-\cdots \right)\\[10pt]&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}}wheresgnis thesign function. 
By keeping only the first two coefficients and choosingc1=⁠31/200⁠andc2= −⁠341/8000⁠, the resulting approximation shows its largest relative error atx= ±1.40587, where it is less than 0.0034361:erf⁡x≈2πsgn⁡x⋅1−e−x2(π2+31200e−x2−3418000e−2x2).{\displaystyle \operatorname {erf} x\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} x\cdot {\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).} Given a complex numberz, there is not auniquecomplex numberwsatisfyingerfw=z, so a true inverse function would be multivalued. However, for−1 <x< 1, there is a uniquerealnumber denotederf−1xsatisfyingerf⁡(erf−1⁡x)=x.{\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}x\right)=x.} Theinverse error functionis usually defined with domain(−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk|z| < 1of the complex plane, using the Maclaurin series[8]erf−1⁡z=∑k=0∞ck2k+1(π2z)2k+1,{\displaystyle \operatorname {erf} ^{-1}z=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}wherec0= 1andck=∑m=0k−1cmck−1−m(m+1)(2m+1)={1,1,76,12790,43692520,3480716200,…}.{\displaystyle {\begin{aligned}c_{k}&=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}\\[1ex]&=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.\end{aligned}}} So we have the series expansion (common factors have been canceled from numerators and denominators):erf−1⁡z=π2(z+π12z3+7π2480z5+127π340320z7+4369π45806080z9+34807π5182476800z11+⋯).{\displaystyle \operatorname {erf} ^{-1}z={\frac {\sqrt {\pi }}{2}}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).}(After cancellation the numerator and denominator values inOEIS:A092676andOEIS:A092677respectively; without 
cancellation the numerator terms are values inOEIS:A002067.) The error function's value at±∞is equal to±1. For|z| < 1, we haveerf(erf−1z) =z. Theinverse complementary error functionis defined aserfc−1⁡(1−z)=erf−1⁡z.{\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}z.}For realx, there is a uniquerealnumbererfi−1xsatisfyingerfi(erfi−1x) =x. Theinverse imaginary error functionis defined aserfi−1x.[9] For any realx,Newton's methodcan be used to computeerfi−1x, and for−1 ≤x≤ 1, the following Maclaurin series converges:erfi−1⁡z=∑k=0∞(−1)kck2k+1(π2z)2k+1,{\displaystyle \operatorname {erfi} ^{-1}z=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}whereckis defined as above. A usefulasymptotic expansionof the complementary error function (and therefore also of the error function) for large realxiserfc⁡x=e−x2xπ(1+∑n=1∞(−1)n1⋅3⋅5⋯(2n−1)(2x2)n)=e−x2xπ∑n=0∞(−1)n(2n−1)!!(2x2)n,{\displaystyle {\begin{aligned}\operatorname {erfc} x&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left(1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{\left(2x^{2}\right)^{n}}}\right)\\[6pt]&={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}},\end{aligned}}}where(2n− 1)!!is thedouble factorialof(2n− 1), which is the product of all odd numbers up to(2n− 1). 
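The Newton iteration mentioned for erfi⁻¹ applies equally well to erf⁻¹ itself, using the derivative d/dx erf x = 2/√π · e^(−x²) given earlier (a sketch; the iteration cap, starting point 0, and tolerance are arbitrary choices):

```python
import math

def erfinv(y, tol=1e-14):
    """Invert erf on (-1, 1) by Newton's method."""
    if not -1.0 < y < 1.0:
        raise ValueError("erf^-1 is real-valued only on (-1, 1)")
    x = 0.0
    for _ in range(100):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        # Newton step: x_{k+1} = x_k - f(x_k)/f'(x_k), with f' = 2/sqrt(pi) e^{-x^2}
        x -= err / (2 / math.sqrt(math.pi) * math.exp(-x * x))
    return x
```

Starting from 0, the iterates increase (or decrease) monotonically toward the root, with quadratic convergence once close; near y = ±1 the flat derivative makes the approach slower.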
This series diverges for every finitex, and its meaning as asymptotic expansion is that for any integerN≥ 1one haserfc⁡x=e−x2xπ∑n=0N−1(−1)n(2n−1)!!(2x2)n+RN(x){\displaystyle \operatorname {erfc} x={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{\left(2x^{2}\right)^{n}}}+R_{N}(x)}where the remainder isRN(x):=(−1)N(2N−1)!!π⋅2N−1∫x∞t−2Ne−t2dt,{\displaystyle R_{N}(x):={\frac {(-1)^{N}\,(2N-1)!!}{{\sqrt {\pi }}\cdot 2^{N-1}}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t,}which follows easily by induction, writinge−t2=−12tddte−t2{\displaystyle e^{-t^{2}}=-{\frac {1}{2t}}\,{\frac {\mathrm {d} }{\mathrm {d} t}}e^{-t^{2}}}and integrating by parts. The asymptotic behavior of the remainder term, inLandau notation, isRN(x)=O(x−(1+2N)e−x2){\displaystyle R_{N}(x)=O\left(x^{-(1+2N)}e^{-x^{2}}\right)}asx→ ∞. This can be found byRN(x)∝∫x∞t−2Ne−t2dt=e−x2∫0∞(t+x)−2Ne−t2−2txdt≤e−x2∫0∞x−2Ne−2txdt∝x−(1+2N)e−x2.{\displaystyle R_{N}(x)\propto \int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,\mathrm {d} t=e^{-x^{2}}\int _{0}^{\infty }(t+x)^{-2N}e^{-t^{2}-2tx}\,\mathrm {d} t\leq e^{-x^{2}}\int _{0}^{\infty }x^{-2N}e^{-2tx}\,\mathrm {d} t\propto x^{-(1+2N)}e^{-x^{2}}.}For large enough values ofx, only the first few terms of this asymptotic expansion are needed to obtain a good approximation oferfcx(while for not too large values ofx, the above Taylor expansion at 0 provides a very fast convergence). 
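The truncated expansion can be compared against a library erfc to see how quickly a few terms suffice for large x (a sketch; the five-term truncation is arbitrary, and per the remainder bound the error is roughly the size of the first omitted term):

```python
import math

def erfc_asymptotic(x, n_terms=5):
    """Leading terms of the (divergent) asymptotic expansion of erfc for large real x."""
    total = 0.0
    term = 1.0  # (-1)^n (2n-1)!! / (2x^2)^n, starting at n = 0
    for n in range(n_terms):
        total += term
        term *= -(2 * n + 1) / (2 * x * x)  # (2n+1)!! = (2n-1)!! * (2n+1)
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total
```

At x = 5 five terms already give several significant digits, while at small x the truncated series is useless, which is exactly the trade-off against the Taylor expansion described above.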
Acontinued fractionexpansion of the complementary error function was found byLaplace:[10][11]erfc⁡z=zπe−z21z2+a11+a2z2+a31+⋯,am=m2.{\displaystyle \operatorname {erfc} z={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}},\qquad a_{m}={\frac {m}{2}}.} The inversefactorial series:erfc⁡z=e−z2πz∑n=0∞(−1)nQn(z2+1)n¯=e−z2πz[1−121(z2+1)+141(z2+1)(z2+2)−⋯]{\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {\left(-1\right)^{n}Q_{n}}{{\left(z^{2}+1\right)}^{\bar {n}}}}\\[1ex]&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left[1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{\left(z^{2}+1\right)\left(z^{2}+2\right)}}-\cdots \right]\end{aligned}}}converges forRe(z2) > 0. HereQn=def1Γ(12)∫0∞τ(τ−1)⋯(τ−n+1)τ−12e−τdτ=∑k=0n(12)k¯s(n,k),{\displaystyle {\begin{aligned}Q_{n}&{\overset {\text{def}}{{}={}}}{\frac {1}{\Gamma {\left({\frac {1}{2}}\right)}}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-{\frac {1}{2}}}e^{-\tau }\,d\tau \\[1ex]&=\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),\end{aligned}}}zndenotes therising factorial, ands(n,k)denotes a signedStirling number of the first kind.[12][13]There also exists a representation by an infinite sum containing thedouble factorial:erf⁡z=2π∑n=0∞(−2)n(2n−1)!!(2n+1)!z2n+1{\displaystyle \operatorname {erf} z={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-2)^{n}(2n-1)!!}{(2n+1)!}}z^{2n+1}} wherea1= 0.278393,a2= 0.230389,a3= 0.000972,a4= 0.078108 erf⁡x≈1−(a1t+a2t2+a3t3)e−x2,t=11+px,x≥0{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+a_{3}t^{3}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0}(maximum error:2.5×10−5) wherep= 0.47047,a1= 0.3480242,a2= −0.0958798,a3= 0.7478556 erf⁡x≈1−1(1+a1x+a2x2+⋯+a6x6)16,x≥0{\displaystyle \operatorname {erf} x\approx 1-{\frac {1}{\left(1+a_{1}x+a_{2}x^{2}+\cdots 
+a_{6}x^{6}\right)^{16}}},\qquad x\geq 0}(maximum error:3×10−7) wherea1= 0.0705230784,a2= 0.0422820123,a3= 0.0092705272,a4= 0.0001520143,a5= 0.0002765672,a6= 0.0000430638 erf⁡x≈1−(a1t+a2t2+⋯+a5t5)e−x2,t=11+px{\displaystyle \operatorname {erf} x\approx 1-\left(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5}\right)e^{-x^{2}},\quad t={\frac {1}{1+px}}}(maximum error:1.5×10−7) wherep= 0.3275911,a1= 0.254829592,a2= −0.284496736,a3= 1.421413741,a4= −1.453152027,a5= 1.061405429 All of these approximations are valid forx≥ 0. To use these approximations for negativex, use the fact thaterfxis an odd function, soerfx= −erf(−x). This approximation can be inverted to obtain an approximation for the inverse error function:erf−1⁡x≈sgn⁡x⋅(2πa+ln⁡(1−x2)2)2−ln⁡(1−x2)a−(2πa+ln⁡(1−x2)2).{\displaystyle \operatorname {erf} ^{-1}x\approx \operatorname {sgn} x\cdot {\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)^{2}-{\frac {\ln \left(1-x^{2}\right)}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln \left(1-x^{2}\right)}{2}}\right)}}.} Thecomplementary error function, denotederfc, is defined as erfc⁡x=1−erf⁡x=2π∫x∞e−t2dt=e−x2erfcx⁡x,{\displaystyle {\begin{aligned}\operatorname {erfc} x&=1-\operatorname {erf} x\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,\mathrm {d} t\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} x,\end{aligned}}}which also defineserfcx, thescaled complementary error function[26](which can be used instead oferfcto avoidarithmetic underflow[26][27]). Another form oferfcxforx≥ 0is known as Craig's formula, after its discoverer:[28]erfc⁡(x∣x≥0)=2π∫0π2exp⁡(−x2sin2⁡θ)dθ.{\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,\mathrm {d} \theta .}This expression is valid only for positive values ofx, but it can be used in conjunction witherfcx= 2 − erfc(−x)to obtainerfc(x)for negative values. 
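For instance, the five-coefficient rational approximation quoted above (with p = 0.3275911 and stated maximum error 1.5×10⁻⁷) can be written as follows, using the oddness of erf to handle negative arguments:

```python
import math

P = 0.3275911
A = [0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429]

def erf_approx(x):
    """Rational approximation erf x ≈ 1 - (a1 t + ... + a5 t^5) e^{-x^2}, t = 1/(1 + p x)."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)  # the approximation is stated for x >= 0; use erf(-x) = -erf(x)
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
    return sign * (1.0 - poly * math.exp(-x * x))
```

A quick comparison against `math.erf` confirms agreement within the stated error bound across the real line.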
This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for theerfcof the sum of two non-negative variables is as follows:[29]erfc⁡(x+y∣x,y≥0)=2π∫0π2exp⁡(−x2sin2⁡θ−y2cos2⁡θ)dθ.{\displaystyle \operatorname {erfc} (x+y\mid x,y\geq 0)={\frac {2}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}-{\frac {y^{2}}{\cos ^{2}\theta }}\right)\,\mathrm {d} \theta .} Theimaginary error function, denotederfi, is defined as erfi⁡x=−ierf⁡ix=2π∫0xet2dt=2πex2D(x),{\displaystyle {\begin{aligned}\operatorname {erfi} x&=-i\operatorname {erf} ix\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,\mathrm {d} t\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}}whereD(x)is theDawson function(which can be used instead oferfito avoidarithmetic overflow[26]). Despite the name "imaginary error function",erfixis real whenxis real. When the error function is evaluated for arbitrarycomplexargumentsz, the resultingcomplex error functionis usually discussed in scaled form as theFaddeeva function:w(z)=e−z2erfc⁡(−iz)=erfcx⁡(−iz).{\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).} The error function is essentially identical to the standardnormal cumulative distribution function, denotedΦ, also namednorm(x)by some software languages[citation needed], as they differ only by scaling and translation. 
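Craig's formula is convenient numerically precisely because its integration range is fixed and finite; a direct midpoint-rule evaluation illustrates this (a sketch; the 20,000-point grid is an arbitrary choice):

```python
import math

def erfc_craig(x, n=20_000):
    """Craig's formula erfc(x) = (2/pi) ∫_0^{pi/2} exp(-x^2 / sin^2 θ) dθ, for x >= 0,
    approximated with the midpoint rule."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return (2 / math.pi) * total * h
```

The integrand vanishes rapidly as θ → 0 (for x > 0) and is smooth elsewhere, so even this naive quadrature matches `math.erfc` closely.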
Indeed, Φ(x)=12π∫−∞xe−t22dt=12(1+erf⁡x2)=12erfc⁡(−x2){\displaystyle {\begin{aligned}\Phi (x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,\mathrm {d} t\\[6pt]&={\frac {1}{2}}\left(1+\operatorname {erf} {\frac {x}{\sqrt {2}}}\right)\\[6pt]&={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)\end{aligned}}}or rearranged forerfanderfc:erf⁡(x)=2Φ(x2)−1erfc⁡(x)=2Φ(−x2)=2(1−Φ(x2)).{\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi {\left(x{\sqrt {2}}\right)}-1\\[6pt]\operatorname {erfc} (x)&=2\Phi {\left(-x{\sqrt {2}}\right)}\\&=2\left(1-\Phi {\left(x{\sqrt {2}}\right)}\right).\end{aligned}}} Consequently, the error function is also closely related to theQ-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function asQ(x)=12−12erf⁡x2=12erfc⁡x2.{\displaystyle {\begin{aligned}Q(x)&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} {\frac {x}{\sqrt {2}}}\\&={\frac {1}{2}}\operatorname {erfc} {\frac {x}{\sqrt {2}}}.\end{aligned}}} TheinverseofΦis known as thenormal quantile function, orprobitfunction and may be expressed in terms of the inverse error function asprobit⁡(p)=Φ−1(p)=2erf−1⁡(2p−1)=−2erfc−1⁡(2p).{\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).} The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics. 
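These scaling identities can be verified directly with Python's standard library, where statistics.NormalDist supplies Φ and its inverse; a short sketch:

```python
import math
from statistics import NormalDist

phi = NormalDist().cdf          # standard normal CDF, i.e. Phi
probit = NormalDist().inv_cdf   # its inverse, the probit function

for x in (-1.5, 0.0, 0.7, 2.0):
    # Phi(x) = (1/2)(1 + erf(x / sqrt(2)))
    assert math.isclose(phi(x), 0.5 * (1 + math.erf(x / math.sqrt(2))), abs_tol=1e-12)
    # erfc(x) = 2 Phi(-x sqrt(2))
    assert math.isclose(math.erfc(x), 2 * phi(-x * math.sqrt(2)), abs_tol=1e-12)
    # Q(x) = 1 - Phi(x) = (1/2) erfc(x / sqrt(2))
    assert math.isclose(1 - phi(x), 0.5 * math.erfc(x / math.sqrt(2)), abs_tol=1e-12)

# probit is the inverse of Phi
assert math.isclose(phi(probit(0.975)), 0.975, abs_tol=1e-9)
```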
The error function is a special case of theMittag-Leffler function, and can also be expressed as aconfluent hypergeometric function(Kummer's function):erf⁡x=2xπM(12,32,−x2).{\displaystyle \operatorname {erf} x={\frac {2x}{\sqrt {\pi }}}M\left({\tfrac {1}{2}},{\tfrac {3}{2}},-x^{2}\right).} It has a simple expression in terms of theFresnel integral.[further explanation needed] In terms of theregularized gamma functionPand theincomplete gamma function,erf⁡x=sgn⁡x⋅P(12,x2)=sgn⁡xπγ(12,x2).{\displaystyle \operatorname {erf} x=\operatorname {sgn} x\cdot P\left({\tfrac {1}{2}},x^{2}\right)={\frac {\operatorname {sgn} x}{\sqrt {\pi }}}\gamma {\left({\tfrac {1}{2}},x^{2}\right)}.}sgnxis thesign function. The iterated integrals of the complementary error function are defined by[30]inerfc⁡z=∫z∞in−1erfc⁡ζdζi0erfc⁡z=erfc⁡zi1erfc⁡z=ierfc⁡z=1πe−z2−zerfc⁡zi2erfc⁡z=14(erfc⁡z−2zierfc⁡z){\displaystyle {\begin{aligned}i^{n}\!\operatorname {erfc} z&=\int _{z}^{\infty }i^{n-1}\!\operatorname {erfc} \zeta \,\mathrm {d} \zeta \\[6pt]i^{0}\!\operatorname {erfc} z&=\operatorname {erfc} z\\i^{1}\!\operatorname {erfc} z&=\operatorname {ierfc} z={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} z\\i^{2}\!\operatorname {erfc} z&={\tfrac {1}{4}}\left(\operatorname {erfc} z-2z\operatorname {ierfc} z\right)\\\end{aligned}}} The general recurrence formula is2n⋅inerfc⁡z=in−2erfc⁡z−2z⋅in−1erfc⁡z{\displaystyle 2n\cdot i^{n}\!\operatorname {erfc} z=i^{n-2}\!\operatorname {erfc} z-2z\cdot i^{n-1}\!\operatorname {erfc} z} They have the power seriesinerfc⁡z=∑j=0∞(−z)j2n−jj!Γ(1+n−j2),{\displaystyle i^{n}\!\operatorname {erfc} z=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\,\Gamma \left(1+{\frac {n-j}{2}}\right)}},}from which follow the symmetry propertiesi2merfc⁡(−z)=−i2merfc⁡z+∑q=0mz2q22(m−q)−1(2q)!(m−q)!{\displaystyle i^{2m}\!\operatorname {erfc} (-z)=-i^{2m}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac 
{z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}}andi2m+1erfc⁡(−z)=i2m+1erfc⁡z+∑q=0mz2q+122(m−q)−1(2q+1)!(m−q)!.{\displaystyle i^{2m+1}\!\operatorname {erfc} (-z)=i^{2m+1}\!\operatorname {erfc} z+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.}
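The power series and the recurrence can be cross-checked numerically. In the sketch below the reciprocal of Γ is taken to be 0 at its poles, which makes the series terms vanish exactly where Γ diverges:

```python
import math

def rgamma(x: float) -> float:
    """1 / Gamma(x), taken to be 0 at the poles x = 0, -1, -2, ..."""
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def inerfc(n: int, z: float, terms: int = 80) -> float:
    """n-th iterated integral of erfc, summed from the power series above."""
    total = 0.0
    for j in range(terms):
        total += (-z) ** j / (2 ** (n - j) * math.factorial(j)) * rgamma(1 + (n - j) / 2)
    return total

z = 0.5
# n = 0 and n = 1 match the closed forms erfc(z) and ierfc(z)
assert math.isclose(inerfc(0, z), math.erfc(z), abs_tol=1e-12)
ierfc = math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z)
assert math.isclose(inerfc(1, z), ierfc, abs_tol=1e-12)
# recurrence: 2n i^n erfc z = i^{n-2} erfc z - 2z i^{n-1} erfc z
for n in (2, 3, 4):
    assert math.isclose(2 * n * inerfc(n, z),
                        inerfc(n - 2, z) - 2 * z * inerfc(n - 1, z), abs_tol=1e-12)
```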
https://en.wikipedia.org/wiki/Error_function
TheMahalanobis distanceis ameasure of the distancebetween a pointP{\displaystyle P}and aprobability distributionD{\displaystyle D}, introduced byP. C. Mahalanobisin 1936.[1]The mathematical details of Mahalanobis distance first appeared in theJournal of The Asiatic Society of Bengalin 1936.[2]Mahalanobis's definition was prompted by the problem ofidentifying the similaritiesof skulls based on measurements (the earliest work related to similarities of skulls is from 1922 and another later work is from 1927).[3][4]R.C. Boselater obtained the sampling distribution of Mahalanobis distance, under the assumption of equal dispersion.[5] It is a multivariate generalization of the square of thestandard scorez=(x−μ)/σ{\displaystyle z=(x-\mu )/\sigma }: how manystandard deviationsawayP{\displaystyle P}is from themeanofD{\displaystyle D}. This distance is zero forP{\displaystyle P}at the mean ofD{\displaystyle D}and grows asP{\displaystyle P}moves away from the mean along eachprincipal componentaxis. If each of these axes is re-scaled to have unit variance, then the Mahalanobis distance corresponds to standardEuclidean distancein the transformed space. The Mahalanobis distance is thusunitless,scale-invariant, and takes into account thecorrelationsof thedata set.
Given a probability distributionQ{\displaystyle Q}onRN{\displaystyle \mathbb {R} ^{N}}, with meanμ→=(μ1,μ2,μ3,…,μN)T{\displaystyle {\vec {\mu }}=(\mu _{1},\mu _{2},\mu _{3},\dots ,\mu _{N})^{\mathsf {T}}}and positive-definitecovariance matrixΣ{\displaystyle \mathbf {\Sigma } }, the Mahalanobis distance of a pointx→=(x1,x2,x3,…,xN)T{\displaystyle {\vec {x}}=(x_{1},x_{2},x_{3},\dots ,x_{N})^{\mathsf {T}}}fromQ{\displaystyle Q}is[6]dM(x→,Q)=(x→−μ→)TΣ−1(x→−μ→).{\displaystyle d_{M}({\vec {x}},Q)={\sqrt {({\vec {x}}-{\vec {\mu }})^{\mathsf {T}}\mathbf {\Sigma } ^{-1}({\vec {x}}-{\vec {\mu }})}}.}Given two pointsx→{\displaystyle {\vec {x}}}andy→{\displaystyle {\vec {y}}}inRN{\displaystyle \mathbb {R} ^{N}}, the Mahalanobis distance between them with respect toQ{\displaystyle Q}isdM(x→,y→;Q)=(x→−y→)TΣ−1(x→−y→),{\displaystyle d_{M}({\vec {x}},{\vec {y}};Q)={\sqrt {({\vec {x}}-{\vec {y}})^{\mathsf {T}}\mathbf {\Sigma } ^{-1}({\vec {x}}-{\vec {y}})}},}which means thatdM(x→,Q)=dM(x→,μ→;Q){\displaystyle d_{M}({\vec {x}},Q)=d_{M}({\vec {x}},{\vec {\mu }};Q)}. SinceΣ{\displaystyle \mathbf {\Sigma } }ispositive-definite, so isΣ−1{\displaystyle \mathbf {\Sigma } ^{-1}}, thus the square roots are always defined. We can find useful decompositions of the squared Mahalanobis distance that help to explain some reasons for the outlyingness of multivariate observations and also provide a graphical tool for identifying outliers.[7] By thespectral theorem,Σ{\displaystyle \mathbf {\Sigma } }can be decomposed asΣ=STS{\displaystyle \mathbf {\Sigma } =\mathbf {S} ^{T}\mathbf {S} }for some realN×N{\displaystyle N\times N}matrix. One choice forS{\displaystyle \mathbf {S} }is the symmetric square root ofΣ{\displaystyle \mathbf {\Sigma } }, which is thestandard deviation matrix.[8]This gives us the equivalent definitiondM(x→,y→;Q)=‖S−1(x→−y→)‖{\displaystyle d_{M}({\vec {x}},{\vec {y}};Q)=\|\mathbf {S} ^{-1}({\vec {x}}-{\vec {y}})\|}where‖⋅‖{\displaystyle \|\cdot \|}is the Euclidean norm.
That is, the Mahalanobis distance is the Euclidean distance after awhitening transformation. The existence ofS{\displaystyle \mathbf {S} }is guaranteed by the spectral theorem, but it is not unique. Different choices have different theoretical and practical advantages.[9] In practice, the distributionQ{\displaystyle Q}is usually thesample distributionfrom a set ofIIDsamples from an underlying unknown distribution, soμ{\displaystyle \mu }is the sample mean, andΣ{\displaystyle \mathbf {\Sigma } }is the covariance matrix of the samples. When theaffine spanof the samples is not the entireRN{\displaystyle \mathbb {R} ^{N}}, the covariance matrix would not be positive-definite, which means the above definition would not work. However, in general, the Mahalanobis distance is preserved under any full-rank affine transformation of the affine span of the samples. So in case the affine span is not the entireRN{\displaystyle \mathbb {R} ^{N}}, the samples can be first orthogonally projected toRn{\displaystyle \mathbb {R} ^{n}}, wheren{\displaystyle n}is the dimension of the affine span of the samples, then the Mahalanobis distance can be computed as usual. Consider the problem of estimating the probability that a test point inN-dimensionalEuclidean spacebelongs to a set, where we are given sample points that definitely belong to that set. Our first step would be to find thecentroidor center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more likely it is to belong to the set. However, we also need to know if the set is spread out over a large range or a small range, so that we can decide whether a given distance from the center is noteworthy or not. The simplistic approach is to estimate thestandard deviationof the distances of the sample points from the center of mass. 
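The whitening view can be illustrated with a small sketch: factor Σ = LLᵀ by Cholesky decomposition (one convenient, non-unique choice of whitening transform) and compare ‖L⁻¹(x − y)‖ with the direct quadratic-form computation. The 2×2 covariance below is an arbitrary illustrative value:

```python
import math

# Illustrative 2x2 covariance matrix (made-up values)
sigma = ((4.0, 2.0), (2.0, 3.0))

# Cholesky factorization Sigma = L L^T, written out by hand for the 2x2 case
l11 = math.sqrt(sigma[0][0])
l21 = sigma[1][0] / l11
l22 = math.sqrt(sigma[1][1] - l21 * l21)

def whitened_distance(x, y):
    """||L^{-1}(x - y)|| by forward substitution; L^{-1} whitens the data."""
    v = (x[0] - y[0], x[1] - y[1])
    w0 = v[0] / l11
    w1 = (v[1] - l21 * w0) / l22
    return math.hypot(w0, w1)

def direct_distance(x, y):
    """sqrt(v^T Sigma^{-1} v) via the explicit 2x2 inverse."""
    v = (x[0] - y[0], x[1] - y[1])
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
    q = (sigma[1][1] * v[0] ** 2 - 2 * sigma[0][1] * v[0] * v[1]
         + sigma[0][0] * v[1] ** 2) / det
    return math.sqrt(q)

x, y = (1.0, 2.0), (-1.0, 0.5)
assert math.isclose(whitened_distance(x, y), direct_distance(x, y), rel_tol=1e-12)
```

Since Σ = LLᵀ implies Σ⁻¹ = L⁻ᵀL⁻¹, the Euclidean norm of the whitened difference equals the Mahalanobis distance exactly.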
If the distance between the test point and the center of mass is less than one standard deviation, then we might conclude that it is highly probable that the test point belongs to the set. The further away it is, the more likely that the test point should not be classified as belonging to the set. This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be‖x−μ‖2σ{\displaystyle {\frac {\lVert x-\mu \rVert _{2}}{\sigma }}}, which reads:testpoint−sample meanstandard deviation{\displaystyle {\frac {{\text{testpoint}}-{\text{sample mean}}}{\text{standard deviation}}}}. By plugging this into the normal distribution, we can derive the probability of the test point belonging to the set. The drawback of the above approach was that we assumed that the sample points are distributed about the center of mass in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of the test point belonging to the set to depend not only on the distance from the center of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be closer, while in those where the axis is long the test point can be further away from the center. Putting this on a mathematical basis, the ellipsoid that best represents the set's probability distribution can be estimated by building the covariance matrix of the samples. The Mahalanobis distance is the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point. 
For anormal distributionin any number of dimensions, the probability density of an observationx→{\displaystyle {\vec {x}}}is uniquely determined by the Mahalanobis distanced{\displaystyle d}: Specifically,d2{\displaystyle d^{2}}follows thechi-squared distributionwithn{\displaystyle n}degrees of freedom, wheren{\displaystyle n}is the number of dimensions of the normal distribution. If the number of dimensions is 2, for example, the probability of a particular calculatedd{\displaystyle d}being less than some thresholdt{\displaystyle t}is1−e−t2/2{\displaystyle 1-e^{-t^{2}/2}}. To determine a threshold to achieve a particular probability,p{\displaystyle p}, uset=−2ln⁡(1−p){\textstyle t={\sqrt {-2\ln(1-p)}}}, for 2 dimensions. For number of dimensions other than 2, the cumulative chi-squared distribution should be consulted. In a normal distribution, the region where the Mahalanobis distance is less than one (i.e. the region inside the ellipsoid at distance one) is exactly the region where the probability distribution isconcave. The Mahalanobis distance is proportional, for a normal distribution, to the square root of the negativelog-likelihood(after adding a constant so the minimum is at zero). The sample mean and covariance matrix can be quite sensitive to outliers, therefore other approaches for calculating the multivariate location and scatter of data are also commonly used when calculating the Mahalanobis distance. 
The Minimum Covariance Determinant approach estimates multivariate location and scatter from a subset numberingh{\displaystyle h}data points that has the smallest variance-covariance matrix determinant.[10]The Minimum Volume Ellipsoid approach is similar to the Minimum Covariance Determinant approach in that it works with a subset of sizeh{\displaystyle h}data points, but the Minimum Volume Ellipsoid estimates multivariate location and scatter from the ellipsoid of minimal volume that encapsulates theh{\displaystyle h}data points.[11]Each method varies in its definition of the distribution of the data, and therefore produces different Mahalanobis distances. The Minimum Covariance Determinant and Minimum Volume Ellipsoid approaches are more robust to samples that contain outliers, while the sample mean and covariance matrix tends to be more reliable with small and biased data sets.[12] In general, given a normal (Gaussian) random variableX{\displaystyle X}with varianceS=1{\displaystyle S=1}and meanμ=0{\displaystyle \mu =0}, any other normal random variableR{\displaystyle R}(with meanμ1{\displaystyle \mu _{1}}and varianceS1{\displaystyle S_{1}}) can be defined in terms ofX{\displaystyle X}by the equationR=μ1+S1X.{\displaystyle R=\mu _{1}+{\sqrt {S_{1}}}X.}Conversely, to recover a normalized random variable from any normal random variable, one can typically solve forX=(R−μ1)/S1{\displaystyle X=(R-\mu _{1})/{\sqrt {S_{1}}}}. If we square both sides, and take the square-root, we will get an equation for a metric that looks a lot like the Mahalanobis distance: D=X2=(R−μ1)2/S1=(R−μ1)S1−1(R−μ1).{\displaystyle D={\sqrt {X^{2}}}={\sqrt {(R-\mu _{1})^{2}/S_{1}}}={\sqrt {(R-\mu _{1})S_{1}^{-1}(R-\mu _{1})}}.} The resulting magnitude is always non-negative and varies with the distance of the data from the mean, attributes that are convenient when trying to define a model for the data. 
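Returning to the chi-squared relationship quoted earlier for two dimensions, the threshold t = √(−2 ln(1 − p)) can be exercised with a quick Monte Carlo check (sample size and seed are arbitrary):

```python
import math
import random

p = 0.95
t = math.sqrt(-2 * math.log(1 - p))          # threshold for 2 dimensions
assert math.isclose(1 - math.exp(-t * t / 2), p, abs_tol=1e-12)

# Monte Carlo: for a standard bivariate normal, the Mahalanobis distance is
# just the Euclidean norm, so P(d < t) should come out close to p.
rng = random.Random(0)
n = 200_000
hits = sum(math.hypot(rng.gauss(0, 1), rng.gauss(0, 1)) < t for _ in range(n))
assert abs(hits / n - p) < 0.01
```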
Mahalanobis distance is closely related to theleverage statistic,h{\displaystyle h}, but has a different scale: D2=(N−1)(h−1N).{\displaystyle D^{2}=(N-1)\left(h-{\tfrac {1}{N}}\right).} Mahalanobis distance is widely used incluster analysisandclassificationtechniques. It is closely related toHotelling's T-square distributionused for multivariate statistical testing and Fisher'slinear discriminant analysisthat is used forsupervised classification.[13] In order to use the Mahalanobis distance to classify a test point as belonging to one ofNclasses, one firstestimates the covariance matrixof each class, usually based on samples known to belong to each class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to that class for which the Mahalanobis distance is minimal. Mahalanobis distance and leverage are often used to detectoutliers, especially in the development oflinear regressionmodels. A point that has a greater Mahalanobis distance from the rest of the sample population of points is said to have higher leverage since it has a greater influence on the slope or coefficients of the regression equation. Mahalanobis distance is also used to determine multivariate outliers. Regression techniques can be used to determine if a specific case within a sample population is an outlier via the combination of two or more variable scores. Even for normal distributions, a point can be a multivariate outlier even if it is not a univariate outlier for any variable (consider a probability density concentrated along the linex1=x2{\displaystyle x_{1}=x_{2}}, for example), making Mahalanobis distance a more sensitive measure than checking dimensions individually. Mahalanobis distance has also been used inecological niche modelling,[14][15]as the convex elliptical shape of the distances relates well to the concept of thefundamental niche. 
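A minimal sketch of this minimum-distance classification rule, with two hypothetical classes in two dimensions (the means and covariances are made-up illustrative values):

```python
import math

def mahalanobis_sq(x, mu, cov_inv):
    """Squared Mahalanobis distance, with the 2x2 inverse covariance given."""
    v = (x[0] - mu[0], x[1] - mu[1])
    return (v[0] * (cov_inv[0][0] * v[0] + cov_inv[0][1] * v[1])
            + v[1] * (cov_inv[1][0] * v[0] + cov_inv[1][1] * v[1]))

# Two hypothetical classes, here with identity covariance (so cov_inv = I)
classes = {
    "A": ((0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
    "B": ((4.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
}

def classify(x):
    """Assign x to the class whose Mahalanobis distance is minimal."""
    return min(classes, key=lambda c: mahalanobis_sq(x, *classes[c]))

assert classify((1.0, 0.5)) == "A"
assert classify((3.5, -0.2)) == "B"
```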
Another example of usage is in finance, where Mahalanobis distance has been used to compute an indicator called the "turbulence index",[16]which is a statistical measure of financial markets abnormal behaviour. An implementation as a Web API of this indicator is available online.[17] Many programming languages and statistical packages, such asR,Python, etc., include implementations of Mahalanobis distance.
https://en.wikipedia.org/wiki/Mahalanobis_distance
TheOmega ratiois a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Con Keating and William F. Shadwick in 2002 and is defined as the probability weighted ratio of gains versus losses for some threshold return target.[1]The ratio is an alternative for the widely usedSharpe ratioand is based on information the Sharpe ratio discards. Omega is calculated by creating a partition in the cumulative return distribution in order to create an area of losses and an area for gains relative to this threshold. The ratio is calculated as:Ω(θ)=∫θ∞[1−F(r)]dr∫−∞θF(r)dr,{\displaystyle \Omega (\theta )={\frac {\int _{\theta }^{\infty }\left[1-F(r)\right]dr}{\int _{-\infty }^{\theta }F(r)\,dr}},}whereF{\displaystyle F}is thecumulative probability distribution functionof the returns andθ{\displaystyle \theta }is the target return threshold defining what is considered a gain versus a loss. A larger ratio indicates that the asset provides more gains relative to losses for some thresholdθ{\displaystyle \theta }and so would be preferred by an investor. Whenθ{\displaystyle \theta }is set to zero the gain-loss-ratio by Bernardo and Ledoit arises as a special case.[2] Comparisons can be made with the commonly usedSharpe ratiowhich considers the ratio of return versus volatility.[3]The Sharpe ratio considers only the first twomomentsof the return distribution whereas the Omega ratio, by construction, considers all moments. The standard form of the Omega ratio is a non-convex function, but it is possible to optimize a transformed version usinglinear programming.[4]To begin with, Kapsos et al. show that the Omega ratio of a portfolio is:Ω(θ)=wTE⁡(r)−θE⁡[(θ−wTr)+]+1{\displaystyle \Omega (\theta )={w^{T}\operatorname {E} (r)-\theta \over {\operatorname {E} [(\theta -w^{T}r)_{+}]}}+1}The optimization problem that maximizes the Omega ratio is given by:maxwwTE⁡(r)−θE⁡[(θ−wTr)+],s.t.wTE⁡(r)≥θ,wT1=1,w≥0{\displaystyle \max _{w}{w^{T}\operatorname {E} (r)-\theta \over {\operatorname {E} [(\theta -w^{T}r)_{+}]}},\quad {\text{s.t. 
}}w^{T}\operatorname {E} (r)\geq \theta ,\;w^{T}{\bf {1}}=1,\;w\geq 0}The objective function is non-convex, so several modifications are made. First, note that the discrete analogue of the objective function is:wTE⁡(r)−θ∑jpj(θ−wTr)+{\displaystyle {w^{T}\operatorname {E} (r)-\theta \over {\sum _{j}p_{j}(\theta -w^{T}r)_{+}}}}Form{\displaystyle m}sampled asset class returns, letuj=(θ−wTrj)+{\displaystyle u_{j}=(\theta -w^{T}r_{j})_{+}}andpj=m−1{\displaystyle p_{j}=m^{-1}}. Then the discrete objective function becomes:wTE⁡(r)−θm−11Tu∝wTE⁡(r)−θ1Tu{\displaystyle {w^{T}\operatorname {E} (r)-\theta \over {m^{-1}{\bf {1}}^{T}u}}\propto {w^{T}\operatorname {E} (r)-\theta \over {{\bf {1}}^{T}u}}}Following these substitutions, the non-convex optimization problem is transformed into an instance oflinear-fractional programming. Assuming that the feasible region is non-empty and bounded, it is possible to transform a linear-fractional program into a linear program. Conversion from a linear-fractional program to a linear program yields the final form of the Omega ratio optimization problem:maxy,q,zyTE⁡(r)−θzs.t.yTE⁡(r)≥θz,qT1=1,yT1=zqj≥θz−yTrj,q,z≥0,zL≤y≤zU{\displaystyle {\begin{aligned}\max _{y,q,z}{}&y^{T}\operatorname {E} (r)-\theta z\\{\text{s.t. }}&y^{T}\operatorname {E} (r)\geq \theta z,\;q^{T}{\bf {1}}=1,\;y^{T}{\bf {1}}=z\\&q_{j}\geq \theta z-y^{T}r_{j},\;q,z\geq 0,\;z{\mathcal {L}}\leq y\leq z{\mathcal {U}}\end{aligned}}}whereL,U{\displaystyle {\mathcal {L}},\;{\mathcal {U}}}are the respective lower and upper bounds for the portfolio weights. To recover the portfolio weights, normalize the values ofy{\displaystyle y}so that their sum is equal to 1.
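On sampled returns the ratio reduces to averages of thresholded gains and losses; the sketch below (with made-up return data) also confirms the Kapsos et al. identity Ω(θ) = (E[r] − θ)/E[(θ − r)₊] + 1 quoted above:

```python
import math

returns = [0.05, -0.02, 0.01, 0.07, -0.04, 0.03, 0.00, -0.01]  # made-up data
theta = 0.005                                                  # threshold return

gains = sum(max(r - theta, 0.0) for r in returns) / len(returns)   # E[(r - theta)+]
losses = sum(max(theta - r, 0.0) for r in returns) / len(returns)  # E[(theta - r)+]
omega = gains / losses

# Kapsos et al. identity: Omega(theta) = (E[r] - theta) / E[(theta - r)+] + 1
mean_r = sum(returns) / len(returns)
omega_kapsos = (mean_r - theta) / losses + 1

assert math.isclose(omega, omega_kapsos, rel_tol=1e-12)
assert omega > 1  # gains outweigh losses at this threshold for this sample
```

The identity holds exactly because E[(r − θ)₊] − E[(θ − r)₊] = E[r] − θ.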
https://en.wikipedia.org/wiki/Omega_ratio
Astandard normal deviateis anormally distributeddeviate. It is arealizationof astandard normal random variable, defined as arandom variablewithexpected value0 andvariance1.[1]Where collections of such random variables are used, there is often an associated (possibly unstated) assumption that members of such collections arestatistically independent. Standard normal variables play a major role in theoretical statistics in the description of many types of models, particularly inregression analysis, theanalysis of varianceandtime series analysis. When the term "deviate" is used, rather than "variable", there is a connotation that the value concerned is treated as the no-longer-random outcome of a standard normal random variable. The terminology here is the same as that forrandom variableandrandom variate. Standard normal deviates arise in practicalstatisticsin two ways. Thisstatistics-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Standard_normal_deviate
Instatistics, astudentized residualis thedimensionless ratioresulting from the division of aresidualby anestimateof itsstandard deviation, both expressed in the sameunits. It is a form of aStudent'st-statistic, with the estimate of error varying between points. This is an important technique in the detection ofoutliers. It is among several named in honor ofWilliam Sealey Gosset, who wrote under the pseudonym "Student" (e.g.,Student's distribution). Dividing a statistic by asample standard deviationis calledstudentizing, in analogy withstandardizingandnormalizing. The key reason for studentizing is that, inregression analysisof amultivariate distribution, the variances of theresidualsat different input variable values may differ, even if the variances of theerrorsat these different input variable values are equal. The issue is the difference betweenerrors and residuals in statistics, particularly the behavior of residuals in regressions. Consider thesimple linear regressionmodelY=α0+α1X+ε.{\displaystyle Y=\alpha _{0}+\alpha _{1}X+\varepsilon .}Given a random sample (Xi,Yi),i= 1, ...,n, each pair (Xi,Yi) satisfiesYi=α0+α1Xi+εi,{\displaystyle Y_{i}=\alpha _{0}+\alpha _{1}X_{i}+\varepsilon _{i},}where theerrorsεi{\displaystyle \varepsilon _{i}}areindependentand all have the same varianceσ2{\displaystyle \sigma ^{2}}. Theresidualsare not the true errors, butestimates, based on the observable data. When the method of least squares is used to estimateα0{\displaystyle \alpha _{0}}andα1{\displaystyle \alpha _{1}}, then the residualsε^{\displaystyle {\widehat {\varepsilon \,}}}, unlike the errorsε{\displaystyle \varepsilon }, cannot be independent since they satisfy the two constraints∑i=1nε^i=0{\displaystyle \sum _{i=1}^{n}{\widehat {\varepsilon \,}}_{i}=0}and∑i=1nε^iXi=0.{\displaystyle \sum _{i=1}^{n}{\widehat {\varepsilon \,}}_{i}X_{i}=0.}(Hereεiis theith error, andε^i{\displaystyle {\widehat {\varepsilon \,}}_{i}}is theith residual.) The residuals, unlike the errors,do not all have the same variance:the variance decreases as the correspondingx-value gets farther from the averagex-value. This is not a feature of the data itself, but of the regression better fitting values at the ends of the domain.
It is also reflected in theinfluence functionsof various data points on theregression coefficients: endpoints have more influence. This can also be seen because the residuals at endpoints depend greatly on the slope of a fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact thatthe variances of the residuals differ,even thoughthe variances of the true errors are all equalto each other, is theprincipal reasonfor the need for studentization. It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is thatregressionsyielddifferent residual distributionsatdifferent data points,unlikepointestimatorsofunivariate distributions, which share acommon distributionfor residuals. For this simple model, thedesign matrixisX=[1X11X2⋮⋮1Xn]{\displaystyle \mathbf {X} ={\begin{bmatrix}1&X_{1}\\1&X_{2}\\\vdots &\vdots \\1&X_{n}\end{bmatrix}}}and thehat matrixHis the matrix of theorthogonal projectiononto the column space of the design matrix:H=X(XTX)−1XT.{\displaystyle \mathbf {H} =\mathbf {X} \left(\mathbf {X} ^{\mathsf {T}}\mathbf {X} \right)^{-1}\mathbf {X} ^{\mathsf {T}}.}Theleveragehiiis theith diagonal entry in the hat matrix. The variance of theith residual isvar⁡(ε^i)=σ2(1−hii).{\displaystyle \operatorname {var} \left({\widehat {\varepsilon \,}}_{i}\right)=\sigma ^{2}\left(1-h_{ii}\right).}In case the design matrixXhas only two columns (as in the example above), this is equal tovar⁡(ε^i)=σ2(1−1n−(Xi−X¯)2∑j=1n(Xj−X¯)2).{\displaystyle \operatorname {var} \left({\widehat {\varepsilon \,}}_{i}\right)=\sigma ^{2}\left(1-{\frac {1}{n}}-{\frac {\left(X_{i}-{\bar {X}}\right)^{2}}{\sum _{j=1}^{n}\left(X_{j}-{\bar {X}}\right)^{2}}}\right).}In the case of anarithmetic mean, the design matrixXhas only one column (avector of ones), and this is simply:var⁡(ε^i)=σ2(1−1n).{\displaystyle \operatorname {var} \left({\widehat {\varepsilon \,}}_{i}\right)=\sigma ^{2}\left(1-{\frac {1}{n}}\right).}Given the definitions above, theStudentized residualis thenti=ε^iσ^1−hii,{\displaystyle t_{i}={\frac {{\widehat {\varepsilon \,}}_{i}}{{\widehat {\sigma }}{\sqrt {1-h_{ii}}}}},}wherehiiis theleverage, andσ^{\displaystyle {\widehat {\sigma }}}is an appropriate estimate ofσ(see below). In the case of a mean, this is equal to:ti=ε^iσ^(n−1)/n.{\displaystyle t_{i}={\frac {{\widehat {\varepsilon \,}}_{i}}{{\widehat {\sigma }}{\sqrt {(n-1)/n}}}}.}The usual estimate ofσ2is theinternally studentizedresidualσ^2=1n−m∑j=1nε^j2,{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n-m}}\sum _{j=1}^{n}{\widehat {\varepsilon \,}}_{j}^{\,2},}wheremis the number of parameters in the model (2 in our example). But if theith case is suspected of being improbably large, then it would also not be normally distributed. Hence it is prudent to exclude theith observation from the process of estimating the variance when one is considering whether theith case may be an outlier, and instead use theexternally studentizedresidual, which is based on all the residualsexceptthe suspectith residual:σ^(i)2=1n−m−1∑j=1,j≠inε^j2.{\displaystyle {\widehat {\sigma }}_{(i)}^{2}={\frac {1}{n-m-1}}\sum _{j=1,\,j\neq i}^{n}{\widehat {\varepsilon \,}}_{j}^{\,2}.}
The notationε^j2(j≠i){\displaystyle {\widehat {\varepsilon \,}}_{j}^{\,2}(j\neq i)}is to emphasize that, for suspecti, the residuals are computed with theith case excluded. If the estimateσ2includestheith case, then it is called theinternally studentizedresidual,ti{\displaystyle t_{i}}(also known as thestandardized residual[1]). If the estimateσ^(i)2{\displaystyle {\widehat {\sigma }}_{(i)}^{2}}is used instead,excludingtheith case, then it is called theexternally studentized,ti(i){\displaystyle t_{i(i)}}. If the errors are independent andnormally distributedwithexpected value0 and varianceσ2, then theprobability distributionof theith externally studentized residualti(i){\displaystyle t_{i(i)}}is aStudent's t-distributionwithn−m− 1degrees of freedom, and can range from−∞{\displaystyle \scriptstyle -\infty }to+∞{\displaystyle \scriptstyle +\infty }. On the other hand, the internally studentized residuals are in the range0±ν{\displaystyle 0\,\pm \,{\sqrt {\nu }}}, whereν=n−mis the number of residual degrees of freedom. Iftirepresents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then:[2]ti=νtt2+ν−1,{\displaystyle t_{i}={\sqrt {\nu }}\,{\frac {t}{\sqrt {t^{2}+\nu -1}}},}wheretis a random variable distributed asStudent's t-distributionwithν− 1 degrees of freedom. In fact, this implies thatti2/νfollows thebeta distributionB(1/2,(ν− 1)/2). The distribution above is sometimes referred to as thetau distribution;[2]it was first derived by Thompson in 1935.[3] Whenν= 3, the internally studentized residuals areuniformly distributedbetween−3{\displaystyle \scriptstyle -{\sqrt {3}}}and+3{\displaystyle \scriptstyle +{\sqrt {3}}}. If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals doesn't apply. In this case, thetiare all either +1 or −1, with 50% chance for each.
The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all thetiof a particular experiment is 1. For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are2,−5/5,−5/5{\displaystyle {\sqrt {2}},\ -{\sqrt {5}}/5,\ -{\sqrt {5}}/5}, and the standard deviation of these is not 1. Note that any pair of studentized residualstiandtj(wherei≠j{\displaystyle i\neq j}) are not independent and identically distributed: they have the same distribution, but are not independent, because of the constraints that the residuals sum to 0 and are orthogonal to the design matrix. Many programs and statistics packages, such asR,Python, etc., include implementations of studentized residuals.
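The three-point example above can be reproduced directly; a short sketch that fits y = βx through the origin and computes the internally studentized residuals:

```python
import math

# Data from the example: fit a line through the origin to these points
xs = [1.0, 2.0, 2.0]
ys = [4.0, -1.0, -1.0]

sxx = sum(x * x for x in xs)
beta = sum(x * y for x, y in zip(xs, ys)) / sxx   # least-squares slope of y = beta x

resid = [y - beta * x for x, y in zip(xs, ys)]
leverage = [x * x / sxx for x in xs]              # h_ii for the one-column design
n, m = len(xs), 1                                 # one fitted parameter
sigma2 = sum(e * e for e in resid) / (n - m)      # internal estimate of sigma^2

t = [e / math.sqrt(sigma2 * (1 - h)) for e, h in zip(resid, leverage)]

# Matches the quoted values sqrt(2), -sqrt(5)/5, -sqrt(5)/5
assert math.isclose(t[0], math.sqrt(2), rel_tol=1e-12)
assert math.isclose(t[1], -math.sqrt(5) / 5, rel_tol=1e-12)
assert math.isclose(t[2], -math.sqrt(5) / 5, rel_tol=1e-12)
```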
https://en.wikipedia.org/wiki/Studentized_residual
The termnormal scoreis used with two different meanings instatistics. One of them relates to creating a single value which can be treated as if it had arisen from astandard normal distribution(zero mean, unit variance). The second one relates to assigning alternative values to data points within a dataset, with the broad intention of creating data values that can be interpreted as being approximations for values that might have been observed had the data arisen from a standard normal distribution.[citation needed] The first meaning is as an alternative name for thestandard scoreorz score, where values are standardised by subtracting the sample or estimated mean and dividing by the sample or other estimate of the standard deviation. Particularly in applications where the name "normal score" is used, there is usually a presumption that the value can be referred to a table of standard normal probabilities as a means of providing asignificance testof some hypothesis, such as a difference in means.[citation needed] The second meaning of normal score is associated with data values derived from theranksof the observations within the dataset. A given data point is assigned a value which is either exactly, or an approximation, to the expectation of theorder statisticof the same rank in a sample ofstandard normal random variablesof the same size as the observed data set.[1]Thus the meaning of a normal score of this type is essentially the same as arankit, although the term "rankit" is becoming obsolete. In this case the transformation creates a set of values which is matched in a certain way to what would be expected had the original set of data values arisen from a normal distribution.
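Rankit-type normal scores are commonly computed with an approximation rather than the exact expected order statistics; the sketch below uses the Blom-style formula Φ⁻¹((i − 3/8)/(n + 1/4)), which is a standard approximation assumed here rather than something specified above:

```python
from statistics import NormalDist

def approx_normal_scores(n):
    """Blom-type approximation to the expected standard normal order
    statistics (rankits) for a sample of size n."""
    inv = NormalDist().inv_cdf
    return [inv((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

scores = approx_normal_scores(5)
# The scores are increasing and antisymmetric about zero; the middle one is 0
assert all(a < b for a, b in zip(scores, scores[1:]))
assert abs(scores[2]) < 1e-12
assert all(abs(a + b) < 1e-9 for a, b in zip(scores, reversed(scores)))
```

Sorting a dataset and substituting these values for the observations, rank for rank, yields normal scores of the second kind.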
https://en.wikipedia.org/wiki/Normal_score
Aratio distribution(also known as aquotient distribution) is aprobability distributionconstructed as the distribution of theratioofrandom variableshaving two other known distributions. Given two (usuallyindependent) random variablesXandY, the distribution of the random variableZthat is formed as the ratioZ=X/Yis aratio distribution. An example is theCauchy distribution(also called thenormal ratio distribution), which comes about as the ratio of twonormally distributedvariables with zero mean. Two other distributions often used in test-statistics are also ratio distributions: thet-distributionarises from aGaussianrandom variable divided by an independentchi-distributedrandom variable, while theF-distributionoriginates from the ratio of two independentchi-squared distributedrandom variables. More general ratio distributions have been considered in the literature.[1][2][3][4][5][6][7][8][9] Often the ratio distributions areheavy-tailed, and it may be difficult to work with such distributions and develop an associatedstatistical test. A method based on themedianhas been suggested as a "work-around".[10] The ratio is one type of algebra for random variables: Related to the ratio distribution are theproduct distribution,sum distributionanddifference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described inMelvin D. Springer's book from 1979The Algebra of Random Variables.[8] The algebraic rules known with ordinary numbers do not apply for the algebra of random variables. For example, if a product isC = ABand a ratio isD=C/Ait does not necessarily mean that the distributions ofDandBare the same. 
Indeed, a peculiar effect is seen for theCauchy distribution: The product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution.[8]This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: Consider two Cauchy random variables,C1{\displaystyle C_{1}}andC2{\displaystyle C_{2}}each constructed from two Gaussian distributionsC1=G1/G2{\displaystyle C_{1}=G_{1}/G_{2}}andC2=G3/G4{\displaystyle C_{2}=G_{3}/G_{4}}thenC1C2=G1/G2G3/G4=G1G2⋅G4G3=C1×C3,{\displaystyle {\frac {C_{1}}{C_{2}}}={\frac {G_{1}/G_{2}}{G_{3}/G_{4}}}={\frac {G_{1}}{G_{2}}}\cdot {\frac {G_{4}}{G_{3}}}=C_{1}\times C_{3},}whereC3=G4/G3{\displaystyle C_{3}=G_{4}/G_{3}}. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions. A way of deriving the ratio distribution ofZ=X/Y{\displaystyle Z=X/Y}from the joint distribution of the two other random variablesX , Y, with joint pdfpX,Y(x,y){\displaystyle p_{X,Y}(x,y)}, is by integration of the following form[3]pZ(z)=∫−∞+∞|y|pX,Y(zy,y)dy.{\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X,Y}(zy,y)\,dy.}If the two variables are independent thenpX,Y(x,y)=pX(x)pY(y){\displaystyle p_{X,Y}(x,y)=p_{X}(x)\,p_{Y}(y)}and this becomespZ(z)=∫−∞+∞|y|pX(zy)pY(y)dy.{\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X}(zy)\,p_{Y}(y)\,dy.}This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples.
The joint pdf is

p_{X,Y}(x, y) = (1/2π) exp(−(x² + y²)/2).

Defining Z = X/Y we have

p_Z(z) = ∫_{−∞}^{+∞} |y| (1/2π) exp(−((zy)² + y²)/2) dy = (1/π) ∫_0^∞ y exp(−(1 + z²) y²/2) dy.

Using the known definite integral ∫_0^∞ x exp(−cx²) dx = 1/(2c) we get

p_Z(z) = 1/(π(1 + z²)),

which is the Cauchy distribution, or Student's t distribution with n = 1.

The Mellin transform has also been suggested for derivation of ratio distributions.[8]

In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution f_{x,y}(x,y) = f_x(x) f_y(y) which has support in the positive quadrant x, y > 0, and we wish to find the pdf of the ratio R = X/Y. The hatched volume above the line y = x/R represents the cumulative distribution of the function f_{x,y}(x,y) multiplied by the logical function X/Y ≤ R. The density is first integrated in horizontal strips; the horizontal strip at height y extends from x = 0 to x = Ry and has incremental probability f_y(y) dy ∫_0^{Ry} f_x(x) dx. Secondly, integrating the horizontal strips upward over all y yields the volume of probability above the line:

F_R(R) = ∫_0^∞ f_y(y) ( ∫_0^{Ry} f_x(x) dx ) dy.

Finally, differentiate F_R(R) with respect to R to get the pdf f_R(R). Move the differentiation inside the integral:

f_R(R) = ∫_0^∞ f_y(y) ( d/dR ∫_0^{Ry} f_x(x) dx ) dy,

and since

d/dR ∫_0^{Ry} f_x(x) dx = y f_x(Ry),

then

f_R(R) = ∫_0^∞ f_y(y) y f_x(Ry) dy.

As an example, find the pdf of the ratio R when

f_x(x) = α e^{−αx},  f_y(y) = β e^{−βy},  x, y ≥ 0.

We have

F_R(R) = ∫_0^∞ β e^{−βy} (1 − e^{−αRy}) dy = αR/(αR + β),

thus differentiation with respect to R yields the pdf of R:

f_R(R) = αβ/(αR + β)².

From Mellin transform theory, for distributions existing only on the positive half-line x ≥ 0, we have the product identity E[(UV)^p] = E[U^p] E[V^p] provided U, V are independent.
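The Gaussian-ratio derivation above can be checked numerically. The following sketch evaluates the general ratio-density integral p_Z(z) = ∫ |y| φ(zy) φ(y) dy by a midpoint rule and compares it with the Cauchy density 1/(π(1+z²)); the step size and truncation bound are illustrative choices, not part of the source.

```python
import math

# Numeric check: the ratio Z = X/Y of two independent standard normal
# variables has the Cauchy density 1/(pi*(1+z^2)). We evaluate
# p_Z(z) = integral of |y| * phi(z*y) * phi(y) dy with a midpoint rule.

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def ratio_pdf(z, h=1e-3, ymax=10.0):
    """Midpoint-rule approximation of the ratio density at z."""
    n = int(2 * ymax / h)
    total = 0.0
    for k in range(n):
        y = -ymax + (k + 0.5) * h
        total += abs(y) * phi(z * y) * phi(y) * h
    return total

for z in (0.0, 0.5, 2.0):
    assert abs(ratio_pdf(z) - 1 / (math.pi * (1 + z * z))) < 1e-5
```

The same integration scheme works for any pair of independent densities, which is why the general formula is stated before the Gaussian special case.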
For the case of a ratio of samples like E[(X/Y)^p], in order to make use of this identity it is necessary to use moments of the inverse distribution. Set 1/Y = Z such that E[(XZ)^p] = E[X^p] E[Y^{−p}]. Thus, if the moments of X^p and Y^{−p} can be determined separately, then the moments of X/Y can be found. The moments of Y^{−p} are determined from the inverse pdf of Y, often a tractable exercise. At simplest, E[Y^{−p}] = ∫_0^∞ y^{−p} f_y(y) dy.

To illustrate, let X be sampled from a standard Gamma distribution

Γ(x; α, 1) = x^{α−1} e^{−x} / Γ(α),  whose p-th moment is E[X^p] = Γ(α + p)/Γ(α).

Z = Y^{−1} is sampled from an inverse Gamma distribution with parameter β and has pdf Γ^{−1}(β) z^{−(1+β)} e^{−1/z}.
The moments of this pdf are

E[Z^p] = E[Y^{−p}] = Γ(β − p)/Γ(β),  valid for p < β.

Multiplying the corresponding moments gives

E[(X/Y)^p] = E[X^p] E[Y^{−p}] = Γ(α + p) Γ(β − p) / (Γ(α) Γ(β)).

Independently, it is known that the ratio of the two Gamma samples R = X/Y follows the Beta Prime distribution, whose moments are E[R^p] = B(α + p, β − p)/B(α, β). Substituting B(α, β) = Γ(α)Γ(β)/Γ(α + β) we have

E[R^p] = ( Γ(α + p)Γ(β − p)/Γ(α + β) ) / ( Γ(α)Γ(β)/Γ(α + β) ) = Γ(α + p)Γ(β − p)/(Γ(α)Γ(β)),

which is consistent with the product of moments above.

In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have

E(X/Y) = E(X) E(1/Y),

which, in terms of probability distributions, is equivalent to

E(X/Y) = ∫_{−∞}^{∞} x f_x(x) dx × ∫_{−∞}^{∞} y^{−1} f_y(y) dy.

Note that E(1/Y) ≠ 1/E(Y), i.e.,

∫_{−∞}^{∞} y^{−1} f_y(y) dy ≠ 1 / ∫_{−∞}^{∞} y f_y(y) dy.

The variance of a ratio of independent variables is

Var(X/Y) = E(X²) E(1/Y²) − [E(X) E(1/Y)]².

When X and Y are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting Z = X/Y = tan θ and then showing that θ has circular symmetry.
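The moment identity for the Gamma ratio can be verified against the Beta Prime density directly. The sketch below integrates r^p against the Beta Prime density f(r) = r^(α−1)(1+r)^(−α−β)/B(α, β) numerically and compares with the closed form Γ(α+p)Γ(β−p)/(Γ(α)Γ(β)); the shape parameters α = 3, β = 5 are illustrative choices.

```python
import math

# Check E[(X/Y)^p] = Gamma(a+p)Gamma(b-p)/(Gamma(a)Gamma(b)) for
# independent Gamma(a,1), Gamma(b,1) samples, by integrating r^p
# against the Beta Prime density of R = X/Y with a midpoint rule.
alpha, beta = 3.0, 5.0
g = math.gamma
B = g(alpha) * g(beta) / g(alpha + beta)   # Beta function B(alpha, beta)

def beta_prime_moment(p, h=1e-3, rmax=200.0):
    """Numerical p-th moment of the Beta Prime(alpha, beta) density."""
    total = 0.0
    for k in range(int(rmax / h)):
        r = (k + 0.5) * h
        total += r ** p * r ** (alpha - 1) * (1 + r) ** (-alpha - beta) * h
    return total / B

for p in (0.5, 1.0, 2.0):   # need p < beta for the moment to exist
    closed_form = g(alpha + p) * g(beta - p) / (g(alpha) * g(beta))
    assert abs(beta_prime_moment(p) - closed_form) < 1e-4
```

For p = 1 this reduces to the familiar mean of a Beta Prime variable, α/(β − 1) = 0.75 here.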
For a bivariate uncorrelated Gaussian distribution we have

p(x, y) = (1/2π) e^{−x²/2} e^{−y²/2} = (1/2π) e^{−r²/2},  with r² = x² + y².

If p(x, y) is a function only of r, then θ is uniformly distributed on [0, 2π] with density 1/2π, so the problem reduces to finding the probability distribution of Z under the mapping

Z = X/Y = tan θ.

We have, by conservation of probability,

p_z(z) |dz| = p_θ(θ) |dθ|,

and since dz/dθ = 1/cos²θ,

p_z(z) = (1/2π) cos²θ.

Setting cos²θ = 1/(1 + tan²θ) = 1/(1 + z²) we get

p_z(z) = 1/(2π(1 + z²)).

There is a spurious factor of 2 here. In fact, two values of θ spaced by π map onto the same value of z; the density is therefore doubled, and the final result is

p_z(z) = 1/(π(1 + z²)),  −∞ < z < ∞.

When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley.[6] The trigonometric method for a ratio does, however, extend to radial distributions like bivariate normals or a bivariate Student t in which the density depends only on radius r = √(x² + y²). It does not extend to the ratio of two independent Student t distributions, which give the Cauchy ratio shown in a section below for one degree of freedom.
In the absence of correlation (cor(X, Y) = 0), the probability density function of the ratio Z = X/Y of two normal variables X = N(μ_X, σ_X²) and Y = N(μ_Y, σ_Y²) is known exactly in closed form, derived in several sources.[6] The expression becomes more complicated when the variables X and Y are correlated. If μ_x = μ_y = 0 but σ_X ≠ σ_Y and ρ ≠ 0, the more general Cauchy distribution is obtained:

p_Z(z) = (1/π) β / ((z − α)² + β²),

where ρ is the correlation coefficient between X and Y, and

α = ρ σ_x/σ_y,  β = (σ_x/σ_y) √(1 − ρ²).

The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.[9]

This was shown in Springer 1979, problem 4.28. A transformation to the log domain was suggested by Katz (1978) (see the binomial section below): writing the ratio as a perturbation about the means, taking logs and using the expansion log_e(1 + δ) = δ − δ²/2 + δ³/3 + ⋯ shows that the log of the ratio is asymptotically normal.

Alternatively, Geary (1930) suggested that

t = (μ_y z − μ_x) / √(σ_y² z² − 2ρ σ_x σ_y z + σ_x²)

has approximately a standard Gaussian distribution.[1] This transformation has been called the Geary–Hinkley transformation;[7] the approximation is good if Y is unlikely to assume negative values, basically μ_y > 3σ_y. This is developed by Dale (Springer 1979, problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio z could be transformed into a near-Gaussian form and developed an approximation for t dependent on the probability of negative denominator values x + μ_x < 0 being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when combining modern math packages with verbal conditions in the older literature.
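The Geary–Hinkley transformation is easy to sketch in code. The function below implements the standard form of the transform, t(z) = (μ_y z − μ_x)/√(σ_y² z² − 2ρσ_xσ_y z + σ_x²); the particular parameter values used in the check are illustrative assumptions.

```python
import math

# Sketch of the Geary-Hinkley transformation: for Z = X/Y with
# X ~ N(mu_x, sx^2), Y ~ N(mu_y, sy^2) and correlation rho, t(z) is
# approximately standard normal when Y is unlikely to be negative.

def geary_hinkley(z, mu_x, mu_y, sx, sy, rho=0.0):
    return (mu_y * z - mu_x) / math.sqrt(
        sy ** 2 * z ** 2 - 2 * rho * sx * sy * z + sx ** 2
    )

# At the ratio of the means the transform is exactly zero, so the
# approximate median of Z is mu_x/mu_y.
t0 = geary_hinkley(2.0 / 4.0, mu_x=2.0, mu_y=4.0, sx=0.3, sy=0.4, rho=0.2)
assert abs(t0) < 1e-12
```

Approximate tail probabilities for Z then follow from the standard normal CDF applied to t(z), under the μ_y > 3σ_y condition stated above.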
Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version.

Let the ratio be

z = (x + μ_x)/(y + μ_y),

in which x, y are zero-mean correlated normal variables with variances σ_x², σ_y² and X, Y have means μ_x, μ_y. Write x′ = x − ρy σ_x/σ_y, such that x′, y become uncorrelated and x′ has standard deviation

σ_x′ = σ_x √(1 − ρ²).

The ratio

z = (x′ + ρy σ_x/σ_y + μ_x)/(y + μ_y)

is invariant under this transformation and retains the same pdf. The y term in the numerator is made separable by expanding

x′ + ρy σ_x/σ_y + μ_x = x′ + μ_x − ρμ_y σ_x/σ_y + ρ(y + μ_y) σ_x/σ_y

to get

z = (x′ + μ_x′)/(y + μ_y) + ρ σ_x/σ_y,

in which μ′_x = μ_x − ρ μ_y σ_x/σ_y, and z has now become a ratio of uncorrelated non-central normal samples with an invariant z-offset (this is not formally proven, though it appears to have been used by Geary).

Finally, to be explicit: the pdf of the ratio z for correlated variables is found by inputting the modified parameters σ_x′, μ_x′, σ_y, μ_y and ρ′ = 0 into the Hinkley equation above, which returns the pdf for the correlated ratio with a constant offset −ρ σ_x/σ_y on z.
The figures above show an example of a positively correlated ratio with σ_x = σ_y = 1, μ_x = 0, μ_y = 0.5, ρ = 0.975, in which the shaded wedges represent the increment of area selected by a given ratio x/y ∈ [r, r + δ], which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio z = x/y ≈ 1 the wedge has almost bypassed the main distribution mass altogether, and this explains the local minimum in the theoretical pdf p_Z(x/y). Conversely, as x/y moves either toward or away from one, the wedge spans more of the central mass, accumulating a higher probability.

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al.[13] and has since been extended to the nonzero-mean and nonsymmetric case.[14] In the correlated zero-mean case the joint distribution of x, y involves the Hermitian transpose (⋅)^H of the variable vector and the covariance matrix of the pair. The PDF of Z = X/Y is found in closed form, and in the usual event that σ_x = σ_y it simplifies further. Further closed-form results for the CDF are also given.

The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of ρ = 0.7 exp(iπ/4). The pdf peak occurs at roughly the complex conjugate of a scaled-down ρ.

The ratio of independent or correlated log-normals is log-normal.
This follows because, if X_1 and X_2 are log-normally distributed, then ln(X_1) and ln(X_2) are normally distributed. If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.[note 1]

This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of X_1 and X_2 is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when X_i is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.[15]

With two independent random variables following a uniform distribution on [0, 1], the ratio distribution becomes

p_Z(z) = 1/2 for 0 < z < 1, and p_Z(z) = 1/(2z²) for z ≥ 1.

If two independent random variables, X and Y, each follow a Cauchy distribution with median equal to zero and shape factor a, i.e. with density a/(π(a² + x²)), then the ratio distribution for the random variable Z = X/Y has a known closed form.[16] This distribution does not depend on a, and the result stated by Springer[8] (p. 158, Question 4.6) is not correct.
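For the uniform case, the standard result is that Z = X/Y with X, Y independent U(0,1) has CDF P(Z ≤ t) = t/2 for 0 ≤ t ≤ 1 and 1 − 1/(2t) for t > 1. The sketch below checks this by computing P(X/Y ≤ t) = ∫_0^1 min(1, ty) dy with a midpoint rule.

```python
# Check the ratio CDF for two independent U(0,1) variables:
# P(Z <= t) = t/2 for t <= 1, and 1 - 1/(2t) for t > 1.
# Since P(X/Y <= t) = integral over y in (0,1) of min(1, t*y) dy,
# a one-dimensional midpoint rule suffices.

def ratio_cdf_numeric(t, n=200_000):
    h = 1.0 / n
    return sum(min(1.0, t * (k + 0.5) * h) * h for k in range(n))

def ratio_cdf_closed(t):
    return t / 2 if t <= 1 else 1 - 1 / (2 * t)

for t in (0.25, 1.0, 2.0, 8.0):
    assert abs(ratio_cdf_numeric(t) - ratio_cdf_closed(t)) < 1e-6
```

Differentiating the closed-form CDF recovers the two-piece density quoted above.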
The ratio distribution is similar to, but not the same as, the product distribution of the random variable W = XY. More generally, if two independent random variables X and Y each follow a Cauchy distribution with median equal to zero and shape factors a and b respectively, then both the product and the ratio distributions have closed forms, and the result for the ratio distribution can be obtained from the product distribution by replacing b with 1/b.

If X has a standard normal distribution and Y has a standard uniform distribution, then Z = X/Y has a distribution known as the slash distribution, with probability density function

p_Z(z) = [φ(0) − φ(z)]/z² for z ≠ 0, and p_Z(0) = φ(0)/2,

where φ(z) is the probability density function of the standard normal distribution.[17]

Let G be a normal(0,1) distribution, and let Y and Z be chi-squared distributions with m and n degrees of freedom respectively, all independent, with

f_χ(x, k) = x^{k/2 − 1} e^{−x/2} / (2^{k/2} Γ(k/2)).
Then G/√(Y/m) follows Student's t distribution with m degrees of freedom, and (Y/m)/(Z/n) follows the F distribution with m and n degrees of freedom.

If V_1 ∼ χ′²_{k_1}(λ), a noncentral chi-squared distribution, and V_2 ∼ χ′²_{k_2}(0), and V_1 is independent of V_2, then (V_1/k_1)/(V_2/k_2) follows a noncentral F distribution.

(m/n) F′_{m,n} = β′(m/2, n/2), or F′_{m,n} = β′(m/2, n/2, 1, n/m), defines F′_{m,n}, Fisher's F density distribution, the PDF of the ratio of two chi-squares with m, n degrees of freedom. The CDF of the Fisher density, found in F-tables, is defined in the beta prime distribution article. If we enter an F-test table with m = 3, n = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral

∫_{6.59}^∞ F(x; 3, 4) dx = 0.05.

For gamma distributions U and V with arbitrary shape parameters α_1 and α_2 and their scale parameters both set to unity, that is, U ∼ Γ(α_1, 1), V ∼ Γ(α_2, 1), where Γ(x; α, 1) = x^{α−1} e^{−x}/Γ(α), then

U/V ∼ β′(α_1, α_2).

If U ∼ Γ(x; α, 1), then θU ∼ Γ(x; α, θ) = x^{α−1} e^{−x/θ}/(θ^α Γ(α)). Note that here θ is a scale parameter, rather than a rate parameter.
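The quoted F-table value can be reproduced numerically. The sketch below integrates the standard F(m, n) density with Simpson's rule and confirms that the right tail beyond 6.59 carries about 5% of the probability for m = 3, n = 4; the truncation point is an illustrative choice at which the remaining tail is negligible.

```python
import math

# Sanity check of the F-table value quoted above: for m = 3, n = 4
# degrees of freedom, P(F > 6.59) is approximately 0.05.
m, n = 3, 4
B = math.gamma(m / 2) * math.gamma(n / 2) / math.gamma((m + n) / 2)

def f_pdf(x):
    """Standard F(m, n) probability density."""
    return math.sqrt((m * x) ** m * n ** n / (m * x + n) ** (m + n)) / (x * B)

def tail(a, b=500.0, steps=200_000):
    """Simpson's rule for the integral of f_pdf over [a, b]."""
    h = (b - a) / steps
    s = f_pdf(a) + f_pdf(b)
    for k in range(1, steps):
        s += f_pdf(a + k * h) * (4 if k % 2 else 2)
    return s * h / 3

assert abs(tail(6.59) - 0.05) < 0.001
```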
If U ∼ Γ(α_1, θ_1), V ∼ Γ(α_2, θ_2), then by rescaling the θ parameter to unity we have

U/V = (θ_1/θ_2) · (U/θ_1)/(V/θ_2) ∼ (θ_1/θ_2) β′(α_1, α_2).

Thus

U/V ∼ β′(α_1, α_2, 1, θ_1/θ_2),

in which β′(α, β, p, q) represents the generalised beta prime distribution. In the foregoing it is apparent that if X ∼ β′(α_1, α_2, 1, 1) ≡ β′(α_1, α_2), then θX ∼ β′(α_1, α_2, 1, θ). More explicitly, if U ∼ Γ(α_1, θ_1) and V ∼ Γ(α_2, θ_2), then U/V follows the generalised beta prime distribution with scale factor θ_1/θ_2.

If X, Y are independent samples from the Rayleigh distribution

f_r(r) = (r/σ²) e^{−r²/(2σ²)},  r ≥ 0,

the ratio Z = X/Y follows the distribution[18]

f_z(z) = 2z/(1 + z²)²,  z ≥ 0,

and has cdf

F_z(z) = z²/(1 + z²) = 1 − 1/(1 + z²),  z ≥ 0.

The Rayleigh distribution has scaling as its only parameter.
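The Rayleigh-ratio result (density 2z/(1+z²)², CDF z²/(1+z²) for identically scaled, independent Rayleigh variables — a standard result) is easy to verify: integrating the density numerically should recover the CDF.

```python
# Check that the Rayleigh-ratio density f(z) = 2z/(1+z^2)^2 integrates
# to the cdf F(z) = z^2/(1+z^2), using a midpoint rule.

def cdf_numeric(t, n=100_000):
    h = t / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * h
        total += 2 * z / (1 + z * z) ** 2 * h
    return total

for t in (0.5, 1.0, 3.0):
    assert abs(cdf_numeric(t) - t * t / (1 + t * t)) < 1e-7
```

Note that the scale parameter σ cancels in the ratio, which is why neither formula involves it.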
The distribution of Z = αX/Y follows

f_z(z) = 2α² z/(α² + z²)²,  z ≥ 0,

and has cdf

F_z(z) = z²/(α² + z²),  z ≥ 0.

The generalized gamma distribution is

f(x; a, d, p) = (p/a^d) x^{d−1} e^{−(x/a)^p} / Γ(d/p),  x ≥ 0,

which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here a is a scale parameter, rather than a rate parameter; d is a shape parameter.

In the ratios above, the Gamma samples U, V may have differing shape parameters α_1, α_2 but must be drawn from the same distribution

x^{α−1} e^{−x/θ} / (θ^α Γ(α))

with equal scaling θ. In situations where U and V are differently scaled, a variable transformation allows the modified random ratio pdf to be determined. Let X = U/(U + V) = 1/(1 + B), where U ∼ Γ(α_1, θ), V ∼ Γ(α_2, θ), θ arbitrary and, from above, X ∼ Beta(α_1, α_2), B = V/U ∼ Beta′(α_2, α_1).
Rescale V arbitrarily, defining

Y ∼ U/(U + φV) = 1/(1 + φB),  0 ≤ φ ≤ ∞.

We have B = (1 − X)/X, and substitution into Y gives

Y = X/(φ + (1 − φ)X),  dY/dX = φ/(φ + (1 − φ)X)².

Transforming X to Y gives

f_Y(Y) = f_X(X)/|dY/dX| = β(X, α_1, α_2) / ( φ/[φ + (1 − φ)X]² ).

Noting X = φY/(1 − (1 − φ)Y), we finally have

f_Y(Y, φ) = φ/[1 − (1 − φ)Y]² · β( φY/(1 − (1 − φ)Y), α_1, α_2 ),  0 ≤ Y ≤ 1.

Thus, if U ∼ Γ(α_1, θ_1) and V ∼ Γ(α_2, θ_2), then Y = U/(U + V) is distributed as f_Y(Y, φ) with φ = θ_2/θ_1.

The distribution of Y is limited here to the interval [0, 1]. It can be generalized by scaling such that if Y ∼ f_Y(Y, φ) then ΘY ∼ f_Y(Y, φ, Θ), where

f_Y(Y, φ, Θ) = (φ/Θ)/[1 − (1 − φ)Y/Θ]² · β( (φY/Θ)/(1 − (1 − φ)Y/Θ), α_1, α_2 ),  0 ≤ Y ≤ Θ.

Though not ratio distributions of two variables, several identities for one variable are useful here; combining them yields the following corollary: if U ∼ Γ(α, 1) and V ∼ Γ(β, 1), then U/V ∼ β′(α, β). Further results can be found in the Inverse distribution article.

This result was derived by Katz et al.[20] Suppose X ∼ Binomial(n, p_1) and Y ∼ Binomial(m, p_2), with X and Y independent.
Let T = (X/n)/(Y/m). Then log(T) is approximately normally distributed with mean log(p_1/p_2) and variance

((1/p_1) − 1)/n + ((1/p_2) − 1)/m.

The binomial ratio distribution is of significance in clinical trials: if the distribution of T is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.[citation needed]

In the ratio of Poisson variables R = X/Y there is a problem: Y is zero with finite probability, so R is undefined. To counter this, consider the truncated, or censored, ratio R′ = X/Y′ where zero samples of Y are discounted. Moreover, in many medical-type surveys there are systematic problems with the reliability of the zero samples of both X and Y, and it may be good practice to ignore the zero samples anyway.

The probability of a null Poisson sample being e^{−λ}, the generic pdf of a left-truncated Poisson distribution is

p̃(x; λ) = (1/(1 − e^{−λ})) · e^{−λ} λ^x / x!,  x ∈ 1, 2, 3, ⋯,

which sums to unity.
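The Katz et al. approximation for the binomial ratio above can be checked by simulation. The sample sizes and success probabilities below (n = m = 500, p_1 = 0.4, p_2 = 0.5) are illustrative assumptions chosen so that zero counts are vanishingly unlikely.

```python
import math
import random

# Monte-Carlo check of the Katz approximation: for independent
# X ~ Bin(n, p1), Y ~ Bin(m, p2), log((X/n)/(Y/m)) is roughly normal
# with mean log(p1/p2) and variance (1/p1 - 1)/n + (1/p2 - 1)/m.
random.seed(0)
n = m = 500
p1, p2 = 0.4, 0.5
reps = 2000

def binom(k, p):
    """Sum of k Bernoulli(p) draws (simple, not optimized)."""
    return sum(1 for _ in range(k) if random.random() < p)

logs = []
for _ in range(reps):
    x, y = binom(n, p1), binom(m, p2)
    logs.append(math.log((x / n) / (y / m)))

mean = sum(logs) / reps
var = sum((v - mean) ** 2 for v in logs) / reps
assert abs(mean - math.log(p1 / p2)) < 0.01
assert abs(var - ((1 / p1 - 1) / n + (1 / p2 - 1) / m)) < 0.001
```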
Following Cohen,[21] for n independent trials the multidimensional truncated pdf is

p(x_1, …, x_n; λ) = ∏_{i=1}^n (1/(1 − e^{−λ})) e^{−λ} λ^{x_i} / x_i!,

and the log likelihood becomes

L = −n ln(1 − e^{−λ}) − nλ + ln(λ) ∑_{i=1}^n x_i − ∑_{i=1}^n ln(x_i!).

On differentiation we get

dL/dλ = n ( −e^{−λ}/(1 − e^{−λ}) − 1 + x̄/λ ),

and setting this to zero gives the maximum likelihood estimate λ̂_ML:

λ̂_ML / (1 − e^{−λ̂_ML}) = x̄.

Note that as λ̂ → 0 then x̄ → 1, so the truncated maximum likelihood λ estimate, though correct for both truncated and untruncated distributions, gives a truncated mean x̄ value which is highly biased relative to the untruncated one.
Nevertheless, it appears that x̄ is a sufficient statistic for λ, since λ̂_ML depends on the data only through the sample mean

x̄ = (1/n) ∑_{i=1}^n x_i

in the previous equation, which is consistent with the methodology of the conventional Poisson distribution. Absent any closed-form solution, an approximate reversion for the truncated λ is valid over the whole range 0 ≤ λ ≤ ∞; 1 ≤ x̄ ≤ ∞, which compares with the non-truncated version, which is simply λ̂ = x̄.
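In practice the likelihood equation x̄ = λ/(1 − e^(−λ)) — the standard zero-truncated Poisson MLE condition — is solved numerically rather than by an approximate reversion. A minimal sketch using bisection, with a round-trip check on an assumed true λ of 2:

```python
import math

# Solve the truncated-Poisson likelihood equation xbar = L/(1 - e^-L)
# for L by bisection; the left-hand function is strictly increasing,
# so bisection converges. No closed form exists.

def truncated_mean(lam):
    """Mean of the zero-truncated Poisson(lam) distribution."""
    return lam / (1 - math.exp(-lam))

def solve_lambda(xbar, lo=1e-9, hi=100.0, iters=200):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if truncated_mean(mid) < xbar:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam_true = 2.0
xbar = truncated_mean(lam_true)          # about 2.313
assert abs(solve_lambda(xbar) - lam_true) < 1e-6
```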
Taking the ratio R = λ̂_X/λ̂_Y is a valid operation even though λ̂_X may use a non-truncated model while λ̂_Y has a left-truncated one.

The asymptotic large-n variance of λ̂ (and Cramér–Rao bound) is

Var(λ̂) ≥ −( E[ d²L/dλ² ] )^{−1},

in which substituting L gives

d²L/dλ² = −n ( x̄/λ² − e^{−λ}/(1 − e^{−λ})² ).

Then substituting x̄ from the equation above, we get Cohen's variance estimate

Var(λ̂) = (λ/n) (1 − e^{−λ})² / (1 − (λ + 1) e^{−λ}).

The variance of the point estimate of the mean λ, on the basis of n trials, decreases asymptotically to zero as n increases to infinity. For small λ it diverges from the truncated pdf variance given by Springael,[22] for example, who quotes a different variance expression for n samples in the left-truncated pdf shown at the top of this section.
Cohen showed that the variance of the estimate relative to the variance of the pdf, Var(λ̂)/Var(λ), ranges from 1 for large λ (100% efficient) up to 2 as λ approaches zero (50% efficient). These mean and variance parameter estimates, together with parallel estimates for X, can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is given by Dietz and Bohning,[23] and there is a Zero-truncated Poisson distribution Wikipedia entry.

This distribution is the ratio of two Laplace distributions.[24] Let X and Y be standard Laplace identically distributed random variables and let z = X/Y. Then the probability distribution of z is

f(z) = 1 / (2(1 + |z|)²).

Let the mean of X and Y be a. Then the standard double Lomax distribution is symmetric around a. This distribution has an infinite mean and variance. If Z has a standard double Lomax distribution, then 1/Z also has a standard double Lomax distribution. The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution. For 0 < a < 1, the a-th moment exists and can be expressed in terms of the gamma function Γ.

Ratio distributions also appear in multivariate analysis.[25] If the random matrices X and Y follow a Wishart distribution, then the ratio of the determinants

|X| / |Y|

is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions, then the ratio has a Wilks' lambda distribution.
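The double-Lomax density for the ratio of two independent standard Laplace variables, f(z) = 1/(2(1+|z|)²) (the standard zero-mean result), can be checked with the same ratio-density integral used for the Gaussian case earlier:

```python
import math

# Check numerically that Z = X/Y for independent standard Laplace
# variables (density e^(-|x|)/2) has density 1/(2*(1+|z|)^2),
# via p_Z(z) = integral of |y| * f(z*y) * f(y) dy (midpoint rule).

def laplace(x):
    return 0.5 * math.exp(-abs(x))

def ratio_pdf(z, h=2e-4, ymax=40.0):
    n = int(2 * ymax / h)
    total = 0.0
    for k in range(n):
        y = -ymax + (k + 0.5) * h
        total += abs(y) * laplace(z * y) * laplace(y) * h
    return total

for z in (0.0, 1.0, -3.0):
    assert abs(ratio_pdf(z) - 1 / (2 * (1 + abs(z)) ** 2)) < 1e-5
```

The symmetry f(z) = f(−z) and the closure of the family under z ↦ 1/z mentioned above are both visible in the closed form.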
In relation to Wishart matrix distributions, if S∼Wp(Σ,ν+1){\displaystyle S\sim W_{p}(\Sigma ,\nu +1)} is a sample Wishart matrix and the vector V{\displaystyle V} is arbitrary but statistically independent, corollary 3.2.9 of Muirhead[26] states The discrepancy of one in the sample numbers arises from estimation of the sample mean when forming the sample covariance, a consequence of Cochran's theorem. Similarly which is Theorem 3.2.12 of Muirhead.[26]
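The claim above that the ratio z = X/Y of two independent standard Laplace variables follows the standard double Lomax distribution can be checked by simulation. The sketch below is illustrative (the inverse-CDF sampler, the seed, and the sample size are choices of this example, not from the source); it exploits the fact that |X/Y| < 1 exactly when |X| < |Y|, which for i.i.d. X, Y has probability 1/2.

```python
import math
import random

def laplace_sample(rng):
    # inverse-CDF sampling of a standard (mean-0, scale-1) Laplace variate
    u = rng.random()
    while u <= 0.0:                      # avoid log(0)
        u = rng.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

rng = random.Random(0)
n = 100_000
ratios = [laplace_sample(rng) / laplace_sample(rng) for _ in range(n)]

# |X/Y| < 1 iff |X| < |Y|; for i.i.d. X and Y this has probability 1/2,
# consistent with the heavy-tailed double Lomax shape (half the mass
# inside [-1, 1], half spread over the tails).
frac_inside = sum(abs(z) < 1 for z in ratios) / n
```

The Monte Carlo estimate of P(|Z| < 1) should land very close to 0.5, while the sample mean and variance of the ratios are unstable across runs, as expected for a distribution with infinite mean and variance.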
https://en.wikipedia.org/wiki/Ratio_distribution
Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step. Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance. Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it.[1] It is also important to apply feature scaling if regularization is used as part of the loss function (so that coefficients are penalized appropriately). Empirically, feature scaling can improve the convergence speed of stochastic gradient descent. In support vector machines,[2] it can reduce the time to find support vectors. Feature scaling is also often used in applications involving distances and similarities between data points, such as clustering and similarity search. As an example, the K-means clustering algorithm is sensitive to feature scales. Also known as min-max scaling or min-max normalization, rescaling is the simplest method and consists of rescaling the range of features to [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula for a min-max of [0, 1] is given as:[3] where x{\displaystyle x} is an original value and x′{\displaystyle x'} is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds].
To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights). To rescale a range between an arbitrary set of values [a, b], the formula becomes: where a,b{\displaystyle a,b} are the min-max values. where x{\displaystyle x} is an original value, x′{\displaystyle x'} is the normalized value, and x¯=average(x){\displaystyle {\bar {x}}={\text{average}}(x)} is the mean of that feature vector. Another form of mean normalization divides by the standard deviation, which is also called standardization. In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (when subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and artificial neural networks).[4][5] The general method of calculation is to determine the distribution mean and standard deviation for each feature. Next we subtract the mean from each feature. Then we divide the values (mean is already subtracted) of each feature by its standard deviation. where x{\displaystyle x} is the original feature vector, x¯=average(x){\displaystyle {\bar {x}}={\text{average}}(x)} is the mean of that feature vector, and σ{\displaystyle \sigma } is its standard deviation. Robust scaling, also known as standardization using median and interquartile range (IQR), is designed to be robust to outliers. It scales features using the median and IQR as reference points instead of the mean and standard deviation: x′=(x−Q2(x))/(Q3(x)−Q1(x)){\displaystyle x'={\frac {x-Q_{2}(x)}{Q_{3}(x)-Q_{1}(x)}}} where Q1(x),Q2(x),Q3(x){\displaystyle Q_{1}(x),Q_{2}(x),Q_{3}(x)} are the three quartiles (25th, 50th, 75th percentile) of the feature.
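The three scaling formulas above can be sketched in a few lines of Python, using the 160–200 pound student-weight example from the text (note that statistics.quantiles uses one of several quartile conventions, so robust-scaled values may differ slightly between libraries):

```python
import statistics

weights = [160, 170, 180, 190, 200]   # student weights in pounds

# min-max rescaling to [0, 1]: x' = (x - min) / (max - min)
lo, hi = min(weights), max(weights)
minmax = [(x - lo) / (hi - lo) for x in weights]   # → [0.0, 0.25, 0.5, 0.75, 1.0]

# standardization (z-score): x' = (x - mean) / standard deviation
mu = statistics.fmean(weights)
sigma = statistics.pstdev(weights)    # population standard deviation
zscores = [(x - mu) / sigma for x in weights]

# robust scaling: x' = (x - Q2) / (Q3 - Q1)
q1, q2, q3 = statistics.quantiles(weights, n=4)
robust = [(x - q2) / (q3 - q1) for x in weights]
```

Each variant maps the median-weight student (180 pounds) to the centre of its target scale: 0.5 for min-max, 0 for both standardization and robust scaling.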
Unit vector normalization regards each individual data point as a vector and divides each by its vector norm to obtain x′=x/‖x‖{\displaystyle x'=x/\|x\|}. Any vector norm can be used, but the most common ones are the L1 norm and the L2 norm. For example, if x=(v1,v2,v3){\displaystyle x=(v_{1},v_{2},v_{3})}, then its Lp-normalized version is:(v1(|v1|p+|v2|p+|v3|p)1/p,v2(|v1|p+|v2|p+|v3|p)1/p,v3(|v1|p+|v2|p+|v3|p)1/p){\displaystyle \left({\frac {v_{1}}{(|v_{1}|^{p}+|v_{2}|^{p}+|v_{3}|^{p})^{1/p}}},{\frac {v_{2}}{(|v_{1}|^{p}+|v_{2}|^{p}+|v_{3}|^{p})^{1/p}}},{\frac {v_{3}}{(|v_{1}|^{p}+|v_{2}|^{p}+|v_{3}|^{p})^{1/p}}}\right)}
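The Lp normalization formula above can be written as a short helper (the example vector is illustrative):

```python
def lp_normalize(x, p=2):
    # divide the vector by its Lp norm so the result has unit Lp norm
    norm = sum(abs(v) ** p for v in x) ** (1.0 / p)
    return [v / norm for v in x]

v = [3.0, 4.0]
l2_unit = lp_normalize(v)        # L2 norm of v is 5.0 → [0.6, 0.8]
l1_unit = lp_normalize(v, p=1)   # L1 norm of v is 7.0 → [3/7, 4/7]
```

After L2 normalization each point lies on the unit sphere, so dot products between normalized points become cosine similarities, which is why this scaling is common in similarity search.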
https://en.wikipedia.org/wiki/Feature_scaling
In machine learning, a hyperparameter is a parameter that can be set in order to define any configurable part of a model's learning process. Hyperparameters can be classified as either model hyperparameters (such as the topology and size of a neural network) or algorithm hyperparameters (such as the learning rate and the batch size of an optimizer). These are named hyperparameters in contrast to parameters, which are characteristics that the model learns from the data. Hyperparameters are not required by every model or algorithm. Some simple algorithms such as ordinary least squares regression require none. However, the LASSO algorithm, for example, adds a regularization hyperparameter to ordinary least squares which must be set before training.[1] Even models and algorithms without a strict requirement to define hyperparameters may not produce meaningful results if these are not carefully chosen. However, optimal values for hyperparameters are not always easy to predict. Some hyperparameters may have no meaningful effect, or one important variable may be conditional upon the value of another. Often a separate process of hyperparameter tuning is needed to find a suitable combination for the data and task. As well as improving model performance, hyperparameters can be used by researchers to introduce robustness and reproducibility into their work, especially if it uses models that incorporate random number generation. The time required to train and test a model can depend upon the choice of its hyperparameters.[2] A hyperparameter is usually of continuous or integer type, leading to mixed-type optimization problems.[2] The existence of some hyperparameters is conditional upon the value of others, e.g.
the size of each hidden layer in a neural network can be conditional upon the number of layers.[2] The objective function is typically non-differentiable with respect to hyperparameters. As a result, in most instances, hyperparameters cannot be learned using gradient-based optimization methods (such as gradient descent), which are commonly employed to learn model parameters. These hyperparameters are those parameters describing a model representation that cannot be learned by common optimization methods, but nonetheless affect the loss function. An example would be the tolerance hyperparameter for errors in support vector machines. Sometimes, hyperparameters cannot be learned from the training data because they aggressively increase the capacity of a model and can push the loss function to an undesired minimum (overfitting to the data), as opposed to correctly mapping the richness of the structure in the data. For example, if we treated the degree of a polynomial equation fitting a regression model as a trainable parameter, the degree would increase until the model perfectly fit the data, yielding low training error but poor generalization performance.
Most performance variation can be attributed to just a few hyperparameters.[3][2][4] The tunability of an algorithm, hyperparameter, or interacting hyperparameters is a measure of how much performance can be gained by tuning it.[5] For an LSTM, the learning rate, followed by the network size, is its most crucial hyperparameter,[6] whereas batching and momentum have no significant effect on its performance.[7] Although some research has advocated the use of mini-batch sizes in the thousands, other work has found the best performance with mini-batch sizes between 2 and 32.[8] An inherent stochasticity in learning directly implies that the empirical hyperparameter performance is not necessarily its true performance.[2] Methods that are not robust to simple changes in hyperparameters, random seeds, or even different implementations of the same algorithm cannot be integrated into mission-critical control systems without significant simplification and robustification.[9] Reinforcement learning algorithms, in particular, require measuring their performance over a large number of random seeds, and also measuring their sensitivity to choices of hyperparameters.[9] Their evaluation with a small number of random seeds does not capture performance adequately due to high variance.[9] Some reinforcement learning methods, e.g. DDPG (Deep Deterministic Policy Gradient), are more sensitive to hyperparameter choices than others.[9] Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given test data.[2] The objective function takes a tuple of hyperparameters and returns the associated loss.[2] Typically these methods are not gradient based, and instead apply concepts from derivative-free optimization or black-box optimization.
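A minimal derivative-free search of this kind can be sketched as random search over a mixed-type hyperparameter tuple. The validation_loss function below is a hypothetical stand-in for training and scoring a model (its shape, the log-uniform learning-rate range, and the batch-size grid are all assumptions of this example):

```python
import random

def validation_loss(learning_rate, batch_size):
    # hypothetical stand-in for training a model and scoring it;
    # by construction its minimum is at learning_rate = 0.1, batch_size = 32
    return (learning_rate - 0.1) ** 2 + ((batch_size - 32) / 32) ** 2

rng = random.Random(42)
best_loss, best_cfg = float("inf"), None
for _ in range(200):
    # sample a mixed-type tuple: continuous rate, integer batch size
    lr = 10 ** rng.uniform(-4, 0)                  # log-uniform on [1e-4, 1]
    bs = rng.choice([2, 4, 8, 16, 32, 64, 128, 256])
    loss = validation_loss(lr, bs)
    if loss < best_loss:
        best_loss, best_cfg = loss, (lr, bs)
```

The search treats the objective as a black box: it never differentiates validation_loss, it only evaluates it, which is exactly why such methods handle integer-valued and conditional hyperparameters that gradient descent cannot.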
Apart from tuning hyperparameters, machine learning involves storing and organizing the parameters and results, and making sure they are reproducible.[10]In the absence of a robust infrastructure for this purpose, research code often evolves quickly and compromises essential aspects like bookkeeping andreproducibility.[11]Online collaboration platforms for machine learning go further by allowing scientists to automatically share, organize and discuss experiments, data, and algorithms.[12]Reproducibility can be particularly difficult fordeep learningmodels.[13]For example, research has shown that deep learning models depend very heavily even on therandom seedselection of therandom number generator.[14]
https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)
In numerical analysis, a quasi-Newton method is an iterative numerical method used either to find zeroes or to find local maxima and minima of functions via an iterative recurrence formula much like the one for Newton's method, except using approximations of the derivatives of the functions in place of exact derivatives. Newton's method requires the Jacobian matrix of all partial derivatives of a multivariate function when used to search for zeros, or the Hessian matrix when used for finding extrema. Quasi-Newton methods, on the other hand, can be used when the Jacobian matrices or Hessian matrices are unavailable or are impractical to compute at every iteration. Some iterative methods that reduce to Newton's method, such as sequential quadratic programming, may also be considered quasi-Newton methods. Newton's method to find zeroes of a function g{\displaystyle g} of multiple variables is given by xn+1=xn−[Jg(xn)]−1g(xn){\displaystyle x_{n+1}=x_{n}-[J_{g}(x_{n})]^{-1}g(x_{n})}, where [Jg(xn)]−1{\displaystyle [J_{g}(x_{n})]^{-1}} is the left inverse of the Jacobian matrix Jg(xn){\displaystyle J_{g}(x_{n})} of g{\displaystyle g} evaluated at xn{\displaystyle x_{n}}. Strictly speaking, any method that replaces the exact Jacobian Jg(xn){\displaystyle J_{g}(x_{n})} with an approximation is a quasi-Newton method.[1] For instance, the chord method (where Jg(xn){\displaystyle J_{g}(x_{n})} is replaced by Jg(x0){\displaystyle J_{g}(x_{0})} for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods.[2] Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require the matrix that is used to be symmetric. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes. Broyden's "good" and "bad" methods are two methods commonly used to find extrema that can also be applied to find zeroes.
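The chord method mentioned above is simple enough to sketch in one dimension: the derivative is evaluated once at the starting point and frozen for every subsequent step (the example function and starting point are illustrative):

```python
def chord_method(g, dg, x0, tol=1e-12, max_iter=200):
    # chord method: a quasi-Newton scheme where J_g(x_n) is replaced by
    # J_g(x_0), i.e. the slope computed at x0 is reused for all iterations
    slope = dg(x0)
    x = x0
    for _ in range(max_iter):
        step = g(x) / slope
        x -= step
        if abs(step) < tol:
            break
    return x

# zero of g(x) = x^2 - 2 near x0 = 1.5 is sqrt(2)
root = chord_method(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Freezing the Jacobian trades Newton's quadratic convergence for linear convergence, but each iteration is cheaper because no new derivative (and, in higher dimensions, no new matrix factorization) is needed.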
Other methods that can be used are thecolumn-updating method, theinverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3] The search for a minimum or maximum of a scalar-valued function is closely related to the search for the zeroes of thegradientof that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, ifg{\displaystyle g}is the gradient off{\displaystyle f}, then searching for the zeroes of the vector-valued functiong{\displaystyle g}corresponds to the search for the extrema of the scalar-valued functionf{\displaystyle f}; the Jacobian ofg{\displaystyle g}now becomes the Hessian off{\displaystyle f}. The main difference is thatthe Hessian matrix is a symmetric matrix, unlike the Jacobian whensearching for zeroes. Most quasi-Newton methods used in optimization exploit this symmetry. Inoptimization,quasi-Newton methods(a special case ofvariable-metric methods) are algorithms for finding localmaxima and minimaoffunctions. Quasi-Newton methods for optimization are based onNewton's methodto find thestationary pointsof a function, points where the gradient is 0. Newton's method assumes that the function can be locally approximated as aquadraticin the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and theHessian matrixof secondderivativesof the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. 
The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian. The first quasi-Newton algorithm was proposed in 1959 by William C. Davidon, a physicist working at Argonne National Laboratory: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank-one"), the BHHH method, the widespread BFGS method (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno in 1970), and its low-memory extension L-BFGS. The Broyden class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee the update matrix to maintain positive-definiteness and can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian). One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) B{\displaystyle B} does not need to be inverted. Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of B−1{\displaystyle B^{-1}} directly. As in Newton's method, one uses a second-order approximation to find the minimum of a function f(x){\displaystyle f(x)}.
TheTaylor seriesoff(x){\displaystyle f(x)}around an iterate is where (∇f{\displaystyle \nabla f}) is thegradient, andB{\displaystyle B}an approximation to theHessian matrix.[4]The gradient of this approximation (with respect toΔx{\displaystyle \Delta x}) is and setting this gradient to zero (which is the goal of optimization) provides the Newton step: The Hessian approximationB{\displaystyle B}is chosen to satisfy which is called thesecant equation(the Taylor series of the gradient itself). In more than one dimensionB{\displaystyle B}isunderdetermined. In one dimension, solving forB{\displaystyle B}and applying the Newton's step with the updated value is equivalent to thesecant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). Most methods (but with exceptions, such asBroyden's method) seek a symmetric solution (BT=B{\displaystyle B^{T}=B}); furthermore, the variants listed below can be motivated by finding an updateBk+1{\displaystyle B_{k+1}}that is as close as possible toBk{\displaystyle B_{k}}in somenorm; that is,Bk+1=argminB⁡‖B−Bk‖V{\displaystyle B_{k+1}=\operatorname {argmin} _{B}\|B-B_{k}\|_{V}}, whereV{\displaystyle V}is somepositive-definite matrixthat defines the norm. An approximate initial valueB0=βI{\displaystyle B_{0}=\beta I}is often sufficient to achieve rapid convergence, although there is no general strategy to chooseβ{\displaystyle \beta }.[5]Note thatB0{\displaystyle B_{0}}should be positive-definite. The unknownxk{\displaystyle x_{k}}is updated applying the Newton's step calculated using the current approximate Hessian matrixBk{\displaystyle B_{k}}: is used to update the approximate HessianBk+1{\displaystyle B_{k+1}}, or directly its inverseHk+1=Bk+1−1{\displaystyle H_{k+1}=B_{k+1}^{-1}}using theSherman–Morrison formula. 
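The one-dimensional equivalence noted above can be sketched directly: in one dimension the secant equation determines B uniquely, so the quasi-Newton iteration is the classical secant method applied to the gradient (the quadratic test function and starting points are illustrative):

```python
def quasi_newton_1d(grad, x0, x1, tol=1e-10, max_iter=100):
    # In one dimension the secant equation B*(x1 - x0) = g(x1) - g(x0)
    # determines B uniquely, so the quasi-Newton step reduces to the
    # classical secant method applied to the gradient.
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        B = (g1 - g0) / (x1 - x0)     # scalar secant approximation of f''
        x0, g0 = x1, g1
        x1 = x1 - g1 / B              # Newton-like step with approximate B
        g1 = grad(x1)
        if abs(g1) < tol:
            break
    return x1

# f(x) = (x - 3)**2 + 1 has gradient 2*(x - 3); the minimum is at x = 3
x_min = quasi_newton_1d(lambda x: 2 * (x - 3), 0.0, 1.0)   # → 3.0
```

For a quadratic objective the gradient is linear, so the secant slope B equals the true second derivative and the method lands on the stationary point in a single corrected step; the multidimensional updates (DFP, BFGS, SR1) generalize exactly this idea.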
The most popular update formulas are: Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2] These recursive low-rank matrix updates can also be represented as an initial matrix plus a low-rank correction. This is the compact quasi-Newton representation, which is particularly effective for constrained and/or large problems. When f{\displaystyle f} is a convex quadratic function with positive-definite Hessian B{\displaystyle B}, one would expect the matrices Hk{\displaystyle H_{k}} generated by a quasi-Newton method to converge to the inverse Hessian H=B−1{\displaystyle H=B^{-1}}. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6] Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include: Notable proprietary implementations include:
https://en.wikipedia.org/wiki/Variable_metric_methods
Automated machine learning(AutoML) is the process ofautomatingthe tasks of applyingmachine learningto real-world problems. It is the combination of automation and ML.[1] AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as anartificial intelligence-based solution to the growing challenge of applying machine learning.[2][3]The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models.[4] Common techniques used in AutoML includehyperparameter optimization,meta-learningandneural architecture search. In a typical machine learning application, practitioners have a set of input data points to be used for training.[5]The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriatedata pre-processing,feature engineering,feature extraction, andfeature selectionmethods. After these steps, practitioners must then performalgorithm selectionandhyperparameter optimizationto maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert. Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively. 
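The algorithm-selection step described above can be illustrated with a deliberately tiny AutoML-style loop: fit every candidate model on a training split, score each on a validation split, and keep the best. Everything here (the synthetic data, the two candidate models, the split sizes) is an invented toy, not a real AutoML system:

```python
import random
import statistics

# toy data: y = 2x + Gaussian noise
rng = random.Random(0)
xs = [rng.uniform(0, 10) for _ in range(200)]
ys = [2 * x + rng.gauss(0, 1) for x in xs]
train = list(zip(xs[:150], ys[:150]))
valid = list(zip(xs[150:], ys[150:]))

def fit_mean(data):
    # baseline candidate: always predict the mean of the training targets
    m = statistics.fmean(y for _, y in data)
    return lambda x: m

def fit_linear(data):
    # candidate: one-dimensional ordinary least squares, y = a*x + b
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, data):
    return statistics.fmean((model(x) - y) ** 2 for x, y in data)

# algorithm selection: fit each candidate, keep the best validation score
candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train), valid) for name, fit in candidates.items()}
best_name = min(scores, key=scores.get)
```

Real AutoML systems extend this same fit-and-compare loop with preprocessing choices, feature selection, and per-candidate hyperparameter optimization, searched jointly rather than enumerated by hand.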
AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration, and model interpretation and prediction.[6] Automated machine learning can target various stages of the machine learning process.[3] Steps to automate are: There are a number of key challenges being tackled around automated machine learning. A big issue surrounding the field is referred to as "development as a cottage industry".[8] This phrase refers to the issue in machine learning where development relies on manual decisions and biases of experts. This is contrasted with the goal of machine learning, which is to create systems that can learn and improve from their own usage and analysis of the data. In essence, the tension is between how much experts should be involved in the learning of the systems and how much freedom the machines should be given. However, experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work with knowledge of machine learning algorithms and system design.[9] Additionally, some other challenges include meta-learning challenges[10] and computational resource allocation.
https://en.wikipedia.org/wiki/AutoML
In control theory a self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfilment of an objective function; typically the maximization of efficiency or the minimization of error. Self-tuning and auto-tuning often refer to the same concept. Many software research groups consider auto-tuning the proper nomenclature. Self-tuning systems typically exhibit non-linear adaptive control. Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for non-linear processes. In the telecommunications industry, adaptive communications are often used to dynamically modify operational system parameters to maximize efficiency and robustness. Examples of self-tuning systems in computing include: Performance benefits can be substantial. Professor Jack Dongarra, an American computer scientist, claims self-tuning boosts performance, often on the order of 300%.[1] Digital self-tuning controllers are an example of self-tuning systems at the hardware level. Self-tuning systems are typically composed of four components: expectations, measurement, analysis, and actions. The expectations describe how the system should behave given exogenous conditions. Measurements gather data about the conditions and behaviour. Analysis helps determine whether the expectations are being met and which subsequent actions should be performed. Common actions are gathering more data and performing dynamic reconfiguration of the system. Self-tuning (self-adapting) systems of automatic control are systems whereby adaptation to randomly changing conditions is performed by means of automatically changing parameters or via automatically determining their optimum configuration.[2] In any non-self-tuning automatic control system there are parameters which have an influence on system stability and control quality and which can be tuned.
If these parameters remain constant whilst operating conditions (such as input signals or different characteristics of controlled objects) are substantially varying, control can degrade or even become unstable. Manual tuning is often cumbersome and sometimes impossible. In such cases, not only is using self-tuning systems technically and economically worthwhile, but it could be the only means of robust control. Self-tuning systems can be with or without parameter determination. In systems with parameter determination the required level of control quality is achieved by automatically searching for an optimum (in some sense) set of parameter values. Control quality is described by a generalised characteristic which is usually a complex and not completely known or stable function of the primary parameters. This characteristic is either measured directly or computed based on the primary parameter values. The parameters are then tentatively varied. An analysis of the control quality characteristic oscillations caused by the varying of the parameters makes it possible to figure out whether the parameters have optimum values, i.e., whether those values deliver extreme (minimum or maximum) values of the control quality characteristic. If the characteristic values deviate from an extremum, the parameters need to be varied until optimum values are found. Self-tuning systems with parameter determination can reliably operate in environments characterised by wide variations of exogenous conditions. In practice, systems with parameter determination require considerable time to find an optimum tuning, i.e. the time necessary for self-tuning in such systems is bounded from below. Self-tuning systems without parameter determination do not have this disadvantage. In such systems, some characteristic of control quality is used (e.g., the first time derivative of a controlled parameter). Automatic tuning makes sure that this characteristic is kept within given bounds.
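The measure–analyse–act loop of a system without parameter determination can be sketched in a few lines. The plant model (quality = 2 × parameter), the band [0.9, 1.1], and the fixed correction step are all invented for illustration:

```python
def self_tune(measure, param, lo, hi, step=0.1, max_iter=100):
    # closed-circuit self-tuning sketch: correct the parameter every
    # time the measured quality characteristic leaves the band [lo, hi]
    for _ in range(max_iter):
        quality = measure(param)                    # measurement
        if lo <= quality <= hi:                     # analysis: expectation met
            break
        param += step if quality < lo else -step    # corrective action
    return param

# hypothetical plant whose quality characteristic grows with the parameter
tuned = self_tune(lambda p: 2.0 * p, param=0.0, lo=0.9, hi=1.1)
```

The loop adjusts only when the characteristic leaves the allowed band, matching the description above: no model of the plant is identified, so tuning is fast but the correction policy must be chosen in advance.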
Different self-tuning systems without parameter determination exist that are based on controlling transitional processes, frequency characteristics, etc. All of those are examples of closed-circuit self-tuning systems, whereby parameters are automatically corrected every time the quality characteristic value falls outside the allowable bounds. In contrast, open-circuit self-tuning systems are systems with parametric compensation, whereby the input signal itself is controlled and system parameters are changed according to a specified procedure. This type of self-tuning can be close to instantaneous. However, in order to realise such self-tuning one needs to control the environment in which the system operates, and a good enough understanding of how the environment influences the controlled system is required. In practice self-tuning is done through the use of specialised hardware or adaptive software algorithms. Giving software the ability to self-tune (adapt):
https://en.wikipedia.org/wiki/Self-tuning
In machine learning, the vanishing gradient problem is the problem of greatly diverging gradient magnitudes between earlier and later layers encountered when training neural networks with backpropagation. In such methods, neural network weights are updated proportionally to their partial derivative of the loss function.[1] As the number of forward propagation steps in a network increases, for instance due to greater network depth, the gradients of earlier weights are calculated with increasingly many multiplications. These multiplications shrink the gradient magnitude. Consequently, the gradients of earlier weights will be exponentially smaller than the gradients of later weights. This difference in gradient magnitude might introduce instability in the training process, slow it, or halt it entirely.[1] For instance, consider the hyperbolic tangent activation function. The gradients of this function are in the range (0, 1]. The product of repeated multiplication with such gradients decreases exponentially. The inverse problem, when weight gradients at earlier layers get exponentially larger, is called the exploding gradient problem. Backpropagation allowed researchers to train supervised deep artificial neural networks from scratch, initially with little success. Hochreiter's diplom thesis of 1991 formally identified the reason for this failure in the "vanishing gradient problem",[2][3] which not only affects many-layered feedforward networks,[4] but also recurrent networks.[5][6] The latter are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time step of an input sequence processed by the network (the combination of unfolding and backpropagation is termed backpropagation through time).
This section is based on the paperOn the difficulty of training Recurrent Neural Networksby Pascanu, Mikolov, and Bengio.[6] A generic recurrent network has hidden statesh1,h2,...{\displaystyle h_{1},h_{2},...}, inputsu1,u2,...{\displaystyle u_{1},u_{2},...}, and outputsx1,x2,...{\displaystyle x_{1},x_{2},...}. Let it be parametrized byθ{\displaystyle \theta }, so that the system evolves as(ht,xt)=F(ht−1,ut,θ){\displaystyle (h_{t},x_{t})=F(h_{t-1},u_{t},\theta )}Often, the outputxt{\displaystyle x_{t}}is a function ofht{\displaystyle h_{t}}, as somext=G(ht){\displaystyle x_{t}=G(h_{t})}. The vanishing gradient problem already presents itself clearly whenxt=ht{\displaystyle x_{t}=h_{t}}, so we simplify our notation to the special case with:xt=F(xt−1,ut,θ){\displaystyle x_{t}=F(x_{t-1},u_{t},\theta )}Now, take itsdifferential:dxt=∇θF(xt−1,ut,θ)dθ+∇xF(xt−1,ut,θ)dxt−1=∇θF(xt−1,ut,θ)dθ+∇xF(xt−1,ut,θ)(∇θF(xt−2,ut−1,θ)dθ+∇xF(xt−2,ut−1,θ)dxt−2)=⋯=(∇θF(xt−1,ut,θ)+∇xF(xt−1,ut,θ)∇θF(xt−2,ut−1,θ)+⋯)dθ{\displaystyle {\begin{aligned}dx_{t}&=\nabla _{\theta }F(x_{t-1},u_{t},\theta )d\theta +\nabla _{x}F(x_{t-1},u_{t},\theta )dx_{t-1}\\&=\nabla _{\theta }F(x_{t-1},u_{t},\theta )d\theta +\nabla _{x}F(x_{t-1},u_{t},\theta )(\nabla _{\theta }F(x_{t-2},u_{t-1},\theta )d\theta +\nabla _{x}F(x_{t-2},u_{t-1},\theta )dx_{t-2})\\&=\cdots \\&=\left(\nabla _{\theta }F(x_{t-1},u_{t},\theta )+\nabla _{x}F(x_{t-1},u_{t},\theta )\nabla _{\theta }F(x_{t-2},u_{t-1},\theta )+\cdots \right)d\theta \end{aligned}}}Training the network requires us to define a loss function to be minimized. 
Let it beL(xT,u1,...,uT){\displaystyle L(x_{T},u_{1},...,u_{T})}[note 1], then minimizing it by gradient descent gives Δθ=−η⋅[∇xL(xT)(∇θF(xt−1,ut,θ)+∇xF(xt−1,ut,θ)∇θF(xt−2,ut−1,θ)+⋯)]T{\displaystyle \Delta \theta =-\eta \cdot \left[\nabla _{x}L(x_{T})\left(\nabla _{\theta }F(x_{t-1},u_{t},\theta )+\nabla _{x}F(x_{t-1},u_{t},\theta )\nabla _{\theta }F(x_{t-2},u_{t-1},\theta )+\cdots \right)\right]^{T}}whereη{\displaystyle \eta }is the learning rate. The vanishing/exploding gradient problem appears because there are repeated multiplications, of the form∇xF(xt−1,ut,θ)∇xF(xt−2,ut−1,θ)∇xF(xt−3,ut−2,θ)⋯{\displaystyle \nabla _{x}F(x_{t-1},u_{t},\theta )\nabla _{x}F(x_{t-2},u_{t-1},\theta )\nabla _{x}F(x_{t-3},u_{t-2},\theta )\cdots } For a concrete example, consider a typical recurrent network defined by xt=F(xt−1,ut,θ)=Wrecσ(xt−1)+Winut+b{\displaystyle x_{t}=F(x_{t-1},u_{t},\theta )=W_{rec}\sigma (x_{t-1})+W_{in}u_{t}+b}whereθ=(Wrec,Win){\displaystyle \theta =(W_{rec},W_{in})}is the network parameter,σ{\displaystyle \sigma }is thesigmoid activation function[note 2], applied to each vector coordinate separately, andb{\displaystyle b}is the bias vector. Then,∇xF(xt−1,ut,θ)=Wrecdiag⁡(σ′(xt−1)){\displaystyle \nabla _{x}F(x_{t-1},u_{t},\theta )=W_{rec}\mathop {diag} (\sigma '(x_{t-1}))}, and so∇xF(xt−1,ut,θ)∇xF(xt−2,ut−1,θ)⋯∇xF(xt−k,ut−k+1,θ)=Wrecdiag⁡(σ′(xt−1))Wrecdiag⁡(σ′(xt−2))⋯Wrecdiag⁡(σ′(xt−k)){\displaystyle {\begin{aligned}\nabla _{x}F(x_{t-1},u_{t},\theta )&\nabla _{x}F(x_{t-2},u_{t-1},\theta )\cdots \nabla _{x}F(x_{t-k},u_{t-k+1},\theta )\\=W_{rec}\mathop {diag} (\sigma '(x_{t-1}))&W_{rec}\mathop {diag} (\sigma '(x_{t-2}))\cdots W_{rec}\mathop {diag} (\sigma '(x_{t-k}))\end{aligned}}}Since|σ′|≤1{\displaystyle |\sigma '|\leq 1}, theoperator normof the above multiplication is bounded above by‖Wrec‖k{\displaystyle \|W_{rec}\|^{k}}. 
So if thespectral radiusofWrec{\displaystyle W_{rec}}isγ<1{\displaystyle \gamma <1}, then at largek{\displaystyle k}, the above multiplication has operator norm bounded above byγk→0{\displaystyle \gamma ^{k}\to 0}. This is the prototypical vanishing gradient problem. The effect of a vanishing gradient is that the network cannot learn long-range effects. Recall Equation (loss differential):∇θL=∇xL(xT,u1,...,uT)(∇θF(xt−1,ut,θ)+∇xF(xt−1,ut,θ)∇θF(xt−2,ut−1,θ)+⋯){\displaystyle \nabla _{\theta }L=\nabla _{x}L(x_{T},u_{1},...,u_{T})\left(\nabla _{\theta }F(x_{t-1},u_{t},\theta )+\nabla _{x}F(x_{t-1},u_{t},\theta )\nabla _{\theta }F(x_{t-2},u_{t-1},\theta )+\cdots \right)}The components of∇θF(x,u,θ){\displaystyle \nabla _{\theta }F(x,u,\theta )}are just components ofσ(x){\displaystyle \sigma (x)}andu{\displaystyle u}, so ifut,ut−1,...{\displaystyle u_{t},u_{t-1},...}are bounded, then‖∇θF(xt−k−1,ut−k,θ)‖{\displaystyle \|\nabla _{\theta }F(x_{t-k-1},u_{t-k},\theta )\|}is also bounded by someM>0{\displaystyle M>0}, and so the terms in∇θL{\displaystyle \nabla _{\theta }L}decay asMγk{\displaystyle M\gamma ^{k}}. This means that, effectively,∇θL{\displaystyle \nabla _{\theta }L}is affected only by the firstO(γ−1){\displaystyle O(\gamma ^{-1})}terms in the sum. Ifγ≥1{\displaystyle \gamma \geq 1}, the above analysis does not quite work.[note 3]For the prototypical exploding gradient problem, the next model is clearer. Following (Doya, 1993),[7]consider this one-neuron recurrent network with sigmoid activation:xt+1=(1−ϵ)xt+ϵσ(wxt+b)+ϵw′ut{\displaystyle x_{t+1}=(1-\epsilon )x_{t}+\epsilon \sigma (wx_{t}+b)+\epsilon w'u_{t}}At the smallϵ{\displaystyle \epsilon }limit, the dynamics of the network becomesdxdt=−x(t)+σ(wx(t)+b)+w′u(t){\displaystyle {\frac {dx}{dt}}=-x(t)+\sigma (wx(t)+b)+w'u(t)}Consider first theautonomouscase, withu=0{\displaystyle u=0}. Setw=5.0{\displaystyle w=5.0}, and varyb{\displaystyle b}in[−3,−2]{\displaystyle [-3,-2]}. 
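The geometric decay of this repeated Jacobian product can be checked numerically. Below is a minimal sketch, not an experiment from the paper: the state dimension, horizon, and target spectral radius are arbitrary choices, and the input and bias terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 50  # state dimension and number of unrolled steps

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Draw a recurrent weight matrix and rescale it to spectral radius 0.9 (< 1).
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Accumulate the product  W diag(sigma'(x_{t-1})) W diag(sigma'(x_{t-2})) ...
x = rng.standard_normal(n)
J = np.eye(n)
norms = []
for _ in range(k):
    s = sigmoid(x)
    J = J @ (W @ np.diag(s * (1.0 - s)))   # one factor of the Jacobian chain
    norms.append(np.linalg.norm(J, 2))     # operator (spectral) norm
    x = W @ s                              # advance the state (no input, no bias)

# The operator norm of the product shrinks roughly geometrically with k.
print(norms[0], norms[-1])
```

Since each factor has operator norm at most 0.25·‖W‖ (the sigmoid derivative never exceeds 1/4), the product collapses toward zero long before 50 steps, which is the vanishing gradient in miniature.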
As b{\displaystyle b} decreases, the system first has 1 stable point, then has 2 stable points and 1 unstable point, and finally has 1 stable point again. Explicitly, the stable points are given by (x,b)=(x,ln⁡(x1−x)−5x){\displaystyle (x,b)=\left(x,\ln \left({\frac {x}{1-x}}\right)-5x\right)}. Now consider Δx(T)Δx(0){\displaystyle {\frac {\Delta x(T)}{\Delta x(0)}}} and Δx(T)Δb{\displaystyle {\frac {\Delta x(T)}{\Delta b}}}, where T{\displaystyle T} is large enough that the system has settled into one of the stable points. If (x(0),b){\displaystyle (x(0),b)} puts the system very close to an unstable point, then a tiny variation in x(0){\displaystyle x(0)} or b{\displaystyle b} would make x(T){\displaystyle x(T)} move from one stable point to the other. This makes Δx(T)Δx(0){\displaystyle {\frac {\Delta x(T)}{\Delta x(0)}}} and Δx(T)Δb{\displaystyle {\frac {\Delta x(T)}{\Delta b}}} both very large, a case of the exploding gradient. If (x(0),b){\displaystyle (x(0),b)} puts the system far from an unstable point, then a small variation in x(0){\displaystyle x(0)} has no effect on x(T){\displaystyle x(T)}, making Δx(T)Δx(0)=0{\displaystyle {\frac {\Delta x(T)}{\Delta x(0)}}=0}, a case of the vanishing gradient. Note that in this case, Δx(T)Δb≈∂x(T)∂b=(1x(T)(1−x(T))−5)−1{\displaystyle {\frac {\Delta x(T)}{\Delta b}}\approx {\frac {\partial x(T)}{\partial b}}=\left({\frac {1}{x(T)(1-x(T))}}-5\right)^{-1}} neither decays to zero nor blows up to infinity. Indeed, it is the only well-behaved gradient, which explains why early research focused on learning or designing recurrent network systems that could perform long-range computations (such as outputting the first input seen at the very end of an episode) by shaping their stable attractors.[8] For the general case, the intuition still holds ([6] Figures 3, 4, and 5).
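The sensitivity near the unstable point can be illustrated by simulating the discrete one-neuron network directly. A minimal sketch, with w = 5 and b = −2.5 (a value in the bistable region, where x = 0.5 is an unstable fixed point); the function name, step size, step count, and starting points are illustrative choices:

```python
import math

def settle(x0, w=5.0, b=-2.5, eps=0.1, steps=2000):
    """Iterate x_{t+1} = (1-eps)*x_t + eps*sigmoid(w*x_t + b), with u = 0,
    long enough for the state to settle into a stable attractor."""
    x = x0
    for _ in range(steps):
        x = (1.0 - eps) * x + eps / (1.0 + math.exp(-(w * x + b)))
    return x

# Trajectories starting just below and just above the unstable point x = 0.5
# settle into different stable attractors: a tiny change in x(0) produces a
# large change in x(T), i.e. an exploding gradient near the basin boundary.
lo = settle(0.499)
hi = settle(0.501)
print(lo, hi)
```

Far from the boundary, both trajectories would end at the same attractor and the same perturbation would leave x(T) unchanged, the vanishing-gradient case described above.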
Continue using the above one-neuron network, fixing w=5,x(0)=0.5,u(t)=0{\displaystyle w=5,x(0)=0.5,u(t)=0}, and consider the loss function L(x(T))=(0.855−x(T))2{\displaystyle L(x(T))=(0.855-x(T))^{2}}. This produces a rather pathological loss landscape: as b{\displaystyle b} approaches −2.5{\displaystyle -2.5} from above, the loss approaches zero, but as soon as b{\displaystyle b} crosses −2.5{\displaystyle -2.5}, the attractor basin changes and the loss jumps to 0.50.[note 4] Consequently, attempting to train b{\displaystyle b} by gradient descent "hits a wall in the loss landscape" and causes an exploding gradient. A slightly more complex situation is plotted in [6], Figure 6. Several methods have been proposed to overcome this problem. For recurrent neural networks, the long short-term memory (LSTM) network was designed to solve it (Hochreiter & Schmidhuber, 1997).[9] For the exploding gradient problem, Pascanu et al. (2012)[6] recommended gradient clipping: dividing the gradient vector g{\displaystyle g} by ‖g‖/gmax{\displaystyle \|g\|/g_{max}} whenever ‖g‖>gmax{\displaystyle \|g\|>g_{max}}. This restricts the gradient vectors to a ball of radius gmax{\displaystyle g_{max}}. Batch normalization is a standard method for mitigating both the exploding and the vanishing gradient problems.[10][11] In the multi-level hierarchy of networks (Schmidhuber, 1992), levels are pre-trained one at a time through unsupervised learning and then fine-tuned through backpropagation.[12] Each level learns a compressed representation of the observations that is fed to the next level. Similar ideas have been used in feed-forward neural networks for unsupervised pre-training: the network first learns generally useful feature detectors, and is then trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al.
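The clipping rule of Pascanu et al. amounts to a few lines of code. A minimal sketch; the function name and the threshold value are illustrative:

```python
import numpy as np

def clip_gradient(g, g_max=1.0):
    """Rescale g to have norm g_max if its norm exceeds g_max,
    i.e. divide g by ||g||/g_max (Pascanu et al., 2012)."""
    norm = np.linalg.norm(g)
    if norm > g_max:
        g = g * (g_max / norm)
    return g

g = np.array([3.0, 4.0])                 # norm 5
clipped = clip_gradient(g, g_max=1.0)
print(clipped, np.linalg.norm(clipped))  # direction preserved, norm now 1
```

Clipping preserves the direction of the update while bounding its magnitude, so a single step near a "wall" in the loss landscape cannot throw the parameters arbitrarily far.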
(2006) involves learning the distribution of a high-level representation using successive layers of binary or real-valued latent variables. It uses a restricted Boltzmann machine to model each new layer of higher-level features. Each new layer guarantees an increase in the lower bound of the log likelihood of the data, thus improving the model, if trained properly. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top-level feature activations.[13] Hinton reports that his models are effective feature extractors over high-dimensional, structured data.[14] Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this "is basically what is winning many of the image recognition competitions now", but that it "does not really overcome the problem in a fundamental way",[15] since the original models tackling the vanishing gradient problem by Hinton and others were trained on a Xeon processor, not GPUs.[13] Residual connections, or skip connections, refer to the architectural motif x↦f(x)+x{\displaystyle x\mapsto f(x)+x}, where f{\displaystyle f} is an arbitrary neural network module. This gives a gradient of ∇f+I{\displaystyle \nabla f+I}, where the identity term does not suffer from the vanishing or exploding gradient.
During backpropagation, part of the gradient flows through the residual connections.[16] Concretely, let the neural network (without residual connections) befn∘fn−1∘⋯∘f1{\displaystyle f_{n}\circ f_{n-1}\circ \cdots \circ f_{1}}, then with residual connections, the gradient of output with respect to the activations at layerl{\displaystyle l}isI+∇fl+1+∇fl+2∇fl+1+⋯{\displaystyle I+\nabla f_{l+1}+\nabla f_{l+2}\nabla f_{l+1}+\cdots }. The gradient thus does not vanish in arbitrarily deep networks. Feedforward networks with residual connections can be regarded as an ensemble of relatively shallow nets. In this perspective, they resolve the vanishing gradient problem by being equivalent to ensembles of many shallow networks, for which there is no vanishing gradient problem.[17] Rectifierssuch asReLUsuffer less from the vanishing gradient problem, because they only saturate in one direction.[18] Weight initializationis another approach that has been proposed to reduce the vanishing gradient problem in deep networks. Kumar suggested that the distribution of initial weights should vary according to activation function used and proposed to initialize the weights in networks with the logistic activation function using a Gaussian distribution with a zero mean and a standard deviation of3.6/sqrt(N), whereNis the number of neurons in a layer.[19] Recently, Yilmaz and Poli[20]performed a theoretical analysis on how gradients are affected by the mean of the initial weights in deep neural networks using the logistic activation function and found that gradients do not vanish if themeanof the initial weights is set according to the formula:max(−1,-8/N). This simple strategy allows networks with 10 or 15 hidden layers to be trained very efficiently and effectively using the standardbackpropagation. 
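Kumar's initialization rule for logistic-activation networks can be sketched directly. A minimal illustration, assuming N in the formula is the layer's fan-in (that interpretation, the function name, and the layer sizes are assumptions for the example; the later mean-based rule of Yilmaz and Poli is not shown):

```python
import numpy as np

def init_logistic_weights(fan_in, fan_out, rng=None):
    """Gaussian initialization with zero mean and standard deviation
    3.6/sqrt(N), where N is taken here as the fan-in (Kumar's rule for
    networks with the logistic activation function)."""
    if rng is None:
        rng = np.random.default_rng()
    std = 3.6 / np.sqrt(fan_in)
    return rng.normal(0.0, std, size=(fan_out, fan_in))

W = init_logistic_weights(fan_in=400, fan_out=100, rng=np.random.default_rng(0))
print(W.shape, W.std())  # sample std close to 3.6/sqrt(400) = 0.18
```

Scaling the spread of the initial weights with layer width keeps the pre-activations in the region where the logistic function has a usable derivative, which is the mechanism by which such schemes delay gradient vanishing.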
Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid[21] to solve problems like image reconstruction and face localization.[citation needed] Neural networks can also be optimized by using a universal search algorithm on the space of network weights, e.g., random guessing or, more systematically, a genetic algorithm. This approach is not based on gradients and avoids the vanishing gradient problem.[22]
https://en.wikipedia.org/wiki/Vanishing_gradient_problem
Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology. AI programs are designed to simulate human perception and understanding, and these systems are capable of adapting to new information and responding to changing situations. Machine learning has been used for various scientific and commercial purposes[1] including language translation, image recognition, decision-making,[2][3] credit scoring, and e-commerce. AI technologies are now being used across various industries, transforming how they function and creating new opportunities. This article provides an overview of the applications of AI in fields such as health care, finance, and education, while also discussing the challenges and future prospects in these areas. Machine learning has been used for recommendation systems in determining which posts should show up in social media feeds.[4][5] Various types of social media analysis also make use of machine learning,[6][7] and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles.[8][9][10] AI has been used to customize shopping options and personalize offers.[11] Online gambling companies have used AI for targeting gamblers.[12] Intelligent personal assistants use AI to understand many natural language requests beyond rudimentary commands. Common examples are Apple's Siri, Amazon's Alexa, and OpenAI's ChatGPT.[13] Bing Chat has used artificial intelligence as part of its search engine.[14] Machine learning can be used to combat spam, scams, and phishing.
It can scrutinize the contents of spam and phishing attacks to attempt to identify malicious elements.[15]Some models built via machine learning algorithms have over 90% accuracy in distinguishing between spam and legitimate emails.[16]These models can be refined using new data and evolving spam tactics. Machine learning also analyzes traits such as sender behavior, email header information, and attachment types, potentially enhancing spam detection.[17] Speech translation technology attempts to convert one language's spoken words into another language. This potentially reduces language barriers in global commerce and cross-cultural exchange, enabling speakers of various languages to communicate with one another.[18] AI has been used to automatically translate spoken language and textual content in products such asMicrosoft Translator,Google Translate, andDeepL Translator.[19]Additionally, research and development are in progress to decode and conduct animal communication.[20][21] Meaning is conveyed not only by text, but also through usage and context (seesemanticsandpragmatics). As a result, the two primary categorization approaches for machine translations arestatistical machine translation(SMT) andneural machine translations(NMTs). The old method of performing translation was to use statistical methodology to forecast the best probable output with specific algorithms. However, with NMT, the approach employs dynamic algorithms to achieve better translations based on context.[22] AI has been used infacial recognition systems. Some examples are Apple'sFace IDand Android'sFace Unlock, which are used to secure mobile devices.[23] Image labeling has been used byGoogle Image Labelerto detect products in photos and to allow people to search based on a photo. 
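The content-based spam filtering described earlier can be illustrated with a toy Naive Bayes classifier. Everything below is invented for the example (the tiny corpus, the function name, the equal class priors); real filters train on far larger corpora and also use the header, sender-behavior, and attachment features mentioned above:

```python
import math
from collections import Counter

# Tiny invented training corpus: (message, label).
train = [
    ("win free prize now", "spam"),
    ("free money win big", "spam"),
    ("claim your free prize", "spam"),
    ("meeting schedule for monday", "ham"),
    ("lunch with the project team", "ham"),
    ("monday project review notes", "ham"),
]

# Per-class word counts.
counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab = set(w for c in counts.values() for w in c)

def classify(text):
    """Pick the class with the higher log-probability, using Laplace
    (add-one) smoothing so unseen words do not zero out a class."""
    scores = {}
    for label in counts:
        score = math.log(0.5)  # equal class priors in this toy corpus
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free prize waiting"))             # -> spam
print(classify("notes from the monday meeting"))  # -> ham
```

Retraining the counts on fresh messages is how such a model is "refined using new data and evolving spam tactics".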
Image labeling has also been demonstrated to generate speech to describe images to blind people.[19] Facebook's DeepFace identifies human faces in digital images.[citation needed] Social media sites and content aggregators use AI systems to build personalized news feeds by tracking users' actions and engagement history.[24] Content moderation often relies on AI to spot harmful content, though these systems struggle to understand broader context. Search engines use ranking rules to order results and infer what people want, while virtual assistants like Siri and Alexa use natural language processing to interpret user queries. Email services use machine learning to detect spam by checking message content and patterns.[25] Neural machine translation systems have become much better at translating text; they work by examining complete sentences to maintain accuracy. Computer vision systems can identify people in images and videos, which assists with tasks like sorting photos and performing security checks. When AI is used for surveillance, credit scoring, targeted advertising, and automation, it can erode privacy and concentrate power. It has also been linked to dystopian outcomes, such as autonomous systems making unaccountable decisions.[26] Games have been a major application[relevant?] of AI's capabilities since the 1950s.
In the 21st century, AIs have beaten human players in many games, including chess (Deep Blue), Jeopardy! (Watson),[27] Go (AlphaGo),[28][29][30][31][32][33][34] poker (Pluribus[35] and Cepheus),[36] E-sports (StarCraft),[37][38] and general game playing (AlphaZero[39][40][41] and MuZero).[42][43][44][45] Kuki AI is a set of chatbots and other apps which were designed for entertainment and as a marketing tool.[46][47] Character.ai is another example of a chatbot being used for recreation.[citation needed] AI has changed gaming by enabling adaptive non-player characters (NPCs).[48] Algorithms can now generate game worlds and scenarios procedurally, which reduces development costs and improves replayability. In digital art and music, AI tools help people express themselves in new ways using generative algorithms.[49] Recommendation systems on streaming platforms analyze viewing habits to suggest content, which strongly shapes how viewers experience media.[citation needed] AI for Good is a platform launched in 2017 by the International Telecommunication Union (ITU) agency of the United Nations (UN). The goal of the platform is to use AI to help achieve the UN's Sustainable Development Goals.[citation needed] The University of Southern California launched the Center for Artificial Intelligence in Society, with the goal of using AI to address problems such as homelessness. Stanford researchers use AI to analyze satellite images to identify high-poverty areas.[50] In agriculture, AI has been proposed as a way for farmers to identify areas that need irrigation, fertilization, or pesticide treatments to increase yields, thereby improving efficiency.[51] AI has been used to attempt to classify livestock pig call emotions,[20] automate greenhouses,[52] detect diseases and pests,[53] and optimize irrigation.[54] Precision farming uses machine learning and data from satellites, drones, and sensors to water, fertilize, and manage pests.
Computer vision helps monitor plant health, spot diseases, and even assist with the automated harvesting of specific crops. With predictive analytics, farmers can make better decisions by predicting weather patterns and knowing when to plant.[55] AI helps with livestock management by tracking animal health and production. These are the tools of "smart farming", which aim to make farming more efficient and sustainable.[citation needed] Cyber security companies are adopting neural networks, machine learning, and natural language processing to improve their systems.[56] Applications of AI in cyber security include: Machine learning tools analyze traffic patterns to find unusual activity that might indicate a security breach. Automated systems gather and analyze data, aiming to find new threats before they do significant damage.[62] User behavior analytics establish normal patterns for users and systems and raise alerts when a change might indicate a compromised account.[citation needed] AI also brings new challenges to cybersecurity: attackers are using the same tools to plan smarter attacks, creating an ongoing technological arms race.[citation needed] AI is reshaping teaching, raising significant issues such as access to knowledge and educational equity.
The evolution of AI in education and technology should be used to augment human capabilities rather than replace humans. UNESCO recognizes AI in education as an instrument for reaching Sustainable Development Goal 4, "Inclusive and Equitable Quality Education".[63] The World Economic Forum also stresses AI's contribution to students' overall improvement and to making teaching a more engaging process.[63] AI-driven tutoring systems, such as Khan Academy, Duolingo, and Carnegie Learning, are at the forefront of delivering personalized education.[64] These platforms leverage AI algorithms to analyze individual learning patterns, strengths, and weaknesses, enabling the customization of content and pacing to suit each student's style of learning.[64] In educational institutions, AI is increasingly used to automate routine tasks like attendance tracking, grading, and marking, which allows educators to devote more time to interactive teaching and direct student engagement.[65] Furthermore, AI tools are employed to monitor student progress, analyze learning behaviors, and predict academic challenges, facilitating timely and proactive interventions for students who may be at risk of falling behind.[65] Despite the benefits, the integration of AI in education raises significant ethical and privacy concerns, particularly regarding the handling of sensitive student data.[64] It is imperative that AI systems in education are designed and operated with a strong emphasis on transparency, security, and respect for privacy to maintain trust and uphold the integrity of educational practices.[64] Much of the regulation will be influenced by the AI Act, the world's first comprehensive AI law.[66] Intelligent tutoring systems provide personalized learning by adapting content based on how each student performs.
Automated assessment tools check student work and give fast feedback, which reduces the tutor workload.[67] Learning analytics platforms can identify struggling students earlier by looking for patterns connected to learning issues.[citation needed] Content creation tools assist teachers in making learning materials that fit each student's needs, including translating text into several languages. Even though these tools offer many benefits, there are still concerns about data privacy, and some worry they could widen existing gaps in education.[68] Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking began in 1987 when Security Pacific National Bank launched a fraud prevention task-force to counter the unauthorized use of debit cards.[69] Banks use AI to organize operations for bookkeeping, investing in stocks, and managing properties, and AI systems can respond to changes during non-business hours.[70] AI is used to combat fraud and financial crimes by monitoring behavioral patterns for any abnormal changes or anomalies.[71][72][73] The use of AI in applications such as online trading and decision-making has changed major economic theories.[74] For example, AI-based buying and selling platforms estimate personalized demand and supply curves, thus enabling individualized pricing. AI systems reduce information asymmetry in the market and thus make markets more efficient.[75] The application of artificial intelligence in the financial industry can alleviate the financing constraints of non-state-owned enterprises, especially for smaller and more innovative enterprises.[76] Algorithmic trading systems make trades much more quickly and in larger volumes than human traders. Robo-advisors provide automated investment and money-management advice at a lower cost than human advisors.
Insurance and lending companies use machine learning to assess risks and set prices.[citation needed] Financial groups use AI systems to screen transactions for money laundering by spotting unusual patterns.[77] Auditing improves with detection algorithms that flag anomalous financial transactions.[citation needed] Algorithmic trading involves using AI systems to make trading decisions at speeds orders of magnitude greater than any human is capable of, making millions of trades in a day without human intervention. Such high-frequency trading represents a fast-growing sector. Many banks, funds, and proprietary trading firms now have AI-managed portfolios. Automated trading systems are typically used by large institutional investors but also include smaller firms trading with their own AI systems.[78] Large financial institutions use AI to assist with their investment practices. BlackRock's AI engine, Aladdin, is used both within the company and by clients to help with investment decisions. Its functions include the use of natural language processing to analyze text such as news, broker reports, and social media feeds; it then gauges the sentiment on the companies mentioned and assigns a score. Banks such as UBS and Deutsche Bank use SQREEM (Sequential Quantum Reduction and Extraction Model) to mine data to develop consumer profiles and match them with wealth management products.[79] Online lender Upstart uses machine learning for underwriting.[80] ZestFinance's Zest Automated Machine Learning (ZAML) platform is used for credit underwriting. This platform uses machine learning to analyze data, including purchase transactions and how a customer fills out a form, to score borrowers. The platform is particularly useful for assigning credit scores to those with limited credit histories.[81] AI makes continuous auditing possible.
Potential benefits include reducing audit risk, increasing the level of assurance, and reducing audit duration.[82][quantify] Continuous auditing with AI allows real-time monitoring and reporting of financial activities and provides businesses with timely insights that can lead to quick decision-making.[83] AI software, such as LaundroGraph, which uses contemporary suboptimal datasets, could be used for anti-money laundering (AML).[84][85] In the 1980s, AI started to become prominent in finance as expert systems were commercialized. For example, Dupont created 100 expert systems, which helped them save almost $10 million per year.[86] One of the first systems was the Pro-trader expert system, which predicted the 87-point drop in the Dow Jones Industrial Average in 1986. "The major junctions of the system were to monitor premiums in the market, determine the optimum investment strategy, execute transactions when appropriate and modify the knowledge base through a learning mechanism."[87] Among the first expert systems to help with financial planning were PlanPower and the Client Profiling System, created by Applied Expert Systems (APEX). Launched in 1986, they helped create personal financial plans for individuals.[88] In the 1990s, AI was applied to fraud detection. In 1993, the FinCEN Artificial Intelligence System (FAIS) was launched. It was able to review over 200,000 transactions per week, and over two years it helped identify 400 potential cases of money laundering worth $1 billion.[89] These expert systems were later replaced by machine learning systems.[90] AI can enhance entrepreneurial activity, and AI is one of the most dynamic areas for start-ups, with significant venture capital flowing into AI.[91] AI facial recognition systems are used for mass surveillance, notably in China.[92][93] In 2019, Bengaluru, India deployed AI-managed traffic signals.
This system uses cameras to monitor traffic density and adjust signal timing based on the interval needed to clear traffic.[94] Various countries are deploying AI military applications.[95]The main applications enhancecommand and control, communications, sensors, integration and interoperability.[citation needed]Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous andautonomous vehicles.[95]AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions,target acquisition, coordination and deconfliction of distributedJoint Firesbetween networked combat vehicles involving manned and unmanned teams.[citation needed] AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[95][96][97][98] AI in healthcare is often used for classification, to evaluate aCT scanorelectrocardiogramor to identify high-risk patients for population health. AI is helping with the high-cost problem of dosing. One study suggested that AI could save $16 billion. In 2016, a study reported that an AI-derived formula derived the proper dose of immunosuppressant drugs to give to transplant patients.[99]Current research has indicated that non-cardiac vascular illnesses are also being treated with artificial intelligence (AI). For certain disorders, AI algorithms can aid in diagnosis, recommended treatments, outcome prediction, and patient progress tracking. As AI technology advances, it is anticipated that it will become more significant in the healthcare industry.[100] The early detection of diseases like cancer is made possible by AI algorithms, which diagnose diseases by analyzing complex sets of medical data. 
For example, the IBM Watson system might be used to comb through massive data such as medical records and clinical trials to help diagnose a problem.[101] Microsoft's AI project Hanover helps doctors choose cancer treatments from among the more than 800 medicines and vaccines.[102][103] Its goal is to memorize all the relevant papers to predict which (combinations of) drugs will be most effective for each patient. Myeloid leukemia is one target. Another study reported an AI that was as good as doctors at identifying skin cancers.[104] Another project monitors multiple high-risk patients by asking each patient questions based on data acquired from doctor/patient interactions.[105] In one study done with transfer learning, an AI diagnosed eye conditions similarly to an ophthalmologist and recommended treatment referrals.[106] Another study demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel in a manner judged better than that of a surgeon.[107] Artificial neural networks are used as clinical decision support systems for medical diagnosis,[108] such as in concept processing technology in EMR software. Other healthcare tasks thought suitable for an AI that are in development include: AI-enabled chatbots decrease the need for humans to perform basic call center tasks.[124] Machine learning in sentiment analysis can spot fatigue in order to prevent overwork.[124] Similarly, decision support systems can prevent industrial disasters and make disaster response more efficient.[125] For manual workers in material handling, predictive analytics may be used to reduce musculoskeletal injury.[126] Data collected from wearable sensors can improve workplace health surveillance, risk assessment, and research.[125][how?]
AI can auto-codeworkers' compensationclaims.[127][128]AI-enabledvirtual realitysystems can enhance safety training for hazard recognition.[125]AI can more efficiently detect accidentnear misses, which are important in reducing accident rates, but are often underreported.[129] AlphaFold 2can determine the 3D structure of a (folded) protein in hours rather than the months required by earlier automated approaches and was used to provide the likely structures of all proteins in the human body and essentially all proteins known to science (more than 200 million).[130][131][132][133] Medical imaging analysis systems can spot patterns that indicate diseases such as cancer. They can do this just as well as human experts. Predictive analytics can help identify patients with a higher risk for specific conditions. This helps in starting treatments earlier.[134] Natural language processing gets key information from electronic health records. This helps doctors make better choices. Machine learning helps find new drugs by predicting how molecules will work together. This can quicken the development of new treatments.[135]Personalized medicine uses AI to change treatments to match each patient's needs. 
Machine learning has been used for drug design.[136] It has also been used for predicting molecular properties and exploring large chemical/reaction spaces.[137] Computer-planned syntheses via computational reaction networks, described as a platform that combines "computational synthesis with AI algorithms to predict molecular properties",[138] have been used to explore the origins of life on Earth,[139] drug syntheses, and the development of routes for recycling 200 industrial waste chemicals into important drugs and agrochemicals (chemical synthesis design).[140] There is research about which types of computer-aided chemistry would benefit from machine learning.[141] It can also be used for "drug discovery and development, drug repurposing, improving pharmaceutical productivity, and clinical trials".[142] It has been used for the design of proteins with prespecified functional sites.[143][144] It has been used with databases in a 46-day process to design, synthesize and test a drug that inhibits enzymes of a particular gene, DDR1. DDR1 is involved in cancers and fibrosis, which is one reason high-quality datasets were available to enable these results.[145] There are various types of applications for machine learning in decoding human biology, such as helping to map gene expression patterns to functional activation patterns[146] or identifying functional DNA motifs.[147] It is widely used in genetic research.[148] There is also some use of machine learning in synthetic biology,[149][150] disease biology,[150] nanotechnology (e.g. nanostructured materials and bionanotechnology),[151][152] and materials science.[153][154][155] There are also prototype robot scientists, including robot-embodied ones like the two Robot Scientists, which show a form of "machine learning" not commonly associated with the term.[156][157] Similarly, there is research and development of biological "wetware computers" that can learn (e.g. for use as biosensors) and/or be implanted into an organism's body (e.g. for use in controlling prosthetics).[158][159][160] Polymer-based artificial neurons operate directly in biological environments, and biohybrid neurons are made of artificial and living components.[161][162] Moreover, if whole brain emulation is possible via scanning and replicating, at a minimum, the biochemical brain – as premised in the form of digital replication in The Age of Em, possibly using physical neural networks – it may have applications as extensive as, or more extensive than, valued human activities, and may imply that society would face substantial moral choices, societal risks and ethical problems,[163][164] such as whether and how such emulations are built, sent through space, and used, compared with potentially competing types of artificial or semi-artificial intelligence that are more synthetic, less human, or less sentient.[additional citation(s) needed] An alternative or additive approach to scanning is reverse engineering of the brain.[165][166] A subcategory of artificial intelligence is embodied AI,[167][168] some of which takes the form of mobile robotic systems, each consisting of one or more robots able to learn in the physical world. Additionally, biological computers, even if both artificial and highly intelligent, are typically distinguishable from synthetic, predominantly silicon-based, computers. The two technologies could, however, be combined and used for the design of either. Moreover, many tasks may be poorly carried out by AI even if it uses algorithms that are transparent, understood, bias-free, apparently effective and goal-aligned, and has training data sets that are sufficiently large and cleansed.
This may occur, for instance, when the underlying data, available metrics, values or training methods are incorrect, flawed or used inappropriately. Computer-aided is a phrase used to describe human activities that use computing as a tool within more comprehensive activities and systems, such as using AI for narrow tasks or using it without substantially relying on its results (see also: human-in-the-loop).[citation needed] One study described the biological component as a limitation of AI, stating that "as long as the biological system cannot be understood, formalized, and imitated, we will not be able to develop technologies that can mimic it" and that, even if it were understood, this does not necessarily mean there will be "a technological solution to imitate natural intelligence".[169] Technologies that integrate biology and AI include biorobotics. Artificial intelligence is used in astronomy to analyze increasing amounts of available data[170][171] and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights", for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy.[172] It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance,[173] and more autonomous operation.[174][175][176][171] In the search for extraterrestrial intelligence (SETI), machine learning has been used in attempts to identify artificially generated electromagnetic waves in available data[177][178] – such as real-time observations[179] – and other technosignatures, e.g. via anomaly detection.[180] In ufology, the SkyCAM-5 project headed by Prof. Hakan Kayal[181] and the Galileo Project headed by Avi Loeb use machine learning to attempt to detect and classify types of UFOs.[182][183][184][185][186] The Galileo Project also seeks to detect two further types of potential extraterrestrial technological signatures with the use of AI: 'Oumuamua-like interstellar objects, and non-manmade artificial satellites.[187][188] Machine learning can also be used to produce datasets of spectral signatures of molecules that may be involved in the atmospheric production or consumption of particular chemicals – such as phosphine possibly detected on Venus – which could prevent misassignments and, if accuracy is improved, be used in future detections and identifications of molecules on other planets.[189] In April 2024, the Scientific Advice Mechanism to the European Commission published advice[190] including a comprehensive evidence review[191] of both the benefits of and the challenges posed by artificial intelligence in scientific research. Machine learning can help to restore and attribute ancient texts.[192] It can help to index texts, for example to enable better and easier searching and classification of fragments.[193] Artificial intelligence can also be used to investigate genomes to uncover genetic history, such as interbreeding between archaic and modern humans, through which, for example, the past existence of a ghost population that was neither Neanderthal nor Denisovan was inferred.[194] It can also be used for "non-invasive and non-destructive access to internal structures of archaeological remains".[195] A deep learning system was reported to learn intuitive physics from visual data (of virtual 3D environments) based on an unpublished approach inspired by studies of visual cognition in infants.[196][197] Other researchers have developed a machine learning algorithm that could discover sets of basic variables of various physical systems and predict the systems' future dynamics from video recordings of their behavior.[198][199] In the future, it may be possible to use such methods to automate the discovery of physical laws of complex systems.[198] AI could be used for materials optimization and discovery, such as the discovery of stable materials and the prediction of their crystal structure.[200][201][202] In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. Data on the newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[203][204][205] Machine learning is used in diverse types of reverse engineering.
For example, machine learning has been used to reverse engineer a composite material part, enabling unauthorized production of high-quality parts,[206] and for quickly understanding the behavior of malware.[207][208][209] It can be used to reverse engineer artificial intelligence models.[210] It can also design components by engaging in a type of reverse engineering of not-yet-existent virtual components, such as inverse molecular design for particular desired functionality[211] or protein design for prespecified functional sites.[143][144] Biological network reverse engineering could model interactions in a human-understandable way, e.g. based on time series data of gene expression levels.[212] AI is a mainstay of law-related professions. Algorithms and machine learning do some tasks previously done by entry-level lawyers.[213] While its use is common, it is not expected to replace most work done by lawyers in the near future.[214] The electronic discovery industry uses machine learning to reduce manual searching.[215] Law enforcement has begun using facial recognition systems (FRS) to identify suspects from visual data. FRS results have proven more accurate than eyewitness identifications. Furthermore, FRS has shown a much better ability than human participants to identify individuals when video clarity and visibility are low.[216] COMPAS is a commercial system used by U.S. courts to assess the likelihood of recidivism.[217] One concern relates to algorithmic bias: AI programs may become biased after processing data that exhibits bias.[218] ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than that of white defendants.[217] In 2019, the city of Hangzhou, China established a pilot program for an artificial intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims.[219]: 124 Parties appear before the court via videoconference and AI evaluates the evidence presented and applies relevant legal standards.[219]: 124 Another application of AI is in human resources. AI can screen resumes and rank candidates based on their qualifications, predict candidate success in given roles, and automate repetitive communication tasks via chatbots.[citation needed] AI has simplified the recruiting/job search process for both recruiters and job seekers. According to Raj Mukherjee from Indeed, 65% of job searchers search again within 91 days after being hired. An AI-powered engine streamlines the complexity of job hunting by assessing information on job skills, salaries, and user tendencies, matching job seekers to the most relevant positions. Machine intelligence calculates appropriate wages and highlights resume information for recruiters using NLP, which extracts relevant words and phrases from text. Another application is an AI resume builder that compiles a CV in 5 minutes.[citation needed] Chatbots assist website visitors and refine workflows. AI underlies avatars (automated online assistants) on web pages.[220] It can reduce operation and training costs.[220] Pypestream automated customer service for its mobile application to streamline communication with customers.[221] A Google app analyzes language and converts speech into text.
The platform can identify angry customers through their language and respond appropriately.[222] Amazon uses a chatbot for customer service that can perform tasks like checking the status of an order, cancelling orders, offering refunds and connecting the customer with a human representative.[223] Generative AI (GenAI), such as ChatGPT, is increasingly used in business to automate tasks and enhance decision-making.[224] In the hospitality industry, AI is used to reduce repetitive tasks, analyze trends, interact with guests, and predict customer needs.[225] AI hotel services come in the form of chatbots,[226] applications, virtual voice assistants and service robots. AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision. Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video for recognizing scenes, objects or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech-to-text conversion for archival or other purposes, and the detection of logos, products or celebrity faces for ad placement. Deepfakes can be used for comedic purposes but are better known for fake news and hoaxes. Deepfakes can portray individuals in harmful or compromising situations, causing significant reputational damage and emotional distress, especially when the content is defamatory or violates personal ethics.
While defamation and false light laws offer some recourse, their focus on false statements rather than fabricated images or videos often leaves victims with limited legal protection and a challenging burden of proof.[240] In January 2016,[241] the Horizon 2020 program financed the InVID Project[242][243] to help journalists and researchers detect fake documents, made available as browser plugins.[244][245] In June 2016, the visual computing groups of the Technical University of Munich and Stanford University developed Face2Face,[246] a program that animates photographs of faces, mimicking the facial expressions of another person. The technology has been demonstrated animating the faces of people including Barack Obama and Vladimir Putin. Other methods have been demonstrated based on deep neural networks, from which the name deepfake was taken. In September 2018, U.S. Senator Mark Warner proposed to penalize social media companies that allow sharing of deepfake documents on their platforms.[247] In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames.[248] DARPA gave $68 million to work on deepfake detection.[248] Audio deepfakes[249][250] and AI software capable of detecting deepfakes and cloning human voices have been developed.[251][252] Respeecher is a program that enables one person to speak with the voice of another. AI algorithms have been used to detect deepfake videos.[253][254] Artificial intelligence is also starting to be used in video production, with tools and software being developed that use generative AI to create new video or alter existing video.
Some of the major tools currently being used in these processes are DALL-E, Midjourney, and Runway.[255] Waymark Studios used the tools offered by both DALL-E and Midjourney to create a fully AI-generated film called The Frost in the summer of 2023.[255] Waymark Studios is experimenting with using these AI tools to generate advertisements and commercials for companies in mere seconds.[255] Yves Bergquist, a director of the AI & Neuroscience in Media Project at USC's Entertainment Technology Center, says post-production crews in Hollywood are already using generative AI, and predicts that in the future more companies will embrace this new technology.[256] AI has been used to compose music of various genres. David Cope created an AI called Emily Howell that became well known in the field of algorithmic computer music.[257] The algorithm behind Emily Howell is registered as a US patent.[258] In 2012, the AI Iamus created the first complete classical album.[259] AIVA (Artificial Intelligence Virtual Artist) composes symphonic music, mainly classical music for film scores.[260] It achieved a world first by becoming the first virtual composer to be recognized by a musical professional association.[261] Melomics creates computer-generated music for stress and pain relief.[262] At Sony CSL Research Laboratory, the Flow Machines software creates pop songs by learning music styles from a huge database of songs. It can compose in multiple styles. The Watson Beat uses reinforcement learning and deep belief networks to compose music from a simple seed input melody and a selected style. The software was open-sourced[263] and musicians such as Taryn Southern[264] collaborated with the project to create music. South Korean singer Hayeon's debut song, "Eyes on You", was composed using AI supervised by real composers, including NUVO.[265] Narrative Science sells computer-generated news and reports. It summarizes sporting events based on statistical data from the game.
It also creates financial reports and real estate analyses.[266] Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football.[267] Yseop uses AI to turn structured data into natural language comments and recommendations. Yseop writes financial reports, executive summaries, personalized sales or marketing documents and more in multiple languages, including English, Spanish, French, and German.[268] TALE-SPIN made up stories similar to the fables of Aesop. The program started with a set of characters who wanted to achieve certain goals, and the story narrated their attempts to satisfy these goals.[citation needed] Mark Riedl and Vadim Bulitko asserted that the essence of storytelling is experience management, or "how to balance the need for a coherent story progression with user agency, which is often at odds".[269] While AI storytelling focuses on story generation (character and plot), story communication has also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as Little Red Riding Hood.[270] In 2016, a Japanese AI co-wrote a short story and almost won a literary prize.[271] South Korean company Hanteo Global uses a journalism bot to write articles.[272] Literary authors are also exploring uses of AI. An example is David Jhave Johnston's work ReRites (2017–2019), in which the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications. In 2010, artificial intelligence used baseball statistics to automatically generate news articles.
This was launched by The Big Ten Network using software from Narrative Science.[273] Unable to cover every Minor League Baseball game with a large team, the Associated Press collaborated with Automated Insights in 2016 to create game recaps that were automated by artificial intelligence.[274] UOL in Brazil expanded the use of AI in its writing: rather than just generating news stories, it programmed the AI to include words commonly searched on Google.[274] El País, a Spanish news site that covers many topics including sports, allows users to comment on each news article. It uses the Perspective API to moderate these comments, and if the software deems a comment to contain toxic language, the commenter must modify it in order to publish it.[274] A local Dutch media group used AI to create automatic coverage of amateur soccer, set to cover 60,000 games in just a single season. NDC partnered with United Robots to create this algorithm and provide coverage that would never have been possible without an extremely large team.[274] Lede AI was used in 2023 to take scores from high school football games and generate stories automatically for the local newspaper. This was met with significant criticism from readers for the very robotic diction that was published. With some games described as a "close encounter of the athletic kind," readers were not pleased and let the publishing company, Gannett, know on social media. Gannett has since halted its use of Lede AI until it comes up with a solution for what it calls an experiment.[275] Millions of Wikipedia's articles have been edited by bots,[279] which, however, are usually not artificial intelligence software. Many AI platforms use Wikipedia data,[280] mainly for training machine learning applications.
There is research and development of various artificial intelligence applications for Wikipedia, such as identifying outdated sentences,[281] detecting covert vandalism[282] or recommending articles and tasks to new editors. Machine translation (see above) has also been used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages.[283][284] In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding. Some researchers consider NPC AI in games to be a "solved problem" for most production tasks.[who?] Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[285][286] AI is also used in Alien: Isolation (2014) as a way to control the actions the Alien will perform next.[287] Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from AI research.[288][which?] AI has been used to produce visual art. The first AI art program, called AARON, was developed by Harold Cohen in 1968[289] with the goal of being able to code the act of drawing. It started by creating simple black and white drawings, and later moved to painting, using special brushes and dyes that were chosen by the program itself without mediation from Cohen.[290] AI platforms such as DALL-E,[291] Stable Diffusion,[291] Imagen,[292] and Midjourney[293] have been used for generating visual images from inputs such as text or other images.[294] Some AI tools allow users to input images and output changed versions of that image, such as to display an object or product in different environments. AI image models can also attempt to replicate the specific styles of artists, and can add visual complexity to rough sketches.
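The NPC pathfinding mentioned above is commonly implemented with graph search such as A*. A minimal, illustrative sketch on a 4-connected grid follows; the map and unit step costs are hypothetical, and production game engines add terrain costs, hierarchical maps, path smoothing, and so on:

```python
# Illustrative A* pathfinding on a 4-connected grid, the kind of search
# commonly used for NPC movement. Cells equal to 1 are walls.
import heapq

def astar(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable."""
    def h(a):
        # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(a[0] - goal[0]) + abs(a[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f, g, node, parent)
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:      # already expanded via a cheaper route
            continue
        came_from[node] = parent
        if node == goal:           # reconstruct path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

# Hypothetical map: a wall row forces a detour through the right column.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates the remaining distance, A* here returns a provably shortest route around the wall.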
Since their design in 2014, generative adversarial networks (GANs) have been used by AI artists. GAN programs generate images through machine learning frameworks without the need for human operators.[289] Examples of GAN programs that generate art include Artbreeder and DeepDream. In addition to the creation of original art, research methods that utilize AI have been developed to quantitatively analyze digital art collections. Although the main goal of the large-scale digitization of artwork in the past few decades was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.[295] Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art.[296] While distant viewing includes the analysis of large collections, close reading involves one piece of artwork. In animation, AI has been in use since the early 2000s, most notably in a system designed by Pixar called "Genesis".[297] It was designed to learn algorithms and create 3D models for its characters and props. Notable movies that used this technology included Up and The Good Dinosaur.[298] AI has been used less ceremoniously in recent years. In 2023, it was revealed that Netflix Japan had used AI to generate background images for an upcoming show, which was met with backlash online.[299] In recent years, motion capture has become an easily accessible form of AI animation. For example, Move AI is a program built to capture any human movement and reanimate it in its animation program using learning AI.[300] Power electronics converters are used in renewable energy, energy storage, electric vehicles and high-voltage direct current transmission.
These converters are failure-prone, which can interrupt service, require costly maintenance, or have catastrophic consequences in mission-critical applications.[citation needed] AI can guide the design process for reliable power electronics converters by calculating exact design parameters that ensure the required lifetime.[301] The U.S. Department of Energy underscores AI's pivotal role in realizing national climate goals. With AI, the ambitious target of achieving net-zero greenhouse gas emissions across the economy becomes feasible. AI also helps make room for wind and solar on the grid by avoiding congestion and increasing grid reliability.[302] Machine learning can be used for energy consumption prediction and scheduling, e.g. to help with renewable energy intermittency management (see also: smart grid and climate change mitigation in the power grid).[303][304][305][306][136] Many telecommunications companies make use of heuristic search to manage their workforces. For example, BT Group deployed heuristic search[307] in an application that schedules 20,000 engineers. Machine learning is also used for speech recognition (SR), including in voice-controlled devices, and SR-related transcription, including of videos.[308][309] Artificial intelligence has been combined with digital spectrometry by IdeaCuria Inc.,[310][311] enabling applications such as at-home water quality monitoring. In the 1990s, early artificial intelligence tools controlled Tamagotchis and Giga Pets, the Internet, and the first widely released robot, Furby. Aibo was a domestic robot in the form of a robotic dog with intelligent features and autonomy.
Mattel created an assortment of AI-enabled toys that "understand" conversations, give intelligent responses, and learn.[312] Oil and gas companies have used artificial intelligence tools to automate functions, foresee equipment issues, and increase oil and gas output.[313][314] Industrial sensors and AI tools work together to watch manufacturing processes and equipment in real time; detection programs find anomalous patterns that may indicate quality issues or equipment problems. Supply chain management improves with better demand forecasting and inventory management.[315] AI in transport is expected to provide safe, efficient, and reliable transportation while minimizing the impact on the environment and communities. The major development challenge is the complexity of transportation systems, which involve independent components and parties with potentially conflicting objectives.[316] AI-based fuzzy logic controllers operate gearboxes. For example, the 2006 Audi TT, VW Touareg[citation needed] and VW Caravelle feature the DSP transmission, and a number of Škoda variants (Škoda Fabia) include a fuzzy logic-based controller. Cars have AI-based driver-assist features such as self-parking and adaptive cruise control. There are also prototypes of autonomous automotive public transport vehicles such as electric mini-buses,[317][318][319][320] as well as autonomous rail transport in operation.[321][322][323] There are also prototypes of autonomous delivery vehicles, sometimes including delivery robots.[324][325][326][327][328][329][330] Transportation's complexity means that in most cases training an AI in a real-world driving environment is impractical; simulator-based testing can reduce the risks of on-road training.[331] AI underpins self-driving vehicles. Companies involved with AI include Tesla, Waymo, and General Motors. AI-based systems control functions such as braking, lane changing, collision prevention, navigation and mapping.[332] Autonomous trucks are in the testing phase.
The UK government passed legislation to begin testing of autonomous truck platoons in 2018,[333] in which a group of autonomous trucks follow closely behind each other. German corporation Daimler is testing its Freightliner Inspiration.[334] Autonomous vehicles require accurate maps to be able to navigate between destinations.[335] Some autonomous vehicles do not allow human drivers (they have no steering wheels or pedals).[336] AI has been used to optimize traffic management, which reduces wait times, energy use, and emissions by as much as 25 percent.[337] Smart traffic lights have been developed at Carnegie Mellon since 2009. Professor Stephen Smith has since started a company, Surtrac, that has installed smart traffic control systems in 22 cities. Installation costs about $20,000 per intersection; drive time has been reduced by 25% and traffic jam waiting time by 40% at the intersections where it has been installed.[338] The Royal Australian Air Force (RAAF) Air Operations Division (AOD) uses AI for expert systems. AIs operate as surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.[339] Aircraft simulators use AI for training aviators. Flight conditions can be simulated that allow pilots to make mistakes without risking themselves or expensive aircraft. Air combat can also be simulated. AI can also be used to operate planes analogously to its control of ground vehicles. Autonomous drones can fly independently or in swarms.[340] The AOD uses the Interactive Fault Diagnosis and Isolation System (IFDIS), a rule-based expert system using information from TF-30 documents and expert advice from mechanics that work on the TF-30. The system was designed to be used in the development of the TF-30 for the F-111C, and it replaced specialized workers.
The system allowed regular workers to communicate with it and avoid mistakes, miscalculations, or having to speak to one of the specialized workers. Speech recognition allows traffic controllers to give verbal directions to drones. Artificial-intelligence-supported design of aircraft,[341] or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. The program allows designers to focus more on the design itself and less on the design process, and also lets users focus less on the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective. In 2003, a Dryden Flight Research Center project created software that could enable a damaged aircraft to continue flight until a safe landing can be achieved.[342] The software compensated for damaged components by relying on the remaining undamaged components.[343] The 2016 Intelligent Autopilot System combined apprenticeship learning and behavioral cloning, whereby the autopilot observed low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions.[344] Neural networks are used by situational awareness systems in ships and boats.[345] There are also autonomous boats. The development of self-driving cars is progressing. Machine learning systems use sensor data to help navigate difficult areas.[346] Advanced driver assistance systems provide features such as keeping the car in its lane and preventing accidents.[347] City traffic management systems change traffic lights based on current traffic conditions. Maritime shipping uses AI to find the best routes, accounting for weather and fuel usage, and automated navigation systems help operate the ships.
AI also improves the loading and placement of containers at ports.[citation needed] Autonomous ships that monitor the ocean, AI-driven satellite data analysis, passive acoustics,[348] remote sensing and other applications of environmental monitoring make use of machine learning.[349][350][351][176] For example, "Global Plastic Watch" is an AI-based satellite monitoring platform for analyzing and tracking plastic waste sites, to help prevent plastic pollution – primarily ocean pollution – by helping identify who mismanages plastic waste and where it is dumped into oceans.[352][353] Machine learning can be used to spot early-warning signs of disasters and environmental issues, possibly including natural pandemics,[354][355] earthquakes,[356][357][358] landslides,[359] heavy rainfall,[360] long-term water supply vulnerability,[361] tipping points of ecosystem collapse,[362] cyanobacterial bloom outbreaks,[363] and droughts.[364][365][366] AI early warning systems can warn of natural disasters such as floods, wildfires, and earthquakes.[367] Climate change monitoring uses machine learning to spot patterns in temperature, rainfall, and other environmental indicators. Wildlife conservation benefits from automated tools that identify animals in camera trap pictures and sound recordings. Ocean monitoring tools track key indicators of the health of ocean ecosystems.[368] AI can be used for real-time code completion, chat, and automated test generation. These tools are typically integrated with editors and IDEs as plugins. They differ in functionality, quality, speed, and approach to privacy.[369] Code suggestions can be incorrect and should be carefully reviewed by software developers before being accepted. GitHub Copilot is an artificial intelligence model developed by GitHub and OpenAI that is able to autocomplete code in multiple programming languages.[370] The price for individuals is $10/month or $100/year, with a one-month free trial.
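At its core, the code completion described above is next-token prediction: given what the developer has typed, suggest the most likely continuation. A deliberately tiny sketch of that idea using bigram counts follows; the "corpus" is hypothetical, and real assistants use large neural language models rather than lookup tables:

```python
# Toy sketch of next-token code suggestion from bigram counts.
# Real code-completion assistants use large neural language models.
from collections import Counter, defaultdict

def build_model(corpus_tokens):
    """Count token bigrams: model[prev][next] = frequency in the corpus."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_token):
    """Return the most frequent continuation of prev_token, or None."""
    continuations = model.get(prev_token)
    if not continuations:
        return None
    return continuations.most_common(1)[0][0]

# Tiny hypothetical corpus of tokenized code
tokens = "for i in range ( n ) : print ( i ) for j in range ( m )".split()
model = build_model(tokens)
```

For instance, after seeing `range` twice in the corpus, the model suggests `(` as the continuation; a neural model generalizes the same conditional-probability idea to long contexts and unseen code.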
Tabnine was created by Jacob Jackson and was originally owned by the Tabnine company. In late 2019, Tabnine was acquired by Codota.[371] The Tabnine tool is available as a plugin for the most popular IDEs. It offers multiple pricing options, including a limited free "starter" version.[372] CodiumAI, by CodiumAI, a small startup in Tel Aviv, offers automated test creation. It currently supports Python, JS, and TS.[373] Ghostwriter by Replit offers code completion and chat.[374] They have multiple pricing plans, including a free one and a "Hacker" plan for $7/month. CodeWhisperer by Amazon collects individual users' content, including files open in the IDE. They claim to focus on security both during transmission and when storing.[375] The individual plan is free; the professional plan is $19/user/month. Other tools: SourceGraph Cody, CodeComplete, FauxPilot, Tabby[369] AI can be used to create other AIs. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and COCO. NASNet's performance exceeded all previously published performance on ImageNet.[376] Machine learning has been used for noise cancelling in quantum technology,[377] including quantum sensors.[378] Moreover, there is substantial research and development on using quantum computers with machine learning algorithms. For example, there is a prototype photonic quantum memristive device for neuromorphic (quantum) computers (NC)/artificial neural networks, and NC-using quantum materials with a variety of potential neuromorphic computing-related applications,[379][380] and quantum machine learning is a field with a variety of applications under development. AI could be used for quantum simulators, which may have applications in solving physics and chemistry[381][382] problems, as well as for quantum annealers for training of neural networks for AI applications.[383] There may also be some usefulness in chemistry, e.g. for drug discovery, and in materials science, e.g.
for materials optimization/discovery (with possible relevance to quantum materials manufacturing[201][202]).[384][385][386][better source needed] AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered AI. All of the following were originally developed in AI laboratories:[387] An optical character reader is used in the extraction of data from business documents like invoices and receipts. It can also be used on business contract documents, e.g. employment agreements, to extract critical data like employment terms, delivery terms, termination clauses, etc.[388] Artificial intelligence in architecture describes the use of artificial intelligence in automation, design and planning in the architectural process, or in assisting human skills in the field of architecture.[389] Artificial intelligence is thought to potentially lead to major changes in architecture.[390][391][392] AI in architecture has created a way for architects to create things beyond human understanding. AI implementation of machine-learning text-to-render technologies, like DALL-E and Stable Diffusion, gives architects powerful tools for visualizing complex designs.[393] AI allows designers to demonstrate their creativity and even invent new ideas while designing. In the future, AI will not replace architects; instead, it will improve the speed of translating ideas into sketches.[393] The use of AI raises some important ethical issues, such as privacy, bias, and accountability. When algorithms are trained on biased data, they can end up reinforcing existing inequalities; for example, facial recognition technology often performs poorly or fails for certain demographics.[394] In addition, AI's use in surveillance raises concerns about personal rights and data privacy. The integration of these technologies raises some issues that need to be examined.
First, it is important to make sure that the data used is accurate and fair. Issues of bias also need to be addressed, to ensure that AI systems treat everyone equally. Creating rules and guidelines for how AI is used is another important step. Data privacy and security are essential to maintaining trust. While adoption of these technologies can make work more efficient, it may pose challenges for the human workforce.[citation needed] Looking ahead, one aim is to develop AI that can explain its decisions clearly, so that people can understand how it works. There is also a goal to create more advanced AI that can handle a wider range of problems. Researchers are starting to emphasize the importance of working together across different fields to develop better AI technologies that provide fair benefits for everyone.[395]
https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence
The following tables compare notable software frameworks, libraries, and computer programs for deep learning applications. [further explanation needed]
https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals.[1][2] Compressed sensing has applications in, for example, magnetic resonance imaging (MRI), where the incoherence condition is typically satisfied.[3] A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal.
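The sampling theorem's reconstruction step can be sketched in a few lines. This is a minimal illustration of Whittaker–Shannon sinc interpolation over a finite window of samples; the signal frequency, sampling rate, window size, and evaluation point are all illustrative choices, not from the article:

```python
import numpy as np

# A band-limited signal sampled above twice its highest frequency can be
# rebuilt at any time instant by sinc interpolation.
f_max = 3.0              # highest frequency in the signal (Hz)
fs = 10.0                # sampling rate (Hz), above the Nyquist rate 2*f_max
T = 1.0 / fs

n = np.arange(-50, 51)                        # finite window of sample indices
samples = np.sin(2 * np.pi * f_max * n * T)   # the samples x[n]

def sinc_reconstruct(t, samples, n, T):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t = 0.0137                                    # an off-grid time instant
reconstructed = sinc_reconstruct(t, samples, n, T)
true_value = np.sin(2 * np.pi * f_max * t)
# reconstructed approximates true_value; the small gap is truncation error
# from using a finite window of samples rather than the infinite sum.
```

With an infinite window the reconstruction would be exact; truncating the sum leaves a small residual error.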
Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires.[4][5] This idea is the basis of compressed sensing. Compressed sensing relies on L^1 techniques, which several other scientific fields have used historically.[6] In statistics, the least squares method was complemented by the L^1-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the L^1-norm was used in computational statistics. In statistical theory, the L^1-norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The L^1-norm was also used in signal processing, for example in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the Nyquist–Shannon criterion.[7] It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996[8] and basis pursuit in 1998.[9] At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high-frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.[10] An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions.
Consider such an equation system y = Dx, where we want to find a solution for x. In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution. Compressed sensing takes advantage of the redundancy in many interesting signals: they are not pure noise. In particular, many signals are sparse, that is, they contain many coefficients close to or equal to zero, when represented in some domain.[11] This is the same insight used in many forms of lossy compression. Compressed sensing typically starts with taking a weighted linear combination of samples, also called compressive measurements, in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation, since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations. The least-squares solution to such problems is to minimize the L^2 norm, that is, minimize the amount of energy in the system.
This is usually simple mathematically (involving only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, this leads to poor results for many practical applications, for which the unknown coefficients have nonzero energy. To enforce the sparsity constraint when solving for the underdetermined system of linear equations, one can minimize the number of nonzero components of the solution. The function counting the number of nonzero components of a vector was called the L^0 "norm" by David Donoho.[note 1] Candès et al. proved that for many problems it is probable that the L^1 norm is equivalent to the L^0 norm, in a technical sense: this equivalence result allows one to solve the L^1 problem, which is easier than the L^0 problem. Finding the candidate with the smallest L^1 norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist.[13] When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program. Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). For signals, especially, total variation refers to the integral of the absolute gradient of the signal. In signal and image reconstruction, it is applied as total variation regularization, where the underlying principle is that signals with excessive detail have high total variation, and that removing this detail while retaining important information such as edges reduces the total variation of the signal and makes the reconstruction closer to the original signal.
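The contrast between the minimum-energy pseudo-inverse solution and the L^1 (basis pursuit) solution can be sketched on a toy underdetermined system. The problem sizes and random measurement ensemble below are illustrative choices, not from the article; the linear-program formulation is the standard split of x into positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n))           # measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)  # a k-sparse signal
y = A @ x_true                            # compressive measurements

# Minimum-energy (least L^2-norm) answer via the pseudo-inverse:
# mathematically simple, but spreads energy over all coefficients.
x_l2 = np.linalg.pinv(A) @ y

# Basis pursuit (min ||x||_1  s.t.  A x = y) as a linear program via the
# split x = u - v with u, v >= 0:  min sum(u + v)  s.t.  [A, -A][u; v] = y.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]
# x_l1 recovers the sparse x_true (up to solver tolerance); x_l2 does not.
```

With far more measurements than nonzeros, the L^1 program returns the true sparse vector while the pseudo-inverse solution is dense and far from it.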
For the purpose of signal and image reconstruction, ℓ1 minimization models are used. Other approaches also include least-squares, as discussed earlier in this article. These methods are extremely slow and return a not-so-perfect reconstruction of the signal. The current CS regularization models attempt to address this problem by incorporating sparsity priors of the original image, one of which is the total variation (TV). Conventional TV approaches are designed to give piecewise constant solutions. Some of these include (as discussed below) constrained ℓ1-minimization, which uses an iterative scheme. This method, though fast, subsequently leads to over-smoothing of edges, resulting in blurred image edges.[14] TV methods with iterative re-weighting have been implemented to reduce the influence of large gradient magnitudes in the images. This has been used in computed tomography (CT) reconstruction as a method known as edge-preserving total variation. However, as gradient magnitudes are used for estimation of relative penalty weights between the data fidelity and regularization terms, this method is not robust to noise and artifacts, is not accurate enough for CS image/signal reconstruction, and therefore fails to preserve smaller structures. Recent progress on this problem involves using an iteratively directional TV refinement for CS reconstruction.[15] This method has two stages: the first stage estimates and refines the initial orientation field, which is defined as a noisy point-wise initial estimate, through edge detection, of the given image. In the second stage, the CS reconstruction model is presented by utilizing a directional TV regularizer. More details about these TV-based approaches – iteratively reweighted ℓ1 minimization, edge-preserving TV, and the iterative model using a directional orientation field and TV – are provided below.
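The principle behind TV regularization can be illustrated with a minimal 1-D denoising sketch (this is not the reconstruction scheme of the cited papers; the regularization weight, smoothing constant, step size, and test signal are all illustrative choices): minimize 0.5*||x - y||^2 + lam * TV(x) by plain gradient descent on a smoothed TV term.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise constant
noisy = clean + 0.1 * rng.standard_normal(100)        # noisy observation

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.05, iters=1000):
    """Gradient descent on 0.5*||x-y||^2 + lam*sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |d|
        # Gradient of the TV term with respect to each x[j].
        grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
        x -= step * ((x - y) + lam * grad_tv)
    return x

denoised = tv_denoise(noisy)
# The jump at index 50 is largely preserved while in-plateau noise is
# smoothed away, so the error against the clean signal drops.
```

The small constant eps makes the absolute value differentiable at zero; a non-smooth solver (e.g. proximal methods) would be used in practice.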
In the CS reconstruction models using constrained ℓ1 minimization,[16] larger coefficients are penalized heavily in the ℓ1 norm. It was proposed to have a weighted formulation of ℓ1 minimization designed to more democratically penalize nonzero coefficients. An iterative algorithm is used for constructing the appropriate weights.[17] Each iteration requires solving one ℓ1 minimization problem by finding the local minimum of a concave penalty function that more closely resembles the ℓ0 norm. An additional parameter, usually to avoid any sharp transitions in the penalty function curve, is introduced into the iterative equation to ensure stability and so that a zero estimate in one iteration does not necessarily lead to a zero estimate in the next iteration. The method essentially involves using the current solution for computing the weights to be used in the next iteration. Early iterations may find inaccurate sample estimates; however, this method down-weights these at a later stage to give more weight to the smaller nonzero signal estimates. One of the disadvantages is the need for defining a valid starting point, as a global minimum might not be obtained every time due to the concavity of the function. Another disadvantage is that this method tends to uniformly penalize the image gradient irrespective of the underlying image structures. This causes over-smoothing of edges, especially those of low-contrast regions, subsequently leading to loss of low-contrast information. The advantages of this method include: reduction of the sampling rate for sparse signals; reconstruction of the image while being robust to the removal of noise and other artifacts; and use of very few iterations. This can also help in recovering images with sparse gradients.
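The reweighting loop described above can be sketched as a sequence of weighted basis-pursuit linear programs, with weights taken from the previous solution so that large coefficients are penalized less "democratically". The problem sizes, the stabilizing constant eps, and the iteration count are illustrative choices, not the settings of the cited papers:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, y, w):
    """min sum_i w_i*|x_i|  s.t.  A x = y, via the LP split x = u - v."""
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]),
                  b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, y, iters=4, eps=0.1):
    """Iteratively reweighted l1: weights w_i = 1/(|x_i| + eps)."""
    w = np.ones(A.shape[1])               # first pass = plain basis pursuit
    for _ in range(iters):
        x = weighted_bp(A, y, w)
        w = 1.0 / (np.abs(x) + eps)       # eps keeps zero estimates alive
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[2, 11, 25]] = [1.5, -2.0, 0.8]    # a 3-sparse ground truth
x_hat = reweighted_l1(A, A @ x_true)      # recovers x_true
```

The constant eps plays the role of the "additional parameter" in the text: it prevents a zero estimate in one iteration from forcing an infinite weight (and hence a permanent zero) in the next.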
In this scheme, P1 refers to the first step of the iterative reconstruction process, of the projection matrix P of the fan-beam geometry, which is constrained by the data fidelity term. This may contain noise and artifacts, as no regularization is performed. The minimization of P1 is solved through the conjugate gradient least squares method. P2 refers to the second step of the iterative reconstruction process, which utilizes the edge-preserving total variation regularization term to remove noise and artifacts, and thus improve the quality of the reconstructed image/signal. The minimization of P2 is done through a simple gradient descent method. Convergence is determined by testing, after each iteration, for image positivity, enforcing f^(k−1) = 0 wherever f^(k−1) < 0 (note that f refers to the different x-ray linear attenuation coefficients at different voxels of the patient image). This is an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low-dose CT through low current levels (milliampere). In order to reduce the imaging dose, one of the approaches used is to reduce the number of x-ray projections acquired by the scanner detectors. However, this insufficient projection data which is used to reconstruct the CT image can cause streaking artifacts. Furthermore, using these insufficient projections in standard TV algorithms ends up making the problem under-determined, thus leading to infinitely many possible solutions. In this method, an additional penalty weighted function is assigned to the original TV norm. This allows for easier detection of sharp discontinuities in intensity in the images and thereby adapts the weight to store the recovered edge information during the process of signal/image reconstruction.
The parameter σ controls the amount of smoothing applied to the pixels at the edges to differentiate them from the non-edge pixels. The value of σ is changed adaptively based on the values of the histogram of the gradient magnitude, so that a certain percentage of pixels have gradient values larger than σ. The edge-preserving total variation term thus becomes sparser, and this speeds up the implementation. A two-step iteration process known as the forward–backward splitting algorithm is used.[18] The optimization problem is split into two sub-problems, which are then solved with the conjugate gradient least squares method[19] and the simple gradient descent method, respectively. The method is stopped when the desired convergence has been achieved or if the maximum number of iterations is reached.[14] Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge-preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm.[14] Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points. Thus, this method leads to low relative error and higher correlation as compared to the TV method. It also effectively suppresses and removes any form of image noise and image artifacts such as streaking. To prevent over-smoothing of edges and texture details, and to obtain a reconstructed CS image which is accurate and robust to noise and artifacts, this method is used. First, an initial estimate of the noisy point-wise orientation field of the image I, denoted d̂, is obtained. This noisy orientation field is defined so that it can be refined at a later stage to reduce the noise influences in orientation field estimation.
A coarse orientation field estimation is then introduced based on the structure tensor, which is formulated as:[20]

J_ρ(∇I_σ) = G_ρ ∗ (∇I_σ ⊗ ∇I_σ) = [ J11  J12 ; J12  J22 ]

Here, J_ρ refers to the structure tensor related to the image pixel point (i,j), having standard deviation ρ. G refers to the Gaussian kernel (0, ρ²) with standard deviation ρ. σ refers to the manually defined parameter for the image I below which the edge detection is insensitive to noise. ∇I_σ refers to the gradient of the image I, and (∇I_σ ⊗ ∇I_σ) refers to the tensor product obtained by using this gradient.[15] The structure tensor obtained is convolved with a Gaussian kernel G to improve the accuracy of the orientation estimate, with σ being set to high values to account for the unknown noise levels. For every pixel (i,j) in the image, the structure tensor J is a symmetric and positive semi-definite matrix. Convolving all the pixels in the image with G gives orthonormal eigenvectors ω and υ of the J matrix. ω points in the direction of the dominant orientation, having the largest contrast, and υ points in the direction of the structure orientation, having the smallest contrast. The coarse initial estimation of the orientation field, d̂, is defined as d̂ = υ. This estimate is accurate at strong edges. However, at weak edges or in regions with noise, its reliability decreases.
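The coarse orientation estimate can be sketched with SciPy's Gaussian filtering and Sobel gradients. The test image and the σ/ρ values are illustrative choices, and the eigenvector is read off via the standard closed form for 2×2 symmetric matrices rather than an explicit eigendecomposition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_orientation(img, sigma=1.0, rho=3.0):
    """Per-pixel structure orientation (least-contrast direction) as an angle."""
    smoothed = gaussian_filter(img, sigma)      # pre-smoothed image I_sigma
    Ix = sobel(smoothed, axis=1)                # horizontal gradient
    Iy = sobel(smoothed, axis=0)                # vertical gradient
    J11 = gaussian_filter(Ix * Ix, rho)         # component-wise Gaussian
    J12 = gaussian_filter(Ix * Iy, rho)         # smoothing of the tensor
    J22 = gaussian_filter(Iy * Iy, rho)
    # Angle of the dominant eigenvector (largest contrast, omega)...
    theta_dominant = 0.5 * np.arctan2(2 * J12, J11 - J22)
    # ...and the structure orientation (smallest contrast, upsilon) is
    # perpendicular to it.
    return theta_dominant + np.pi / 2

# Vertical step edge: the gradient is horizontal, so the structure
# orientation along the edge should be vertical (pi/2).
img = np.zeros((32, 32))
img[:, 16:] = 1.0
theta = structure_orientation(img)
```

On the step-edge test image the returned angle is π/2 everywhere near the edge, matching the expected least-contrast direction.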
To overcome this drawback, a refined orientation model is defined, in which the data term reduces the effect of noise and improves accuracy, while the second penalty term with the L2-norm is a fidelity term which ensures accuracy of the initial coarse estimation. This orientation field is introduced into the directional total variation optimization model for CS reconstruction through the equation:

min_X ‖∇X • d‖_1 + (λ/2) ‖Y − ΦX‖_2^2

X is the objective signal which needs to be recovered, Y is the corresponding measurement vector, d is the iterative refined orientation field, and Φ is the CS measurement matrix. This method undergoes a few iterations, ultimately leading to convergence; d̂ is the approximate estimation of the orientation field of the reconstructed image X^(k−1) from the previous iteration (in order to check for convergence and the subsequent optical performance, the previous iteration is used). For the two vector fields represented by X and d, X • d refers to the multiplication of the respective horizontal and vertical vector elements of X and d, followed by their subsequent addition. These equations are reduced to a series of convex minimization problems, which are then solved with a combination of variable splitting and augmented Lagrangian (FFT-based fast solver with a closed-form solution) methods.[15] The augmented Lagrangian is considered equivalent to the split Bregman iteration, which ensures convergence of this method. The orientation field d is defined as being equal to (d_h, d_v), where d_h and d_v define its horizontal and vertical estimates.
The augmented Lagrangian method for the orientation field, min_X ‖∇X • d‖_1 + (λ/2) ‖Y − ΦX‖_2^2, involves initializing d_h, d_v, H, V and then finding the approximate minimizer of L1 with respect to these variables. The Lagrangian multipliers are then updated, and the iterative process is stopped when convergence is achieved. For the iterative directional total variation refinement model, the augmented Lagrangian method involves initializing X, P, Q, λ_P, λ_Q.[21] Here, H, V, P, Q are newly introduced variables, where H = ∇d_h, V = ∇d_v, P = ∇X, and Q = P • d. λ_H, λ_V, λ_P, λ_Q are the Lagrangian multipliers for H, V, P, Q. For each iteration, the approximate minimizer of L2 with respect to the variables (X, P, Q) is calculated. As in the field refinement model, the Lagrangian multipliers are updated, and the iterative process is stopped when convergence is achieved. In both the orientation field refinement model and the iterative directional total variation refinement model, the multiplier updates at each iteration involve positive constants γ_H, γ_V, γ_P, γ_Q.
Based on peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics, and known ground-truth images for testing performance, it is concluded that iterative directional total variation has better reconstruction performance than the non-iterative methods in preserving edge and texture areas. The orientation field refinement model plays a major role in this improvement in performance, as it increases the number of directionless pixels in the flat areas while enhancing the orientation field consistency in the regions with edges. The field of compressive sensing is related to several topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Its broad scope and generality have enabled several innovative CS-enhanced approaches in signal processing and compression, solution of inverse problems, design of radiating systems, radar and through-the-wall imaging, and antenna characterization.[22] Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography. Conventional CS reconstruction uses sparse signals (usually sampled at a rate less than the Nyquist sampling rate) for reconstruction through constrained ℓ1 minimization. One of the earliest applications of such an approach was in reflection seismology, which used sparse reflected signals from band-limited data for tracking changes between sub-surface layers.[23] When the LASSO model came into prominence in the 1990s as a statistical method for selection of sparse models,[24] this method was further used in computational harmonic analysis for sparse signal representation from over-complete dictionaries. Some of the other applications include incoherent sampling of radar pulses.
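The PSNR figure of merit used in such comparisons is simple to compute: for images with peak value MAX, PSNR = 10·log10(MAX² / MSE), in decibels. The 8-bit test image and noise level below are illustrative choices:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return np.inf                    # identical images
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0, 5, size=ref.shape)   # additive noise, sigma = 5
value = psnr(ref, noisy)   # near 10*log10(255**2 / 25), i.e. about 34 dB
```

Higher PSNR means a reconstruction closer to the ground truth; SSIM complements it by comparing local structure rather than pointwise error.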
The work by Boyd et al.[16] applied the LASSO model (for selection of sparse models) to analog-to-digital converters (the current ones use a sampling rate higher than the Nyquist rate along with the quantized Shannon representation). This would involve a parallel architecture in which the polarity of the analog signal changes at a high rate, followed by digitizing the integral at the end of each time interval to obtain the converted digital signal. Compressed sensing has been used in an experimental mobile phone camera sensor. The approach allows a reduction in image acquisition energy per image by as much as a factor of 15, at the cost of complex decompression algorithms; the computation may require an off-device implementation.[25] Compressed sensing is used in single-pixel cameras from Rice University.[26] Bell Labs employed the technique in a lensless single-pixel camera that takes stills using repeated snapshots of randomly chosen apertures from a grid. Image quality improves with the number of snapshots, and generally requires a small fraction of the data of conventional imaging, while eliminating lens/focus-related aberrations.[27][28] Compressed sensing can be used to improve image reconstruction in holography by increasing the number of voxels one can infer from a single hologram.[29][30][31] It is also used for image retrieval from undersampled measurements in optical[32][33] and millimeter-wave[34] holography. Compressed sensing has been used in facial recognition applications.[35] Compressed sensing has been used[36][37] to shorten magnetic resonance imaging scanning sessions on conventional hardware.[38] Compressed sensing addresses the issue of high scan time by enabling faster acquisition through measuring fewer Fourier coefficients. This produces a high-quality image with relatively lower scan time. Another application (also discussed below) is CT reconstruction with fewer X-ray projections.
Compressed sensing, in this case, removes the high spatial gradient parts – mainly, image noise and artifacts. This holds potential, as one can obtain high-resolution CT images at low radiation doses (through lower current-mA settings).[42] Compressed sensing has shown outstanding results in the application of network tomography to network management. Network delay estimation and network congestion detection can both be modeled as underdetermined systems of linear equations, where the coefficient matrix is the network routing matrix. Moreover, in the Internet, network routing matrices usually satisfy the criterion for using compressed sensing.[43] In 2013, one company announced shortwave-infrared cameras which utilize compressed sensing.[44] These cameras have light sensitivity from 0.9 μm to 1.7 μm, wavelengths invisible to the human eye. In radio astronomy and optical astronomical interferometry, full coverage of the Fourier plane is usually absent and phase information is not obtained in most hardware configurations. In order to obtain aperture synthesis images, various compressed sensing algorithms are employed.[45] The Högbom CLEAN algorithm has been in use since 1974 for the reconstruction of images obtained from radio interferometers, and is similar to the matching pursuit algorithm mentioned above. Compressed sensing combined with a moving aperture has been used to increase the acquisition rate of images in a transmission electron microscope.[46] In scanning mode, compressive sensing combined with random scanning of the electron beam has enabled both faster acquisition and a lower electron dose, which allows for imaging of electron-beam-sensitive materials.[47]
https://en.wikipedia.org/wiki/Compressed_sensing
An echo state network (ESN)[1][2] is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights that are modified during training are those of the synapses that connect the hidden neurons to the output neurons. Thus, the error function is quadratic with respect to the parameter vector, and training reduces to solving a linear system. Alternatively, one may consider a nonparametric Bayesian formulation of the output layer, under which: (i) a prior distribution is imposed over the output weights; and (ii) the output weights are marginalized out in the context of prediction generation, given the training data. This idea was demonstrated in[3] by using Gaussian priors, whereby a Gaussian process model with an ESN-driven kernel function is obtained. Such a solution was shown to outperform ESNs with trainable (finite) sets of weights in several benchmarks. Some publicly available efficient implementations of ESNs are: aureservoir (a C++ library for various kinds of ESNs with Python/NumPy bindings), MATLAB, ReservoirComputing.jl (a Julia-based implementation of various types) and pyESN (for simple ESNs in Python). The echo state network (ESN)[4] belongs to the recurrent neural network (RNN) family and shares its architecture and supervised learning principle. Unlike feedforward neural networks, recurrent neural networks are dynamic systems rather than functions. For the training of RNNs, a number of learning algorithms are available: backpropagation through time and real-time recurrent learning.
Convergence is not guaranteed due to instability and bifurcation phenomena.[4] The main approach of the ESN is, firstly, to operate a random, large, fixed, recurrent neural network with the input signal, which induces a nonlinear response signal in each neuron within this "reservoir" network, and secondly, to connect a desired output signal by a trainable linear combination of all these response signals.[2] Another feature of the ESN is its autonomous operation in prediction: if it is trained with an input that is a backshifted version of the output, then it can be used for signal generation/prediction by using the previous output as input.[4][5] The main idea of ESNs is tied to liquid state machines, which were independently and simultaneously developed with ESNs by Wolfgang Maass.[6] Liquid state machines, ESNs, and the more recently researched backpropagation decorrelation learning rule for RNNs[7] are increasingly summarized under the name reservoir computing. Schiller and Steil[7] also demonstrated that in conventional training approaches for RNNs, in which all weights (not only output weights) are adapted, the dominant changes are in the output weights. In cognitive neuroscience, Peter F. Dominey analysed a related process in the modelling of sequence processing in the mammalian brain, in particular speech recognition in the human brain.[8] The basic idea also included a model of temporal input discrimination in biological neuronal networks.[9] An early clear formulation of the reservoir computing idea is due to K. Kirby, who disclosed this concept in a largely forgotten conference contribution.[10] The first formulation of the reservoir computing idea known today stems from L. Schomaker,[11] who described how a desired target output could be obtained from an RNN by learning to combine signals from a randomly configured ensemble of spiking neural oscillators.[2] Echo state networks can be built in different ways.
They can be set up with or without directly trainable input-to-output connections, with or without output-to-reservoir feedback, with different neuron types, different reservoir-internal connectivity patterns, and so on. The output weights can be calculated by linear regression, using either online or offline algorithms. In addition to least-squares solutions, margin-maximization criteria, as used in training support vector machines, can be employed to determine the output weights.[12] Other variants of echo state networks seek to change the formulation to better match common models of physical systems, such as those typically defined by differential equations. Work in this direction includes echo state networks which partially include physical models,[13] hybrid echo state networks,[14] and continuous-time echo state networks.[15] The fixed RNN acts as a random, nonlinear medium whose dynamic response, the "echo", is used as a signal basis. The linear combination of this basis can be trained to reconstruct the desired output by minimizing some error criterion.[2] RNNs were rarely used in practice before the introduction of the ESN, because of the complexity involved in adjusting their connections (e.g., lack of autodifferentiation, susceptibility to vanishing/exploding gradients, etc.). RNN training algorithms were slow and often vulnerable to issues such as bifurcations.[16] Convergence could therefore not be guaranteed. ESN training, on the other hand, does not suffer from bifurcations and is easy to implement. In early studies, ESNs were shown to perform well on time series prediction tasks from synthetic datasets.[1][17] Today, many of the problems that made RNNs slow and error-prone have been addressed with the advent of autodifferentiation (deep learning) libraries, as well as more stable architectures such as long short-term memory and gated recurrent units; thus, the unique selling point of ESNs has been lost.
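The training scheme described above (a fixed random reservoir whose output weights are fitted by ridge regression) can be sketched in a few lines. The hyperparameters, the dense reservoir matrix, and the toy next-step prediction task are illustrative assumptions:

```python
import numpy as np

# Minimal echo state network sketch: only the output weights are trained,
# by ridge regression; the reservoir is random and fixed. A dense reservoir
# matrix and a toy sine-prediction task are used for simplicity.
rng = np.random.default_rng(1)
n_res, spectral_radius, ridge = 200, 0.9, 1e-6

W = rng.standard_normal((n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state scaling
W_in = rng.uniform(-0.5, 0.5, n_res)

u = np.sin(np.linspace(0, 20 * np.pi, 2000))   # input series
target = np.roll(u, -1)[:-1]                   # next-step target
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])           # reservoir update
    states.append(x.copy())
X = np.array(states[200:])                     # drop the initial transient
Y = target[200:]

# Ridge-regression readout: W_out = (X^T X + ridge * I)^-1 X^T Y
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print(np.sqrt(np.mean((pred - Y) ** 2)))       # small training RMSE
```

Because the error is quadratic in the output weights, the fit is a single linear solve; no gradient descent through the recurrent dynamics is needed.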
RNNs have also proven themselves in several practical areas, such as language processing. To cope with tasks of similar complexity using reservoir computing methods would require memory of excessive size. ESNs are, however, used in some areas, such as signal processing applications. In particular, they have been widely used as a computing principle that mixes well with non-digital computing substrates. Since ESNs do not need to modify the parameters of the RNN, they make it possible to use many different objects as their nonlinear "reservoir": for example, optical microchips, mechanical nano-oscillators, polymer mixtures, or even artificial soft limbs.[2]
https://en.wikipedia.org/wiki/Echo_state_network
The following is a list of current and past, non-classified notable artificial intelligence projects.
https://en.wikipedia.org/wiki/List_of_artificial_intelligence_projects
A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time-varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time-varying input into a spatio-temporal pattern of activations in the network nodes. The spatio-temporal patterns of activation are read out by linear discriminant units. The soup of recurrently connected nodes will end up computing a large variety of nonlinear functions on the input. Given a large enough variety of such nonlinear functions, it is theoretically possible to obtain linear combinations (using the read-out units) that perform whatever mathematical operation is needed to accomplish a certain task, such as speech recognition or computer vision. The word liquid in the name comes from the analogy drawn to dropping a stone into a still body of water or other liquid. The falling stone will generate ripples in the liquid. The input (motion of the falling stone) has been converted into a spatio-temporal pattern of liquid displacement (ripples). LSMs have been put forward as a way to explain the operation of brains, and have been argued to be an improvement over the theory of artificial neural networks; criticisms have also been raised of LSMs as used in computational neuroscience. If a reservoir has fading memory and input separability, then, with the help of a readout, the liquid state machine can be proven to be a universal function approximator using the Stone–Weierstrass theorem.[1]
https://en.wikipedia.org/wiki/Liquid_state_machine
Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir.[1] After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output.[1] The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.[1] The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.[2] The first examples of reservoir neural networks demonstrated that randomly connected recurrent neural networks could be used for sensorimotor sequence learning,[3] and simple forms of interval and speech discrimination.[4][5] In these early models[4][5] the memory in the network took the form of both short-term synaptic plasticity and activity mediated by recurrent connections. In other early reservoir neural network models the memory of the recent stimulus history was provided solely by the recurrent activity.[3][6][7] Overall, the general concept of reservoir computing stems from the use of recursive connections within neural networks to create a complex dynamical system.[8] It is a generalisation of earlier neural network architectures such as recurrent neural networks, liquid state machines and echo state networks. Reservoir computing also extends to physical systems that are not networks in the classical sense, but rather continuous systems in space and/or time: e.g.
a literal "bucket of water" can serve as a reservoir that performs computations on inputs given as perturbations of the surface.[9] The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling.[8] However, training of recurrent neural networks is challenging and computationally expensive.[8] Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer.[8] A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. In recent years semiconductor lasers have attracted considerable interest, as computation can be fast and energy-efficient compared to electrical components. Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks.[10] These hold promise in quantum information processing, which is challenging for classical networks, but can also find application in solving classical problems.[10][11] In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid.[11] However, the nuclear spin experiments in [11] did not demonstrate quantum reservoir computing per se, as they did not involve processing of sequential data. Rather, the data were vector inputs, which makes this more accurately a demonstration of a quantum implementation of a random kitchen sinks[12] algorithm (also going by the name of extreme learning machines in some communities).
In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices.[11] In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.[13] Reservoir computers have been used for time-series analysis purposes. In particular, some of their usages involve chaotic time-series prediction,[14][15] separation of chaotic signals,[16] and link inference of networks from their dynamics.[17] The 'reservoir' in reservoir computing is the internal structure of the computer, and must have two properties: it must be made up of individual, non-linear units, and it must be capable of storing information. The non-linearity describes the response of each unit to input, which is what allows reservoir computers to solve complex problems. Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. The change in reaction due to the past allows the computers to be trained to complete specific tasks.[18] Reservoirs can be virtual or physical.[18] Virtual reservoirs are typically randomly generated and are designed like neural networks.[18][8] Virtual reservoirs can be designed to have non-linearity and recurrent loops, but, unlike neural networks, the connections between units are randomized and remain unchanged throughout computation.[18] Physical reservoirs are possible because of the inherent non-linearity of certain natural systems.
The interaction between ripples on the surface of water contains the nonlinear dynamics required in reservoir creation, and a pattern-recognition RC was developed by first inputting ripples with electric motors and then recording and analyzing the ripples in the readout.[1] The readout is a neural network layer that performs a linear transformation on the output of the reservoir.[1] The weights of the readout layer are trained by analyzing the spatiotemporal patterns of the reservoir after excitation by known inputs, and by utilizing a training method such as linear regression or ridge regression.[1] As its implementation depends on spatiotemporal reservoir patterns, the details of readout methods are tailored to each type of reservoir.[1] For example, the readout for a reservoir computer using a container of liquid as its reservoir might entail observing spatiotemporal patterns on the surface of the liquid.[1] An early example of reservoir computing was the context reverberation network.[19] In this architecture, an input layer feeds into a high-dimensional dynamical system which is read out by a trainable single-layer perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous reaction–diffusion system inspired by Alan Turing's model of morphogenesis. At the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the reaction–diffusion system served as the reservoir. The Tree Echo State Network (TreeESN) model represents a generalization of the reservoir computing framework to tree-structured data.[20] The liquid (i.e.
reservoir) of a chaotic liquid state machine (CLSM),[21][22] or chaotic reservoir, is made from chaotic spiking neurons which stabilize their activity by settling to a single hypothesis that describes the trained inputs of the machine. This is in contrast to general types of reservoirs that do not stabilize. The liquid stabilization occurs via synaptic plasticity and chaos control that govern neural connections inside the liquid. CLSMs showed promising results in learning sensitive time series data.[21][22] This type of information processing is most relevant when time-dependent input signals depart from the mechanism's internal dynamics.[23] These departures cause transients or temporary alterations which are represented in the device's output.[23] The extension of the reservoir computing framework towards deep learning, with the introduction of deep reservoir computing and of the Deep Echo State Network (DeepESN) model,[24][25][26][27] makes it possible to train models efficiently for hierarchical processing of temporal data, while also enabling investigation of the inherent role of layered composition in recurrent neural networks.
Quantum reservoir computing may use the nonlinear nature of quantum mechanical interactions or processes to form the characteristic nonlinear reservoirs,[10][11][28][13] but may also be done with linear reservoirs when the injection of the input to the reservoir creates the nonlinearity.[29] The marriage of machine learning and quantum devices is leading to the emergence of quantum neuromorphic computing as a new research area.[30] Gaussian states are a paradigmatic class of states of continuous-variable quantum systems.[31] Although they can nowadays be created and manipulated in, e.g., state-of-the-art optical platforms,[32] and are naturally robust to decoherence, it is well known that they are not sufficient for, e.g., universal quantum computing, because transformations that preserve the Gaussian nature of a state are linear.[33] Normally, linear dynamics would not be sufficient for nontrivial reservoir computing either. It is nevertheless possible to harness such dynamics for reservoir computing purposes by considering a network of interacting quantum harmonic oscillators and injecting the input by periodic state resets of a subset of the oscillators. With a suitable choice of how the states of this subset of oscillators depend on the input, the observables of the rest of the oscillators can become nonlinear functions of the input suitable for reservoir computing; indeed, thanks to the properties of these functions, even universal reservoir computing becomes possible by combining the observables with a polynomial readout function.[29] In principle, such reservoir computers could be implemented with controlled multimode optical parametric processes;[34] however, efficient extraction of the output from the system is challenging, especially in the quantum regime where measurement back-action must be taken into account.
In this architecture, randomized coupling between lattice sites grants the reservoir the "black box" property inherent to reservoir processors.[10] The reservoir is then excited by an incident optical field, which acts as the input. Readout occurs in the form of occupation numbers of lattice sites, which are naturally nonlinear functions of the input.[10] In this architecture, quantum mechanical coupling between spins of neighboring atoms within the molecular solid provides the non-linearity required to create the higher-dimensional computational space.[11] The reservoir is then excited by radio-frequency electromagnetic radiation tuned to the resonance frequencies of relevant nuclear spins.[11] Readout occurs by measuring the nuclear spin states.[11] The most prevalent model of quantum computing is the gate-based model, where quantum computation is performed by sequential applications of unitary quantum gates on qubits of a quantum computer.[35] A theory for the implementation of reservoir computing on a gate-based quantum computer, with proof-of-principle demonstrations on a number of IBM superconducting noisy intermediate-scale quantum (NISQ) computers,[36] has been reported in [13].
https://en.wikipedia.org/wiki/Reservoir_computing
Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures.[1][2][3][4][5][6][7][8] The parameter t{\displaystyle t} in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about {\displaystyle {\sqrt {t}}} have largely been smoothed away in the scale-space level at scale t{\displaystyle t}. The main type of scale space is the linear (Gaussian) scale space, which has wide applicability as well as the attractive property of being possible to derive from a small set of scale-space axioms. The corresponding scale-space framework encompasses a theory for Gaussian derivative operators, which can be used as a basis for expressing a large class of visual operations for computerized systems that process visual information. This framework also allows visual operations to be made scale invariant, which is necessary for dealing with the size variations that may occur in image data, because real-world objects may be of different sizes and in addition the distance between the object and the camera may be unknown and may vary depending on the circumstances.[9][10] The notion of scale space applies to signals of arbitrary numbers of variables. The most common case in the literature applies to two-dimensional images, which is what is presented here. Consider a given image f{\displaystyle f} where f(x,y){\displaystyle f(x,y)} is the greyscale value of the pixel at position (x,y){\displaystyle (x,y)}.
The linear (Gaussian) scale-space representation of f{\displaystyle f} is a family of derived signals L(x,y;t){\displaystyle L(x,y;t)} defined by the convolution of f(x,y){\displaystyle f(x,y)} with the two-dimensional Gaussian kernel {\displaystyle g(x,y;t)={\frac {1}{2\pi t}}e^{-(x^{2}+y^{2})/(2t)}} such that {\displaystyle L(\cdot ,\cdot ;t)=g(\cdot ,\cdot ;t)*f(\cdot ,\cdot ),} where the semicolon in the argument of L{\displaystyle L} implies that the convolution is performed only over the variables x,y{\displaystyle x,y}, while the scale parameter t{\displaystyle t} after the semicolon just indicates which scale level is being defined. This definition of L{\displaystyle L} works for a continuum of scales t≥0{\displaystyle t\geq 0}, but typically only a finite discrete set of levels in the scale-space representation would actually be considered. The scale parameter t=σ2{\displaystyle t=\sigma ^{2}} is the variance of the Gaussian filter and as a limit for t=0{\displaystyle t=0} the filter g{\displaystyle g} becomes an impulse function such that L(x,y;0)=f(x,y),{\displaystyle L(x,y;0)=f(x,y),} that is, the scale-space representation at scale level t=0{\displaystyle t=0} is the image f{\displaystyle f} itself. As t{\displaystyle t} increases, L{\displaystyle L} is the result of smoothing f{\displaystyle f} with a larger and larger filter, thereby removing more and more of the details that the image contains. Since the standard deviation of the filter is σ=t{\displaystyle \sigma ={\sqrt {t}}}, details that are significantly smaller than this value are to a large extent removed from the image at scale parameter t{\displaystyle t}; see the following figures and [11] for graphical illustrations. When faced with the task of generating a multi-scale representation one may ask: could any filter g of low-pass type and with a parameter t which determines its width be used to generate a scale space? The answer is no, as it is of crucial importance that the smoothing filter does not introduce new spurious structures at coarse scales that do not correspond to simplifications of corresponding structures at finer scales.
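The definition above can be sketched numerically: L(·,·;t) is computed by convolving f with a sampled two-dimensional Gaussian of variance t. Periodic boundary conditions (FFT-based convolution) are an assumption made here for simplicity:

```python
import numpy as np

# Minimal sketch of the Gaussian scale-space representation L(.,.;t):
# convolution of f with a sampled 2-D Gaussian kernel of variance t,
# using periodic boundaries via the FFT (an assumption for simplicity).
def scale_space(f, t):
    n0, n1 = f.shape
    x0 = np.fft.fftfreq(n0) * n0            # signed pixel offsets
    x1 = np.fft.fftfreq(n1) * n1
    g = np.exp(-(x0[:, None] ** 2 + x1[None, :] ** 2) / (2 * t))
    g /= g.sum()                            # normalize the sampled kernel
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

f = np.random.default_rng(0).random((64, 64))
L4, L16 = scale_space(f, 4.0), scale_space(f, 16.0)
print(f.std() > L4.std() > L16.std())       # True: larger t removes more detail
```

Because the kernel is normalized, the mean intensity is preserved while fine-scale variation is progressively suppressed as t grows.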
In the scale-space literature, a number of different ways to formulate this criterion in precise mathematical terms have been proposed. The conclusion from several different axiomatic derivations that have been presented is that the Gaussian scale space constitutes the canonical way to generate a linear scale space, based on the essential requirement that new structures must not be created when going from a fine scale to any coarser scale.[1][3][4][6][9][12][13][14][15][16][17][18][19] Conditions, referred to as scale-space axioms, that have been used for deriving the uniqueness of the Gaussian kernel include linearity, shift invariance, semi-group structure, non-enhancement of local extrema, scale invariance and rotational invariance. In the works,[15][20][21] the uniqueness claimed in the arguments based on scale invariance has been criticized, and alternative self-similar scale-space kernels have been proposed. The Gaussian kernel is, however, a unique choice according to the scale-space axiomatics based on causality[3] or non-enhancement of local extrema.[16][18] Equivalently, the scale-space family can be defined as the solution of the diffusion equation (for example in terms of the heat equation) {\displaystyle \partial _{t}L={\tfrac {1}{2}}\nabla ^{2}L={\tfrac {1}{2}}\left(\partial _{xx}+\partial _{yy}\right)L,} with initial condition L(x,y;0)=f(x,y){\displaystyle L(x,y;0)=f(x,y)}. This formulation of the scale-space representation L means that it is possible to interpret the intensity values of the image f as a "temperature distribution" in the image plane and that the process that generates the scale-space representation as a function of t corresponds to heat diffusion in the image plane over time t (assuming the thermal conductivity of the material equal to the arbitrarily chosen constant ⁠1/2⁠).
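The diffusion-equation formulation can be checked numerically: explicit Euler steps of the heat equation with conductivity 1/2 should reproduce Gaussian smoothing at scale t = (number of steps) × (step size). The grid size, step size, and periodic boundaries are illustrative assumptions:

```python
import numpy as np

# Sketch: evolving dL/dt = (1/2) * Laplacian(L) by explicit Euler steps
# approximates Gaussian smoothing at scale t = n_steps * dt.
# Periodic boundaries assumed; dt chosen well inside the stability limit.
rng = np.random.default_rng(3)
f = rng.random((32, 32))

def laplacian(L):
    # standard 5-point discrete Laplacian with periodic wrap-around
    return (np.roll(L, 1, 0) + np.roll(L, -1, 0) +
            np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4 * L)

dt, n_steps = 0.1, 40                        # total scale t = 4.0
L = f.copy()
for _ in range(n_steps):
    L = L + 0.5 * dt * laplacian(L)

# Reference: periodic Gaussian smoothing at variance t = 4.0 via the FFT
t = dt * n_steps
k = np.fft.fftfreq(32) * 32
g = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2 * t))
g /= g.sum()
L_ref = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))
print(np.max(np.abs(L - L_ref)))             # small discretization error
```

The residual comes only from time and space discretization; in the continuum limit the two constructions agree exactly, which is the equivalence stated above.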
Although this connection may appear superficial for a reader not familiar with differential equations, it is indeed the case that the main scale-space formulation in terms of non-enhancement of local extrema is expressed in terms of a sign condition on partial derivatives in the 2+1-D volume generated by the scale space, thus within the framework of partial differential equations. Furthermore, a detailed analysis of the discrete case shows that the diffusion equation provides a unifying link between continuous and discrete scale spaces, which also generalizes to nonlinear scale spaces, for example, using anisotropic diffusion. Hence, one may say that the primary way to generate a scale space is by the diffusion equation, and that the Gaussian kernel arises as the Green's function of this specific partial differential equation. The motivation for generating a scale-space representation of a given data set originates from the basic observation that real-world objects are composed of different structures at different scales. This implies that real-world objects, in contrast to idealized mathematical entities such as points or lines, may appear in different ways depending on the scale of observation. For example, the concept of a "tree" is appropriate at the scale of meters, while concepts such as leaves and molecules are more appropriate at finer scales. For a computer vision system analysing an unknown scene, there is no way to know a priori what scales are appropriate for describing the interesting structures in the image data. Hence, the only reasonable approach is to consider descriptions at multiple scales in order to be able to capture the unknown scale variations that may occur. Taken to the limit, a scale-space representation considers representations at all scales.[9] Another motivation for the scale-space concept originates from the process of performing a physical measurement on real-world data.
In order to extract any information from a measurement process, one has to apply operators of non-infinitesimal size to the data. In many branches of computer science and applied mathematics, the size of the measurement operator is disregarded in the theoretical modelling of a problem. The scale-space theory on the other hand explicitly incorporates the need for a non-infinitesimal size of the image operators as an integral part of any measurement as well as any other operation that depends on a real-world measurement.[5] There is a close link between scale-space theory and biological vision. Many scale-space operations show a high degree of similarity with receptive field profiles recorded from the mammalian retina and the first stages in the visual cortex. In these respects, the scale-space framework can be seen as a theoretically well-founded paradigm for early vision, which in addition has been thoroughly tested by algorithms and experiments.[4][9] At any scale in scale space, we can apply local derivative operators to the scale-space representation: {\displaystyle L_{x^{m}y^{n}}(x,y;t)=\left(\partial _{x^{m}y^{n}}L\right)(x,y;t).} Due to the commutative property between the derivative operator and the Gaussian smoothing operator, such scale-space derivatives can equivalently be computed by convolving the original image with Gaussian derivative operators. For this reason they are often also referred to as Gaussian derivatives: {\displaystyle L_{x^{m}y^{n}}(\cdot ,\cdot ;t)=\partial _{x^{m}y^{n}}g(\cdot ,\cdot ;t)*f(\cdot ,\cdot ).} The uniqueness of the Gaussian derivative operators as local operations derived from a scale-space representation can be obtained by similar axiomatic derivations as are used for deriving the uniqueness of the Gaussian kernel for scale-space smoothing.[4][22] These Gaussian derivative operators can in turn be combined by linear or non-linear operators into a larger variety of different types of feature detectors, which in many cases can be well modelled by differential geometry.
Specifically, invariance (or more appropriately covariance) to local geometric transformations, such as rotations or local affine transformations, can be obtained by considering differential invariants under the appropriate class of transformations or alternatively by normalizing the Gaussian derivative operators to a locally determined coordinate frame determined from e.g. a preferred orientation in the image domain, or by applying a preferred local affine transformation to a local image patch (see the article on affine shape adaptation for further details). When Gaussian derivative operators and differential invariants are used in this way as basic feature detectors at multiple scales, the uncommitted first stages of visual processing are often referred to as a visual front-end. This overall framework has been applied to a large variety of problems in computer vision, including feature detection, feature classification, image segmentation, image matching, motion estimation, computation of shape cues and object recognition. The set of Gaussian derivative operators up to a certain order is often referred to as the N-jet and constitutes a basic type of feature within the scale-space framework.
Following the idea of expressing visual operations in terms of differential invariants computed at multiple scales using Gaussian derivative operators, we can express an edge detector from the set of points that satisfy the requirement that the gradient magnitude should assume a local maximum in the gradient direction. By working out the differential geometry, it can be shown[4] that this differential edge detector can equivalently be expressed from the zero-crossings of the second-order differential invariant {\displaystyle {\tilde {L}}_{vv}=L_{x}^{2}L_{xx}+2L_{x}L_{y}L_{xy}+L_{y}^{2}L_{yy}=0} that satisfy the following sign condition on a third-order differential invariant: {\displaystyle {\tilde {L}}_{vvv}=L_{x}^{3}L_{xxx}+3L_{x}^{2}L_{y}L_{xxy}+3L_{x}L_{y}^{2}L_{xyy}+L_{y}^{3}L_{yyy}<0.} Similarly, multi-scale blob detectors at any given fixed scale[23][9] can be obtained from local maxima and local minima of either the Laplacian operator (also referred to as the Laplacian of Gaussian) {\displaystyle \nabla ^{2}L=L_{xx}+L_{yy}} or the determinant of the Hessian matrix {\displaystyle \det HL=L_{xx}L_{yy}-L_{xy}^{2}.} In an analogous fashion, corner detectors and ridge and valley detectors can be expressed as local maxima, minima or zero-crossings of multi-scale differential invariants defined from Gaussian derivatives. The algebraic expressions for the corner and ridge detection operators are, however, somewhat more complex and the reader is referred to the articles on corner detection and ridge detection for further details. Scale-space operations have also been frequently used for expressing coarse-to-fine methods, in particular for tasks such as image matching and for multi-scale image segmentation. The theory presented so far describes a well-founded framework for representing image structures at multiple scales. In many cases it is, however, also necessary to select locally appropriate scales for further analysis. This need for scale selection originates from two major reasons: (i) real-world objects may have different size, and this size may be unknown to the vision system, and (ii) the distance between the object and the camera can vary, and this distance information may also be unknown a priori.
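The Laplacian blob detector described above can be illustrated numerically: for a bright blob, the Laplacian of the smoothed image attains a minimum at the blob centre. The synthetic test image, the chosen scale, and the periodic-boundary smoothing are assumptions for illustration:

```python
import numpy as np

# Sketch of blob detection via the Laplacian of the Gaussian-smoothed image.
# A synthetic Gaussian blob is placed at row 24, column 40 (an assumption).
n = 64
y, x = np.mgrid[:n, :n]
f = np.exp(-((x - 40) ** 2 + (y - 24) ** 2) / (2 * 3.0 ** 2))

def smooth(f, t):
    # Gaussian smoothing at variance t with periodic boundaries (FFT)
    k = np.fft.fftfreq(f.shape[0]) * f.shape[0]
    g = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2 * t))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

L = smooth(f, 4.0)
lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0) +
       np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4 * L)   # discrete Laplacian
iy, ix = np.unravel_index(np.argmin(lap), lap.shape)   # bright blob: Laplacian minimum
print(iy, ix)                                          # recovers the blob centre
```

A dark blob would instead correspond to a Laplacian maximum, and the determinant of the Hessian can be used in the same argmin/argmax fashion.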
A highly useful property of scale-space representation is that image representations can be made invariant to scales, by performing automatic local scale selection[9][10][23][24][25][26][27][28] based on local maxima (or minima) over scales of scale-normalized derivatives, defined by the γ{\displaystyle \gamma }-normalized derivative operators {\displaystyle \partial _{\xi }=t^{\gamma /2}\,\partial _{x},\qquad \partial _{\eta }=t^{\gamma /2}\,\partial _{y},} where γ∈[0,1]{\displaystyle \gamma \in [0,1]} is a parameter that is related to the dimensionality of the image feature. It can be theoretically shown that a scale selection module working according to this principle will satisfy the following scale covariance property: if for a certain type of image feature a local maximum is assumed in a certain image at a certain scale t0{\displaystyle t_{0}}, then under a rescaling of the image by a scale factor s{\displaystyle s} the local maximum over scales in the rescaled image will be transformed to the scale level s2t0{\displaystyle s^{2}t_{0}}.[23] Following this approach of gamma-normalized derivatives, it can be shown that different types of scale-adaptive and scale-invariant feature detectors[9][10][23][24][25][29][30][27] can be expressed for tasks such as blob detection, corner detection, ridge detection, edge detection and spatio-temporal interest point detection (see the specific articles on these topics for in-depth descriptions of how these scale-invariant feature detectors are formulated). Furthermore, the scale levels obtained from automatic scale selection can be used for determining regions of interest for subsequent affine shape adaptation[31] to obtain affine-invariant interest points[32][33] or for determining scale levels for computing associated image descriptors, such as locally scale-adapted N-jets.
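Automatic scale selection with the scale-normalized Laplacian (γ = 1) can be sketched as follows: for a Gaussian blob of variance t0, the response t·∇²L at the blob centre is strongest near t = t0. The test image, scale grid, and periodic-boundary smoothing are assumptions for illustration:

```python
import numpy as np

# Sketch of automatic scale selection (gamma = 1): the scale-normalized
# Laplacian response t * lap(L) at the centre of a Gaussian blob of
# variance t0 peaks near t = t0. The blob and scale grid are assumptions.
n, t0 = 96, 9.0                         # blob standard deviation = 3 pixels
y, x = np.mgrid[:n, :n]
f = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * t0))

def smooth(f, t):
    k = np.fft.fftfreq(n) * n
    g = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2 * t))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

scales = np.linspace(2.0, 30.0, 57)
resp = []
for t in scales:
    L = smooth(f, t)
    lap = (np.roll(L, 1, 0) + np.roll(L, -1, 0) +
           np.roll(L, 1, 1) + np.roll(L, -1, 1) - 4 * L)
    resp.append(abs(t * lap[n // 2, n // 2]))   # scale-normalized response
t_sel = scales[int(np.argmax(resp))]
print(t_sel)                                    # close to t0 = 9
```

This is the scale covariance property stated above in action: rescaling the blob by a factor s would move the selected scale to s²·t0.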
Recent work has shown that more complex operations, such as scale-invariant object recognition, can also be performed in this way, by computing local image descriptors (N-jets or local histograms of gradient directions) at scale-adapted interest points obtained from scale-space extrema of the normalized Laplacian operator (see also scale-invariant feature transform[34]) or the determinant of the Hessian (see also SURF);[35] see also the Scholarpedia article on the scale-invariant feature transform[36] for a more general outlook of object recognition approaches based on receptive field responses[19][37][38][39] in terms of Gaussian derivative operators or approximations thereof. An image pyramid is a discrete representation in which a scale space is sampled in both space and scale. For scale invariance, the scale factors should be sampled exponentially, for example as integer powers of 2 or √2. When properly constructed, the ratio of the sample rates in space and scale is held constant so that the impulse response is identical in all levels of the pyramid.[40][41][42][43] Fast, O(N), algorithms exist for computing a scale-invariant image pyramid, in which the image or signal is repeatedly smoothed then subsampled. Values for scale space between pyramid samples can easily be estimated using interpolation within and between scales, allowing for scale and position estimates with sub-resolution accuracy.[43] In a scale-space representation, the existence of a continuous scale parameter makes it possible to track zero crossings over scales, leading to so-called deep structure.
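The repeated smooth-then-subsample construction of an image pyramid can be sketched in a few lines; the smoothing scale per level and the periodic-boundary filter are illustrative assumptions:

```python
import numpy as np

# Minimal Gaussian pyramid sketch: smooth, then subsample by 2 at each level,
# so each level represents the image at roughly double the previous scale.
def smooth(f, t):
    # Gaussian smoothing at variance t with periodic boundaries (FFT)
    k0 = np.fft.fftfreq(f.shape[0]) * f.shape[0]
    k1 = np.fft.fftfreq(f.shape[1]) * f.shape[1]
    g = np.exp(-(k0[:, None] ** 2 + k1[None, :] ** 2) / (2 * t))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

def pyramid(f, levels=4, t=1.0):
    out = [f]
    for _ in range(levels - 1):
        f = smooth(f, t)[::2, ::2]      # anti-alias smoothing, then subsample
        out.append(f)
    return out

levels = pyramid(np.random.default_rng(4).random((64, 64)))
print([l.shape for l in levels])        # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Each subsampling halves the spatial sample rate while the pre-smoothing doubles the represented scale, which keeps the ratio of spatial and scale sampling constant across levels, as described above.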
For features defined as zero-crossings of differential invariants, the implicit function theorem directly defines trajectories across scales,[4][44] and at those scales where bifurcations occur, the local behaviour can be modelled by singularity theory.[4][44][45][46][47] Extensions of linear scale-space theory concern the formulation of non-linear scale-space concepts more committed to specific purposes.[48][49] These non-linear scale-spaces often start from the equivalent diffusion formulation of the scale-space concept, which is subsequently extended in a non-linear fashion. A large number of evolution equations have been formulated in this way, motivated by different specific requirements (see the abovementioned book references for further information). However, not all of these non-linear scale-spaces satisfy similar "nice" theoretical requirements as the linear Gaussian scale-space concept. Hence, unexpected artifacts may sometimes occur, and one should be careful not to use the term "scale-space" for just any type of one-parameter family of images. A first-order extension of the isotropic Gaussian scale space is provided by the affine (Gaussian) scale space.[4] One motivation for this extension originates from the common need for computing image descriptors for real-world objects that are viewed under a perspective camera model. To handle such non-linear deformations locally, partial invariance (or more correctly covariance) to local affine deformations can be achieved by considering affine Gaussian kernels with their shapes determined by the local image structure;[31] see the article on affine shape adaptation for theory and algorithms. Indeed, this affine scale space can also be expressed from a non-isotropic extension of the linear (isotropic) diffusion equation, while still being within the class of linear partial differential equations.
There exists a more general extension of the Gaussian scale-space model to affine and spatio-temporal scale-spaces.[4][31][18][19][50] In addition to variabilities over scale, which original scale-space theory was designed to handle, this generalized scale-space theory[19] also comprises other types of variabilities caused by geometric transformations in the image formation process, including variations in viewing direction approximated by local affine transformations, and relative motions between objects in the world and the observer, approximated by local Galilean transformations. This generalized scale-space theory leads to predictions about receptive field profiles in good qualitative agreement with receptive field profiles measured by cell recordings in biological vision.[51][52][50][53] There are strong relations between scale-space theory and wavelet theory, although these two notions of multi-scale representation have been developed from somewhat different premises. There has also been work on other multi-scale approaches, such as pyramids and a variety of other kernels, that do not satisfy the same requirements as true scale-space descriptions do. There are interesting relations between scale-space representation and biological vision and hearing.
Neurophysiological studies of biological vision have shown that there are receptive field profiles in the mammalian retina and visual cortex that can be well modelled by linear Gaussian derivative operators, in some cases also complemented by a non-isotropic affine scale-space model, a spatio-temporal scale-space model and/or non-linear combinations of such linear operators.[18][51][52][50][53][54][55][56][57] Regarding biological hearing, there are receptive field profiles in the inferior colliculus and the primary auditory cortex that can be well modelled by spectro-temporal receptive fields expressed as Gaussian derivatives over logarithmic frequencies and windowed Fourier transforms over time, with the window functions being temporal scale-space kernels.[58][59] In the area of classical computer vision, scale-space theory has established itself as a theoretical framework for early vision, with Gaussian derivatives constituting a canonical model for the first layer of receptive fields.
With the introduction of deep learning, there has also been work on using Gaussian derivatives or Gaussian kernels as a general basis for receptive fields in deep networks.[60][61][62][63][64] Using the transformation properties of the Gaussian derivatives and Gaussian kernels under scaling transformations, it is in this way possible to obtain scale covariance/equivariance and scale invariance of the deep network, to handle image structures at different scales in a theoretically well-founded manner.[62][63] There have also been approaches developed to obtain scale covariance/equivariance and scale invariance with learned filters combined with multiple scale channels.[65][66][67][68][69][70] Specifically, using the notions of scale covariance/equivariance and scale invariance, it is possible to make deep networks operate robustly at scales not spanned by the training data, thus enabling scale generalization.[62][63][67][69] For processing pre-recorded temporal signals or video, the Gaussian kernel can also be used for smoothing and suppressing fine-scale structures over the temporal domain, since the data are pre-recorded and available in all directions. When processing temporal signals or video in real-time situations, the Gaussian kernel cannot, however, be used for temporal smoothing, since it would access data from the future, which obviously cannot be available. For temporal smoothing in real-time situations, one can instead use the temporal kernel referred to as the time-causal limit kernel,[71] which possesses similar properties in a time-causal situation (non-creation of new structures towards increasing scale, and temporal scale covariance) as the Gaussian kernel obeys in the non-causal case. The time-causal limit kernel corresponds to convolution with an infinite number of truncated exponential kernels coupled in cascade, with specifically chosen time constants to obtain temporal scale covariance.
For discrete data, this kernel can often be numerically well approximated by a small set of first-order recursive filters coupled in cascade; see[71] for further details. For an earlier approach to handling temporal scales in a time-causal way, by performing Gaussian smoothing over a logarithmically transformed temporal axis, which however does not have any known memory-efficient time-recursive implementation as the time-causal limit kernel has, see.[72] When implementing scale-space smoothing in practice, there are a number of different approaches that can be taken, in terms of continuous or discrete Gaussian smoothing, implementation in the Fourier domain, pyramids based on binomial filters that approximate the Gaussian, or recursive filters. More details about this are given in a separate article on scale space implementation.
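The cascade of first-order recursive filters mentioned above can be sketched as follows. The filter update and the particular geometric sequence of time constants are illustrative assumptions, not the exact parameter choice of the cited work; the point of the sketch is the time causality of the result.

```python
def first_order_filter(signal, mu):
    """Causal first-order recursive filter
    y[n] = y[n-1] + (x[n] - y[n-1]) / (1 + mu),
    i.e. smoothing with a truncated exponential kernel of time
    constant mu, using only present and past samples."""
    y, out = 0.0, []
    for x in signal:
        y = y + (x - y) / (1.0 + mu)
        out.append(y)
    return out

def cascade(signal, time_constants):
    """Couple several such filters in cascade.  A geometric sequence of
    time constants is used below as an illustrative stand-in for the
    specific choice that yields temporal scale covariance."""
    for mu in time_constants:
        signal = first_order_filter(signal, mu)
    return signal

impulse = [0.0] * 10 + [1.0] + [0.0] * 40
response = cascade(impulse, [0.5, 1.0, 2.0, 4.0])
# time causality: the response is exactly zero before the input arrives,
# unlike smoothing with a (two-sided) Gaussian kernel
```

Each stage is a constant-memory recursion, which is what makes this family of kernels attractive for real-time temporal smoothing.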
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationships among the electrical activities of the neurons in the ensemble.[1][2] Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.[3] Neurons have an ability, uncommon among the cells of the body, to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in the presence of external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials because information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons.[4] Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies.
If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time.[5] The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly.[6] The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing, statistical methods and methods of probability theory and stochastic point processes have been widely applied. With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation.[7][8][9] Neuroscientists have initiated several large-scale brain decoding projects.[10][11] The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.[citation needed] A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes.
They may be locked to an external stimulus, such as in the visual[12] and auditory systems, or be generated intrinsically by the neural circuitry.[13] Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.[14] The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding. Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity.[15] Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.[6] During rate coding, precisely calculating the firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (rate as a single-neuron spike count) or an average over several repetitions of the experiment (rate of the PSTH). In rate coding, learning is based on activity-dependent synaptic weight modifications. Rate coding was originally demonstrated by Edgar Adrian and Yngve Zotterman in 1926.[16] In this simple experiment, different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased.
From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication. In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.[6] The spike-count rate, also referred to as the temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial.[14] The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models'[14]). The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism, and this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changing on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze.
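In code, the spike-count rate defined above is just a count divided by the window length T. This is a minimal sketch; the spike times (in seconds) are hypothetical.

```python
def spike_count_rate(spike_times, t_window):
    """Spike-count rate: the number of spikes in the trial divided by
    the window length T (a temporal average over a single trial)."""
    return sum(1 for t in spike_times if 0.0 <= t < t_window) / t_window

# hypothetical spike times (seconds) within a T = 500 ms window
rate = spike_count_rate([0.012, 0.087, 0.153, 0.281, 0.412], t_window=0.5)
# 5 spikes / 0.5 s = 10 spikes per second
```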
The image projected onto the retinal photoreceptors therefore changes every few hundred milliseconds (Chapter 1.5 in[14]). Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate). There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods.[17][18] There is also evidence from retinal cells that information is encoded not only in the firing rate but also in spike timing.[19] More generally, whenever a rapid response of an organism is required, a firing rate defined as a spike count over a few hundred milliseconds is simply too slow.[14] The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval.[14] It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a peri-stimulus time histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes nK(t; t+Δt) summed over all repetitions of the experiment, divided by the number K of repetitions, is a measure of the typical activity of the neuron between time t and t+Δt.
A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in[14]). For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval. As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain. Neurons cannot wait for the stimuli to be repeatedly presented in exactly the same manner before generating a response.[14] Nevertheless, the experimental time-dependent firing rate measure can make sense if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.
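The PSTH-based time-dependent firing rate r(t) described above can be computed as follows; this is a minimal sketch with hypothetical spike times and K = 3 repetitions of the same stimulation sequence.

```python
def psth_rate(trials, dt, duration):
    """Time-dependent firing rate r(t): spikes falling in [t, t + dt),
    summed over all K trials, then divided by K and by dt; this equals
    the spike density of the PSTH."""
    n_bins = int(round(duration / dt))
    counts = [0] * n_bins
    for spikes in trials:
        for t in spikes:
            if 0.0 <= t < duration:
                counts[int(t / dt)] += 1
    k = len(trials)
    return [c / (k * dt) for c in counts]

# K = 3 repetitions of the same stimulation sequence (times in seconds)
trials = [[0.011, 0.052], [0.013, 0.055], [0.012]]
r = psth_rate(trials, dt=0.01, duration=0.1)
# all three trials spike in the 10-20 ms bin: r[1] = 3 / (3 * 0.01) = 100 spikes/s
```

Note that with small Δt each trial contributes at most one spike per bin, so r(t)Δt can also be read as the fraction of trials with a spike in that bin, as stated in the text.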
When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code.[14][20] A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding.[3][21][19] Such codes, which communicate via the time between spikes, are also referred to as interpulse interval codes, and have been supported by recent studies.[22] Neurons exhibit high-frequency fluctuations of firing rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options.[23] Temporal coding supplies an alternative explanation for the "noise", suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences (6 spikes per 12 time bins).[24] Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow.[19] In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train.
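The point about identical rates but distinct temporal patterns can be checked directly on the two sequences from the text:

```python
a = "000111000111"
b = "001100110011"

# under a rate code the two trains are indistinguishable:
# both carry 6 spikes over 12 time bins
spikes_a = a.count("1")
spikes_b = b.count("1")

# under a temporal code they differ: the spikes occupy different bins
def spike_positions(train):
    return [i for i, symbol in enumerate(train) if symbol == "1"]
```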
In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.[25] Temporal codes (also called spike codes[14]) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes.[26] As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing).[3][6] One way in which temporal codes are decoded, in the presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.[27] The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes[28] (and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.
In temporal coding, learning can be explained by activity-dependent synaptic delay modifications.[29] The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., they can be a special case of spike-timing-dependent plasticity.[30] The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of the time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal. For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rate coding for different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information.
This is not consistent with numerous organisms which are able to discriminate between stimuli in the time frame of milliseconds, suggesting that a rate code is not the only model at work.[24] To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and the first action potential, also called latency to first spike or time-to-first-spike.[31] This type of temporal coding has been shown also in the auditory and somatosensory systems. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations.[32] In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.[33] The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism.[34] Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium).
In this way, both rate coding and temporal coding may be used in the gustatory system: rate for basic tastant type, temporal for more specific differentiation.[35] Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales, only a few milliseconds in length, were found across small populations of neurons which correlated with certain information processing behaviors. However, little information could be determined from the patterns; one possible theory is that they represented the higher-order processing taking place in the brain.[25] As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as the average spike count over the course of the animal's sniffing was also a good identifier.[36] Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.[24] The specificity of temporal coding requires highly refined technology to measure informative, reliable experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution.
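Latency-to-first-spike coding, as described above for the retina and the olfactory bulb, reads out a single number per neuron. The sketch below is minimal; the spike times and stimulus onset are hypothetical.

```python
def time_to_first_spike(spike_times, stimulus_onset):
    """Latency code: the information-bearing quantity is the delay
    between stimulus onset and the first spike that follows it."""
    later = [t for t in spike_times if t >= stimulus_onset]
    return min(later) - stimulus_onset if later else None

# hypothetical spike times (seconds); stimulus switched on at t = 0.1 s
latency = time_to_first_spike([0.003, 0.045, 0.121, 0.180], stimulus_onset=0.1)
# first post-stimulus spike at 0.121 s, i.e. a latency of about 21 ms
```

Because only the first post-stimulus spike matters, such a read-out is available almost immediately after stimulus onset, which is the speed advantage the text attributes to this scheme.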
For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left).[37] Through optogenetics, researchers have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and can thereby test whether or not temporal coding occurs in specific neural circuits.[38] Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders.[38] If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates.[24] Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells controls brain activity more precisely than the intravenous addition of pharmacological agents.[37] Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations.
This type of code takes into account a time label for each spike according to a time reference based on the phase of local ongoing oscillations at low[39] or high frequencies.[40] It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count.[39][41] The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code, although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations at low frequencies. Phase-of-firing code is loosely based on the phase precession phenomena observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking among a group of sensory neurons, resulting in a firing sequence.[42] Phase code has been shown in visual cortex to also involve high-frequency oscillations.[42] Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.[42] Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience.
It grasps the essential features of neural coding and yet is simple enough for theoretical analysis.[43] Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain. For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion.[44] In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, making it immune to the fluctuations existing in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However, it fires fastest for one direction and more slowly depending on how close the target was to the neuron's "preferred" direction.[45][46] If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion.[citation needed] This particular population code is referred to as population vector coding. Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. This exploits both the place, or tuning, within the auditory nerve, as well as the phase-locking within each auditory nerve fiber.
The first ALSR representation was for steady-state vowels;[47] ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch[48] and formant representations in consonant-vowel syllables.[49] The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding. Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously.[50] Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus. Typically an encoding function has a peak value such that the activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value.[citation needed] It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second-order correlations,[51] or even more detailed dependencies such as higher-order maximum entropy models[52] or copulas.[53] The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes.
Early work suggested that correlation between spike trains can only reduce, and never increase, the totalmutual informationpresent in the two spike trains about a stimulus feature.[54]However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign.[55]Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.[56] The independent-spike coding model ofneuronalfiring claims that each individualaction potential, or "spike", is independent of each other spike within thespike train.[20][57] A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate. This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. 
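The maximum-likelihood readout from Gaussian tuning curves described above can be illustrated with a short sketch. The neuron count, tuning width, peak rate, and Poisson noise model are assumed toy values, not a specific published decoder.

```python
import numpy as np

# Maximum-likelihood decoding from Gaussian tuning curves with Poisson
# noise (neuron count, tuning width, and rates are assumed toy values).
rng = np.random.default_rng(1)
means = np.linspace(0.0, 10.0, 50)   # preferred stimulus values
width = 1.5                          # tuning-curve width
peak_rate = 20.0                     # peak firing rate (spikes/s)

def tuning(stim):
    """Expected firing rate of each neuron for a given stimulus value."""
    return peak_rate * np.exp(-0.5 * ((stim - means) / width) ** 2)

true_stim = 4.2
counts = rng.poisson(tuning(true_stim))  # one noisy population response

# Evaluate the Poisson log-likelihood over a grid of candidate stimuli
grid = np.linspace(0.0, 10.0, 1001)
rate_grid = peak_rate * np.exp(-0.5 * ((grid[:, None] - means) / width) ** 2)
loglik = (counts * np.log(rate_grid + 1e-12) - rate_grid).sum(axis=1)
ml_estimate = grid[np.argmax(loglik)]
```

Reading out the stimulus as the grid value maximizing the likelihood is more robust to single-neuron noise than taking the preferred value of the single most active neuron.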
In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.[58] Dimensionality reduction and topological data analysis have revealed that the population code is constrained to low-dimensional manifolds,[59] sometimes also referred to as attractors. The position along the neural manifold correlates with certain behavioral conditions: head-direction neurons in the anterodorsal thalamic nucleus form a ring structure,[60] grid cells encoding spatial position in entorhinal cortex lie along the surface of a torus,[61] and motor cortex neurons encode hand movements[62] and preparatory activity.[63] The low-dimensional manifolds are known to change in a state-dependent manner, such as with eye closure in the visual cortex,[64] or breathing behavior in the ventral respiratory column.[65] The sparse code is when each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known. As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since, compared to traditional computers, information is massively distributed across neurons.
Sparse coding of natural images produceswavelet-like oriented filters that resemble thereceptive fieldsof simple cells in the visual cortex.[66]The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.[67] Given a potentially large set of input patterns, sparse coding algorithms (e.g.sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols. Most models of sparse coding are based on the linear generative model.[68]In this model, the symbols are combined in alinear fashionto approximate the input. More formally, given a k-dimensional set of real-numbered input vectorsξ→∈Rk{\displaystyle {\vec {\xi }}\in \mathbb {R} ^{k}}, the goal of sparse coding is to determine n k-dimensionalbasis vectorsb1→,…,bn→∈Rk{\displaystyle {\vec {b_{1}}},\ldots ,{\vec {b_{n}}}\in \mathbb {R} ^{k}}, corresponding to neuronal receptive fields, along with asparsen-dimensional vector of weights or coefficientss→∈Rn{\displaystyle {\vec {s}}\in \mathbb {R} ^{n}}for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector:ξ→≈∑j=1nsjb→j{\displaystyle {\vec {\xi }}\approx \sum _{j=1}^{n}s_{j}{\vec {b}}_{j}}.[69] The codings generated by algorithms implementing a linear generative model can be classified into codings withsoft sparsenessand those withhard sparseness.[68]These refer to the distribution of basis vector coefficients for typical inputs. 
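As a rough illustration of the linear generative model above, the following sketch recovers a sparse coefficient vector with greedy matching pursuit over a random dictionary. The dictionary, signal, and iteration count are illustrative assumptions, not any specific published algorithm or dataset.

```python
import numpy as np

# Minimal matching-pursuit sketch of the linear generative model:
# approximate an input vector as a sparse combination of basis vectors.
rng = np.random.default_rng(2)
k, n = 16, 64                              # input dim, number of basis vectors
B = rng.standard_normal((k, n))
B /= np.linalg.norm(B, axis=0)             # unit-norm basis vectors b_j

# Build a signal from 3 basis vectors, so a 3-sparse code exists
true_idx = [5, 20, 41]
x = B[:, true_idx] @ np.array([1.0, -0.7, 0.5])

def matching_pursuit(x, B, n_iter=3):
    """Greedy sparse coding: pick the best-matching atom, subtract, repeat."""
    s = np.zeros(B.shape[1])
    r = x.copy()                           # residual
    for _ in range(n_iter):
        j = np.argmax(np.abs(B.T @ r))     # atom most correlated with residual
        c = B[:, j] @ r
        s[j] += c
        r -= c * B[:, j]
    return s, r

s, r = matching_pursuit(x, B)
sparsity = np.count_nonzero(s)
```

The invariant `x = B @ s + r` holds at every step, so the sparse vector `s` plus the shrinking residual always reconstructs the input, matching the approximation ξ ≈ Σⱼ sⱼ bⱼ in the text.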
A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.[68] Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.[70] The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 × 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.[68] Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation-learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements, as well as those basic elements themselves.[71][72][73] Sparse coding may be a general strategy of neural systems to augment memory capacity.
To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specificassociative memoriesin which only a few neurons out of apopulationrespond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli. Theoretical work onsparse distributed memoryhas suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.[74]Experimentally, sparse representations of sensory information have been observed in many systems, including vision,[75]audition,[76]touch,[77]and olfaction.[78]However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain. In theDrosophilaolfactory system, sparse odor coding by theKenyon cellsof themushroom bodyis thought to generate a large number of precisely addressable locations for the storage of odor-specific memories.[79]Sparseness is controlled by a negative feedback circuit between Kenyon cells andGABAergicanterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.[80]
https://en.wikipedia.org/wiki/Sparse_coding
Inmachine learning, the termstochastic parrotis a metaphor to describe the claim thatlarge language models, though able to generate plausible language, do not understand the meaning of the language they process.[1][2]The term was coined byEmily M. Bender[2][3]in the 2021artificial intelligenceresearch paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender,Timnit Gebru, Angelina McMillan-Major, andMargaret Mitchell.[4] The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender,Timnit Gebru, Angelina McMillan-Major, andMargaret Mitchell(using the pseudonym "Shmargaret Shmitchell").[4]They argued thatlarge language models(LLMs) present dangers such as environmental and financial costs, inscrutability leading to unknown dangerous biases, and potential for deception, and that they can't understand the concepts underlying what they learn.[5] The word "stochastic" – from the ancient Greek "stokhastikos" ('based on guesswork') – is a term fromprobability theorymeaning "randomly determined".[6]The word "parrot" refers toparrots' ability tomimic human speech, without understanding its meaning.[6] In their paper,Benderet al. argue that LLMs are probabilistically linking words and sentences together without considering meaning. Therefore, they are labeled to be mere "stochastic parrots".[4]According to the machine learning professionals Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two vital limitations:[1][7] Lindholm et al. noted that, with poor quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".[1] Gebru was asked by Google to retract the paper or remove the names of Google employees from it. According toJeff Dean, the paper "didn't meet our bar for publication". In response, Gebru listed conditions to be met, stating that otherwise they could "work on a last date". 
Dean wrote that one of these conditions was for Google to disclose the reviewers of the paper and their specific feedback, which Google declined. Shortly after, she received an email saying that Google was "accepting her resignation". Her firing sparked a protest by Google employees, who believed the intent was to censor Gebru's criticism.[8] In July of 2021, theAlan Turing Institutehosted a keynote and panel discussion on the paper.[9]As of September 2024[update], the paper has been cited in 4,789 publications.[10]The term has been used in publications in the fields of law,[11]grammar,[12]narrative,[13]and humanities.[14]The authors continue to maintain their concerns about the dangers ofchatbotsbased on large language models, such asGPT-4.[15] Stochastic parrot is now aneologismused by AI skeptics to refer to machines' lack of understanding of the meaning of their outputs and is sometimes interpreted as a "slur against AI".[6]Its use expanded further whenSam Altman, CEO ofOpen AI, used the term ironically when he tweeted, "i am a stochastic parrot and so r u."[6]The term was then designated to be the 2023 AI-related Word of the Year for theAmerican Dialect Society, even over the words "ChatGPT" and "LLM".[6][16] The phrase is often referenced by some researchers to describe LLMs as pattern matchers that can generate plausible human-like text through their vast amount of training data, merely parroting in a stochastic fashion. However, other researchers argue that LLMs are, in fact, at least partially able to understand language.[17] Some LLMs, such as ChatGPT, have become capable of interacting with users in convincingly human-like conversations.[17]The development of these new systems has deepened the discussion of the extent to which LLMs understand or are simply "parroting". 
In the mind of a human being, words and language correspond to things one has experienced.[18]For LLMs, words may correspond only to other words and patterns of usage fed into their training data.[19][20][4]Proponents of the idea of stochastic parrots thus conclude that LLMs are incapable of actually understanding language.[19][4] The tendency of LLMs to pass off fake information as fact is held as support.[18]Calledhallucinations, LLMs will occasionally synthesize information that matches some pattern, but not reality.[19][20][18]That LLMs can’t distinguish fact and fiction leads to the claim that they can’t connect words to a comprehension of the world, as language should do.[19][18]Further, LLMs often fail to decipher complex or ambiguous grammar cases that rely on understanding the meaning of language.[19][20]As an example, borrowing from Saba et al., is the prompt:[19] The wet newspaper that fell down off the table is my favorite newspaper. But now that my favorite newspaper fired the editor I might not like reading it anymore. Can I replace ‘my favorite newspaper’ by ‘the wet newspaper that fell down off the table’ in the second sentence? LLMs respond to this in the affirmative, not understanding that the meaning of "newspaper" is different in these two contexts; it is first an object and second an institution.[19]Based on these failures, some AI professionals conclude they are no more than stochastic parrots.[19][18][4] One argument against the hypothesis that LLMs are stochastic parrot is their results onbenchmarksfor reasoning, common sense and language understanding. 
In 2023, some LLMs have shown good results on many language understanding tests, such as the Super General Language Understanding Evaluation (SuperGLUE).[20][21]Such tests, and the smoothness of many LLM responses, help as many as 51% of AI professionals believe they can truly understand language with enough data, according to a 2022 survey.[20] Another technique for investigating if LLMs can understand is termed "mechanistic interpretability". The idea is toreverse-engineera large language model to analyze how it internally processes the information. One example is Othello-GPT, where a smalltransformerwas trained to predict legalOthellomoves. It has been found that this model has an internal representation of the Othello board, and that modifying this representation changes the predicted legal Othello moves in the correct way. This supports the idea that LLMs have a "world model", and are not just doing superficial statistics.[22][23] In another example, a small transformer was trained on computer programs written in the programming languageKarel. Similar to the Othello-GPT example, this model developed an internal representation of Karel program semantics. Modifying this representation results in appropriate changes to the output. 
Additionally, the model generates correct programs that are, on average, shorter than those in the training set.[24] Researchers also studied "grokking", a phenomenon where an AI model initially memorizes the training data outputs, and then, after further training, suddenly finds a solution that generalizes to unseen data.[25] However, when tests created to test people for language comprehension are used to test LLMs, they sometimes result in false positives caused by spurious correlations within text data.[26]Models have shown examples of shortcut learning, which is when a system makes unrelated correlations within data instead of using human-like understanding.[27]One such experiment conducted in 2019 tested Google’sBERTLLM using the argument reasoning comprehension task. BERT was prompted to choose between 2 statements, and find the one most consistent with an argument. Below is an example of one of these prompts:[20][28] Argument: Felons should be allowed to vote. A person who stole a car at 17 should not be barred from being a full citizen for life.Statement A: Grand theft auto is a felony.Statement B: Grand theft auto is not a felony. Researchers found that specific words such as "not" hint the model towards the correct answer, allowing near-perfect scores when included but resulting in random selection when hint words were removed.[20][28]This problem, and the known difficulties defining intelligence, causes some to argue all benchmarks that find understanding in LLMs are flawed, that they all allow shortcuts to fake understanding.
https://en.wikipedia.org/wiki/Stochastic_parrot
Topological deep learning (TDL)[1][2][3][4][5][6] is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel at processing data on regular grids and sequences. However, scientific and real-world data often exhibit more intricate data domains, including point clouds, meshes, time series, scalar fields, graphs, and general topological spaces like simplicial complexes and CW complexes.[7] TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties.[8][9][10][11][12][13][14] The mathematical foundations of TDL are algebraic topology, differential topology, and geometric topology. TDL can therefore be generalized for data on differentiable manifolds, knots, links, tangles, curves, etc. Traditional deep-learning techniques often operate under the assumption that a dataset resides in a highly structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or a Euclidean space.
The prevalence of new types of data, in particulargraphs,meshes, andmolecules, resulted in the development of new techniques, culminating in the field ofgeometric deep learning, which originally proposed asignal-processingperspective for treating such data types.[15]While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended concepts to a larger variety of data types, includingsimplicial complexes[16][3]andCW complexes,[8][17]with recent work proposing a unified perspective ofmessage-passingon general combinatorial complexes.[1] An independent perspective on different types of data originated fromtopological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape," that is inherently aware of multiple scales in data, ranging fromlocalinformation toglobalinformation.[18]While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets to make them available for traditional machine-learning techniques, such assupport vector machinesorrandom forests. Such descriptors ranged from new techniques forfeature engineeringover new ways of providing suitablecoordinatesfor topological descriptors,[19][20][21]or the creation of more efficientdissimilarity measures.[22][23][24][25] Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existingdeep-learningmodels or obtaining novel ways of training on topological domains. Focusing ontopologyin the sense ofpoint set topology, an active branch of TDL is concerned with learningontopological spaces, that is, on different topological domains. One of the core concepts in topological deep learning is the domain upon which this data is defined and supported. 
In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel values of the image are supported. In a more general setting this domain might be a topological domain. Next, we introduce the most common topological domains encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes, and hypergraphs. Given a finite set S of abstract entities, a neighborhood function N{\displaystyle {\mathcal {N}}} on S is an assignment that attaches to every point x{\displaystyle x} in S a subset of S or a relation. Such a function can be induced by equipping S with an auxiliary structure. Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one-hop neighborhood notion. Edges are, however, limited in their modeling capacity, as they can only be used to model binary relations among entities of S, since every edge typically connects two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S. Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes.
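The contrast between binary edge relations and multi-way relations can be made concrete with a tiny hypergraph. The entity set and hyperedges below are a made-up example; the neighborhood of x is taken to be every entity sharing at least one hyperedge with x.

```python
# Toy hypergraph on a finite set S of abstract entities (assumed
# example data). Hyperedges relate more than two entities at once,
# unlike graph edges, which only model binary relations.
S = {"a", "b", "c", "d", "e"}
hyperedges = [{"a", "b", "c"}, {"c", "d"}, {"d", "e", "a"}]

def neighborhood(x):
    """Entities co-occurring with x in at least one hyperedge."""
    nbrs = set()
    for edge in hyperedges:
        if x in edge:
            nbrs |= edge
    return nbrs - {x}

n_c = neighborhood("c")   # neighbors of "c" via its two hyperedges
```

Here the three-element hyperedge {"a", "b", "c"} encodes a single three-way interaction, which a plain graph would have to approximate with three separate pairwise edges, losing the information that the interaction involved all three entities jointly.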
Each of the enumerated topological domains has its own characteristics, advantages, and limitations: The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations.[1] A rank function on a higher-order domainXis an order-preserving functionrk:X→Z, whererk(x) attaches a non-negative integer value to each relationxinX, preserving set inclusion inX. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations.[1] Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, we develop the combinatorial complex, a higher-order domain that features both hierarchies of relations and set-type relations.[1] The learning tasks in TDL can be broadly classified into three categories:[1] In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces. Central to TDL aretopological neural networks (TNNs), specialized architectures designed to operate on data structured in topological domains.[2][1]Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation. 
In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions. Definition: Higher-Order Message Passing on a General Topological Domain LetX{\displaystyle {\mathcal {X}}}be a topological domain. We define a set of neighborhood functionsN={N1,…,Nn}{\displaystyle {\mathcal {N}}=\{{\mathcal {N}}_{1},\ldots ,{\mathcal {N}}_{n}\}}onX{\displaystyle {\mathcal {X}}}. Consider a cellx{\displaystyle x}and lety∈Nk(x){\displaystyle y\in {\mathcal {N}}_{k}(x)}for someNk∈N{\displaystyle {\mathcal {N}}_{k}\in {\mathcal {N}}}. Amessagemx,y{\displaystyle m_{x,y}}between cellsx{\displaystyle x}andy{\displaystyle y}is a computation dependent on these two cells or the data supported on them. DenoteN(x){\displaystyle {\mathcal {N}}(x)}as the multi-set{{N1(x),…,Nn(x)}}{\displaystyle \{\!\!\{{\mathcal {N}}_{1}(x),\ldots ,{\mathcal {N}}_{n}(x)\}\!\!\}}, and lethx(l){\displaystyle \mathbf {h} _{x}^{(l)}}represent some data supported on cellx{\displaystyle x}at layerl{\displaystyle l}.Higher-order message passingonX{\displaystyle {\mathcal {X}}},[1][8]induced byN{\displaystyle {\mathcal {N}}}, is defined by the following four update rules: Some remarks on Definition above are as follows. First, Equation 1 describes how messages are computed between cellsx{\displaystyle x}andy{\displaystyle y}. The messagemx,y{\displaystyle m_{x,y}}is influenced by both the datahx(l){\displaystyle \mathbf {h} _{x}^{(l)}}andhy(l){\displaystyle \mathbf {h} _{y}^{(l)}}associated with cellsx{\displaystyle x}andy{\displaystyle y}, respectively. Additionally, it incorporates characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message passing frameworks. Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. 
The function⨁{\displaystyle \bigoplus }aggregates these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood. Third, Equation 3 outlines the process of combining messages from different neighborhoods. The function⨂{\displaystyle \bigotimes }aggregates messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships. Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the functionβ{\displaystyle \beta }updates the state of cellx{\displaystyle x}based on its current statehx(l){\displaystyle \mathbf {h} _{x}^{(l)}}and the aggregated messagemx{\displaystyle m_{x}}obtained from neighboring cells. While the majority of TNNs follow the message-passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al.[26] leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency without relying on message passing. Furthermore, in [27] a contrastive loss-based method was suggested to learn the simplicial representation. Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al.,[28] for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used to solve shape classification tasks, for instance.
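The four higher-order message-passing rules can be sketched on a toy domain. The two neighborhood functions, the tanh message and update functions, and the feature dimension below are all illustrative assumptions; real TNNs use learned functions and neighborhoods induced by the complex's structure.

```python
import numpy as np

# Minimal sketch of the four higher-order message-passing rules on a
# toy domain with two neighborhood functions (all values are assumptions).
rng = np.random.default_rng(3)
cells = ["x0", "x1", "x2", "x3"]
# Two neighborhood functions N1, N2 mapping each cell to neighbor lists
N1 = {"x0": ["x1"], "x1": ["x0", "x2"], "x2": ["x1"], "x3": []}
N2 = {"x0": ["x2", "x3"], "x1": [], "x2": ["x0"], "x3": ["x0"]}

d = 4
h = {c: rng.standard_normal(d) for c in cells}   # features h_x^(l)
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def step(h):
    h_next = {}
    for x in cells:
        per_nbhd = []
        for N, W in [(N1, W1), (N2, W2)]:
            # Rule 1: message m_{x,y} depends on h_x and h_y
            msgs = [np.tanh(W @ (h[x] + h[y])) for y in N[x]]
            # Rule 2: aggregate messages within one neighborhood
            per_nbhd.append(sum(msgs) if msgs else np.zeros(d))
        # Rule 3: combine the per-neighborhood aggregates
        m_x = sum(per_nbhd)
        # Rule 4: update the cell state from h_x and m_x
        h_next[x] = np.tanh(h[x] + m_x)
    return h_next

h1 = step(h)   # features h_x^(l+1) after one message-passing layer
```

Each comment marks which of the four update rules the line implements; stacking `step` gives deeper layers, just as in graph message passing, but with several neighborhood functions contributing per cell.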
Follow-up work expanded more on the theoretical properties of such descriptors and integrated them into the field ofrepresentation learning.[29]Other suchtopological layersinclude layers based on extended persistent homology descriptors,[30]persistence landscapes,[31]or coordinate functions.[32]In parallel,persistent homologyalso found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks.[33][34][35] TDL is rapidly finding new applications across different domains, including data compression,[36]enhancing the expressivity and predictive performance ofgraph neural networks,[16][17][33]action recognition,[37]and trajectory prediction.[38]
https://en.wikipedia.org/wiki/Topological_deep_learning
Instatistics,Hodges' estimator[1](or theHodges–Le Cam estimator[2]), named forJoseph Hodges, is a famouscounterexampleof anestimatorwhich is "superefficient",[3]i.e. it attains smaller asymptotic variance than regularefficient estimators. The existence of such a counterexample is the reason for the introduction of the notion ofregular estimators. Hodges' estimator improves upon a regular estimator at a single point. In general, any superefficient estimator may surpass a regular estimator at most on a set ofLebesgue measurezero.[4] Although Hodges discovered the estimator he never published it; the first publication was in the doctoral thesis ofLucien Le Cam.[5] Supposeθ^n{\displaystyle {\hat {\theta }}_{n}}is a "common" estimator for some parameterθ{\displaystyle \theta }: it isconsistent, and converges to someasymptotic distributionLθ{\displaystyle L_{\theta }}(usually this is anormal distributionwith mean zero and variance which may depend onθ{\displaystyle \theta }) at then{\displaystyle {\sqrt {n}}}-rate: Then theHodges' estimatorθ^nH{\displaystyle {\hat {\theta }}_{n}^{H}}is defined as[6] This estimator is equal toθ^n{\displaystyle {\hat {\theta }}_{n}}everywhere except on the small interval[−n−1/4,n−1/4]{\displaystyle [-n^{-1/4},n^{-1/4}]}, where it is equal to zero. It is not difficult to see that this estimator isconsistentforθ{\displaystyle \theta }, and itsasymptotic distributionis[7] for anyα∈R{\displaystyle \alpha \in \mathbb {R} }. Thus this estimator has the same asymptotic distribution asθ^n{\displaystyle {\hat {\theta }}_{n}}for allθ≠0{\displaystyle \theta \neq 0}, whereas forθ=0{\displaystyle \theta =0}the rate of convergence becomes arbitrarily fast. This estimator issuperefficient, as it surpasses the asymptotic behavior of the efficient estimatorθ^n{\displaystyle {\hat {\theta }}_{n}}at least at one pointθ=0{\displaystyle \theta =0}. 
It is not the case that the Hodges estimator matches the sample mean everywhere and is simply better when the true mean is 0. The correct interpretation is that, for finiten{\displaystyle n}, the truncation can lead to worse squared error than the sample-mean estimator forE[X]{\displaystyle E[X]}close to 0, as is shown in the example in the following section.[8] Le Cam showed that this behaviour is typical: superefficiency at the point θ implies the existence of a sequenceθn→θ{\displaystyle \theta _{n}\rightarrow \theta }such thatlim infEθnℓ(n(θ^n−θn)){\displaystyle \liminf E_{\theta _{n}}\ell ({\sqrt {n}}({\hat {\theta }}_{n}-\theta _{n}))}is strictly larger than theCramér–Rao bound. For the extreme case where the asymptotic risk at θ is zero, thelim inf{\displaystyle \liminf }is even infinite for a sequenceθn→θ{\displaystyle \theta _{n}\rightarrow \theta }.[9] In general, superefficiency may only be attained on a subset of Lebesgue measure zero of the parameter spaceΘ{\displaystyle \Theta }.[10] Supposex1, ...,xnis anindependent and identically distributed(IID) random sample from normal distributionN(θ, 1)with unknown mean but known variance. Then the common estimator for the population meanθis the arithmetic mean of all observations:x¯{\displaystyle \scriptstyle {\bar {x}}}. The corresponding Hodges' estimator will beθ^nH=x¯⋅1{|x¯|≥n−1/4}{\displaystyle \scriptstyle {\hat {\theta }}_{n}^{H}\;=\;{\bar {x}}\cdot \mathbf {1} \{|{\bar {x}}|\,\geq \,n^{-1/4}\}}, where1{...} denotes theindicator function. Themean square error(scaled byn) associated with the regular estimatorx¯{\displaystyle \scriptstyle {\bar {x}}}is constant and equal to 1 for allθ's. At the same time the mean square error of the Hodges' estimatorθ^nH{\displaystyle \scriptstyle {\hat {\theta }}_{n}^{H}}behaves erratically in the vicinity of zero, and even becomes unbounded asn→ ∞. This demonstrates that the Hodges' estimator is notregular, and its asymptotic properties are not adequately described by limits of the form (θfixed,n→ ∞).
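Both effects described above — superefficiency exactly at θ = 0 and worse risk near 0 at finite n — can be seen in a short Monte-Carlo sketch. The sample size, repetition count, and the probe point inside the truncation window are illustrative choices.

```python
import numpy as np

# Monte-Carlo sketch of the Hodges estimator for the mean of N(theta, 1);
# sample size, repetitions, and the probe point are illustrative choices.
rng = np.random.default_rng(4)
n, reps = 100, 20000

def scaled_mse(theta):
    """n * mean-squared-error of the sample mean and the Hodges estimator."""
    xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    hodges = np.where(np.abs(xbar) >= n ** -0.25, xbar, 0.0)
    return n * np.mean((xbar - theta) ** 2), n * np.mean((hodges - theta) ** 2)

mse_mean_at0, mse_hodges_at0 = scaled_mse(0.0)   # superefficiency at theta = 0
theta_near0 = 0.5 * n ** -0.25                   # inside the truncation window
mse_mean_near0, mse_hodges_near0 = scaled_mse(theta_near0)
```

At θ = 0 the truncation almost always fires, so the Hodges risk collapses far below the constant scaled risk of the sample mean; at θ just inside the truncation window the estimator is pulled to 0 most of the time and its scaled risk exceeds that of the sample mean, matching the erratic behaviour described in the example.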
https://en.wikipedia.org/wiki/Hodges%27_estimator
TheJames–Stein estimatoris anestimatorof themeanθ:=(θ1,θ2,…θm){\displaystyle {\boldsymbol {\theta }}:=(\theta _{1},\theta _{2},\dots \theta _{m})}for a multivariaterandom variableY:=(Y1,Y2,…Ym){\displaystyle {\boldsymbol {Y}}:=(Y_{1},Y_{2},\dots Y_{m})}. It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956,[1]whenCharles Steinreached a relatively shocking conclusion that while the then-usual estimate of the mean, thesample mean, isadmissiblewhenm≤2{\displaystyle m\leq 2}, it isinadmissiblewhenm≥3{\displaystyle m\geq 3}. Stein proposed a possible improvement to the estimator thatshrinksthe sample meansθi{\displaystyle {{\boldsymbol {\theta }}_{i}}}towards a more central mean vectorν{\displaystyle {\boldsymbol {\nu }}}(which can be chosena priorior commonly as the "average of averages" of the sample means, given all samples share the same size). This observation is commonly referred to asStein's example or paradox. In 1961,Willard Jamesand Charles Stein simplified the original process.[2] It can be shown that the James–Stein estimatordominatesthe "ordinary"least squaresapproach in the sense that the James–Stein estimator has a lowermean squared errorthan the "ordinary" least squares estimator for allθ{\displaystyle {\boldsymbol {\theta }}}. This is possible because the James–Stein estimator isbiased, so that theGauss–Markov theoremdoes not apply. Similar to theHodges' estimator, the James-Stein estimator issuperefficientandnon-regularatθ=0{\displaystyle \theta =0}.[3] LetY∼Nm(θ,σ2I),{\displaystyle {\mathbf {Y} }\sim N_{m}({\boldsymbol {\theta }},\sigma ^{2}I),\,}where the vectorθ{\displaystyle {\boldsymbol {\theta }}}is the unknownmeanofY{\displaystyle {\mathbf {Y} }}, which ism{\displaystyle m}-variate normally distributedand with knowncovariance matrixσ2I{\displaystyle \sigma ^{2}I}. 
We are interested in obtaining an estimate θ̂ of θ, based on a single observation y of Y. In real-world application, this is a common situation in which a set of parameters is sampled, and the samples are corrupted by independent Gaussian noise. Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters. This approach is the least squares estimator, θ̂_LS = y. Stein demonstrated that in terms of mean squared error E[‖θ − θ̂‖²], the least squares estimator θ̂_LS is suboptimal compared to shrinkage-based estimators such as the James–Stein estimator θ̂_JS.[1] The paradoxical result, that there is a (possibly) better and never any worse estimate of θ in mean squared error as compared to the sample mean, became known as Stein's example. If σ² is known, the James–Stein estimator is given by

    θ̂_JS = (1 − (m − 2)σ² / ‖y‖²) y.

James and Stein showed that the above estimator dominates θ̂_LS for any m ≥ 3, meaning that the James–Stein estimator has a lower mean squared error (MSE) than the maximum likelihood estimator.[2][4] By definition, this makes the least squares estimator inadmissible when m ≥ 3. Notice that if (m − 2)σ² < ‖y‖², then this estimator simply takes the natural estimator y and shrinks it towards the origin 0. In fact this is not the only direction of shrinkage that works.
Let ν be an arbitrary fixed vector of dimension m. Then there exists an estimator of the James–Stein type that shrinks toward ν, namely

    θ̂_JS = ν + (1 − (m − 2)σ² / ‖y − ν‖²)(y − ν).

The James–Stein estimator dominates the usual estimator for any ν. A natural question to ask is whether the improvement over the usual estimator is independent of the choice of ν. The answer is no. The improvement is small if ‖θ − ν‖ is large. Thus to get a very great improvement some knowledge of the location of θ is necessary. Of course this is the quantity we are trying to estimate, so we do not have this knowledge a priori. But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher. Nonetheless, James and Stein's result is that any finite guess ν improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite ν, surely a poor guess. Seeing the James–Stein estimator as an empirical Bayes method gives some intuition to this result: one assumes that θ itself is a random variable with prior distribution N(0, A), where A is estimated from the data itself. Estimating A only gives an advantage compared to the maximum-likelihood estimator when the dimension m is large enough; hence it does not work for m ≤ 2. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator.[5] A consequence of the above discussion is the following counterintuitive result: when three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator; whereas when each parameter is estimated separately, the least squares (LS) estimator is admissible.
A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon thetotalMSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values, and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, any single component does not dominate the respective component of the LS estimator. The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE. For example, in atelecommunicationsetting, it is reasonable to combinechanneltap measurements in achannel estimationscenario, as the goal is to minimize the total channel estimation error. The James–Stein estimator has also found use in fundamental quantum theory, where the estimator has been used to improve the theoretical bounds of theentropic uncertainty principlefor more than three measurements.[6] An intuitive derivation and interpretation is given by theGaltonianperspective.[7]Under this interpretation, we aim to predict the population means using theimperfectly measured sample means. The equation of theOLSestimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron-Morris estimator (when we allow the intercept to vary). 
Despite the intuition that the James–Stein estimator shrinks the maximum-likelihood estimate y toward ν, the estimate actually moves away from ν for small values of ‖y − ν‖, as the multiplier on y − ν is then negative. This can be easily remedied by replacing this multiplier by zero when it is negative. The resulting estimator is called the positive-part James–Stein estimator and is given by

    θ̂_JS+ = ν + (1 − (m − 2)σ² / ‖y − ν‖²)₊ (y − ν),

where (·)₊ denotes the positive part, max(·, 0). This estimator has a smaller risk than the basic James–Stein estimator. It follows that the basic James–Stein estimator is itself inadmissible.[8] It turns out, however, that the positive-part estimator is also inadmissible.[4] This follows from a more general result which requires admissible estimators to be smooth. The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect: the "ordinary" or least squares estimator is often inadmissible for simultaneous estimation of several parameters.[citation needed] This effect has been called Stein's phenomenon, and has been demonstrated for several different problem settings, some of which are briefly outlined below.
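A minimal sketch of the positive-part James–Stein estimator, with a Monte Carlo check that it dominates the least squares estimator for m ≥ 3 (the particular θ, seed, and trial count are arbitrary illustrative choices):

```python
import numpy as np

def james_stein_positive(y, sigma2=1.0, nu=None):
    """Positive-part James-Stein estimate of theta from a single observation
    y ~ N(theta, sigma2 * I), shrinking y toward nu (the origin by default);
    the shrinkage multiplier is clipped at zero when it would be negative."""
    y = np.asarray(y, dtype=float)
    nu = np.zeros_like(y) if nu is None else np.asarray(nu, dtype=float)
    m = y.size
    factor = 1.0 - (m - 2) * sigma2 / np.sum((y - nu) ** 2)
    return nu + max(factor, 0.0) * (y - nu)

# Monte Carlo comparison with the least squares estimator (y itself):
rng = np.random.default_rng(1)
theta = np.array([1.0, -0.5, 0.25, 2.0, 0.0])        # m = 5 parameters
ys = theta + rng.standard_normal((20000, theta.size))
mse_ls = np.mean(np.sum((ys - theta) ** 2, axis=1))   # expected value: m = 5
mse_js = np.mean([np.sum((james_stein_positive(y) - theta) ** 2) for y in ys])
```

The total MSE of the shrinkage estimate comes out strictly below that of least squares, even though no single component dominates.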
https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator
In statistics, the mean percentage error (MPE) is the computed average of percentage errors by which forecasts of a model differ from actual values of the quantity being forecast. The formula for the mean percentage error is

    MPE = (100% / n) Σ_{t=1}^{n} (a_t − f_t) / a_t,

where a_t is the actual value of the quantity being forecast, f_t is the forecast, and n is the number of different times for which the variable is forecast. Because actual rather than absolute values of the forecast errors are used in the formula, positive and negative forecast errors can offset each other; as a result, the formula can be used as a measure of the bias in the forecasts. A disadvantage of this measure is that it is undefined whenever a single actual value is zero.
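The definition translates directly into code; the toy numbers below are made up to show how signed errors cancel:

```python
def mean_percentage_error(actual, forecast):
    """MPE = (100 / n) * sum((a_t - f_t) / a_t). Signed errors can offset
    each other, so the result measures forecast bias, not accuracy."""
    if any(a == 0 for a in actual):
        raise ValueError("MPE is undefined when any actual value is zero")
    n = len(actual)
    return 100.0 / n * sum((a - f) / a for a, f in zip(actual, forecast))

# one 10% over-forecast and one 10% under-forecast cancel to an MPE of zero,
# while two under-forecasts reveal a positive bias:
unbiased = mean_percentage_error([100, 100], [110, 90])
biased = mean_percentage_error([100, 200], [90, 180])
```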
https://en.wikipedia.org/wiki/Mean_percentage_error
Mean square quantization error (MSQE) is a figure of merit for the process of analog-to-digital conversion. In this conversion process, analog signals in a continuous range of values are converted to a discrete set of values by comparing them with a sequence of thresholds. The quantization error of a signal is the difference between the original continuous value and its discretization, and the mean square quantization error (given some probability distribution on the input values) is the expected value of the square of the quantization errors. Mathematically, suppose that the lower threshold for inputs that generate the quantized value q_i is t_{i−1}, that the upper threshold is t_i, that there are k levels of quantization, and that the probability density function for the input analog values is p(x). Let x̂ denote the quantized value corresponding to an input x; that is, x̂ is the value q_i for which t_{i−1} ≤ x < t_i. Then

    MSQE = E[(x − x̂)²] = Σ_{i=1}^{k} ∫_{t_{i−1}}^{t_i} (x − q_i)² p(x) dx.

This technology-related article is a stub. You can help Wikipedia by expanding it.
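As a numerical sketch, the integral above can be evaluated on a grid; for a uniform quantizer with step Δ and a uniform input density, it reproduces the classical Δ²/12 result (the 4-level quantizer and grid resolution are arbitrary illustrative choices):

```python
import numpy as np

def msqe(thresholds, levels, pdf, lo, hi, n_grid=200001):
    """Trapezoidal integration of E[(x - q(x))^2] for a quantizer given by
    its interior decision thresholds and its reconstruction levels."""
    x = np.linspace(lo, hi, n_grid)
    # map each input to its reconstruction level q_i via the thresholds
    q = levels[np.searchsorted(thresholds, x, side="right")]
    integrand = (x - q) ** 2 * pdf(x)
    return float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(x)) / 2)

# 4-level uniform (mid-rise) quantizer on [0, 1) with step 0.25,
# and a uniform input density p(x) = 1 on [0, 1):
step = 0.25
thresholds = np.array([0.25, 0.5, 0.75])
levels = np.array([0.125, 0.375, 0.625, 0.875])
err = msqe(thresholds, levels, lambda x: np.ones_like(x), 0.0, 1.0)
# classical result for uniform quantization of a uniform input: step**2 / 12
```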
https://en.wikipedia.org/wiki/Mean_square_quantization_error
Instatistics, thereduced chi-square statisticis used extensively ingoodness of fittesting. It is also known asmean squared weighted deviation(MSWD) inisotopic dating[1]andvariance of unit weightin the context ofweighted least squares.[2][3] Its square root is calledregression standard error,[4]standard error of the regression,[5][6]orstandard error of the equation[7](seeOrdinary least squares § Reduced chi-squared) It is defined aschi-squareperdegree of freedom:[8][9][10][11]: 85[12][13][14][15]χν2=χ2ν,{\displaystyle \chi _{\nu }^{2}={\frac {\chi ^{2}}{\nu }},}where the chi-squared is a weighted sum of squareddeviations:χ2=∑i(Oi−Ci)2σi2{\displaystyle \chi ^{2}=\sum _{i}{\frac {(O_{i}-C_{i})^{2}}{\sigma _{i}^{2}}}}with inputs:varianceσi2{\displaystyle \sigma _{i}^{2}}, observationsO, and calculated dataC.[8]The degree of freedom,ν=n−m{\displaystyle \nu =n-m}, equals the number of observationsnminus the number of fitted parametersm. Inweighted least squares, the definition is often written in matrix notation asχν2=rTWrν,{\displaystyle \chi _{\nu }^{2}={\frac {r^{\mathrm {T} }Wr}{\nu }},}whereris the vector of residuals, andWis the weight matrix, the inverse of the input (diagonal) covariance matrix of observations. IfWis non-diagonal, thengeneralized least squaresapplies. Inordinary least squares, the definition simplifies to:χν2=RSSν,{\displaystyle \chi _{\nu }^{2}={\frac {\mathrm {RSS} }{\nu }},}RSS=∑r2,{\displaystyle \mathrm {RSS} =\sum r^{2},}where the numerator is theresidual sum of squares(RSS). When the fit is just an ordinary mean, thenχν2{\displaystyle \chi _{\nu }^{2}}equals thesample variance, the squared samplestandard deviation. As a general rule, when the variance of the measurement error is knowna priori, aχν2≫1{\displaystyle \chi _{\nu }^{2}\gg 1}indicates a poor model fit. Aχν2>1{\displaystyle \chi _{\nu }^{2}>1}indicates that the fit has not fully captured the data (or that the error variance has been underestimated). 
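A direct implementation of χ²_ν = χ²/ν with ν = n − m; the toy observations, model values, and quoted errors below are made up for illustration:

```python
import numpy as np

def reduced_chi_squared(observed, calculated, sigma, n_params):
    """Chi-squared per degree of freedom: the sum of squared,
    variance-weighted deviations divided by nu = n - m."""
    observed, calculated, sigma = map(np.asarray, (observed, calculated, sigma))
    chi2 = np.sum(((observed - calculated) / sigma) ** 2)
    nu = observed.size - n_params                     # degrees of freedom
    return chi2 / nu

# four observations against a fitted model with m = 2 parameters;
# every residual here equals its quoted error, so chi^2 = 4 and nu = 2:
obs = np.array([1.1, 1.9, 3.2, 3.9])
model = np.array([1.0, 2.0, 3.0, 4.0])
sig = np.array([0.1, 0.1, 0.2, 0.1])
chi2_nu = reduced_chi_squared(obs, model, sig, n_params=2)
```

A value well above 1, as here, would suggest a poor fit or underestimated errors; a value near 1 indicates scatter consistent with the stated variances.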
In principle, a value ofχν2{\displaystyle \chi _{\nu }^{2}}around1{\displaystyle 1}indicates that the extent of the match between observations and estimates is in accord with the error variance. Aχν2<1{\displaystyle \chi _{\nu }^{2}<1}indicates that the model is "overfitting" the data: either the model is improperly fitting noise, or the error variance has been overestimated.[11]:89 When the variance of the measurement error is only partially known,the reduced chi-squared may serve as a correction estimateda posteriori. Ingeochronology, the MSWD is a measure of goodness of fit that takes into account the relative importance of both the internal and external reproducibility, with most common usage in isotopic dating.[16][17][1][18][19][20] In general when: MSWD = 1 if the age data fit aunivariate normal distributionint(for thearithmetic meanage) or log(t) (for thegeometric meanage) space, or if the compositional data fit a bivariate normal distribution in [log(U/He),log(Th/He)]-space (for the central age). MSWD < 1 if the observed scatter is less than that predicted by the analytical uncertainties. In this case, the data are said to be "underdispersed", indicating that the analytical uncertainties were overestimated. MSWD > 1 if the observed scatter exceeds that predicted by the analytical uncertainties. In this case, the data are said to be "overdispersed". This situation is the rule rather than the exception in (U-Th)/He geochronology, indicating an incomplete understanding of the isotope system. Several reasons have been proposed to explain the overdispersion of (U-Th)/He data, including unevenly distributed U-Th distributions and radiation damage. Often the geochronologist will determine a series of age measurements on a single sample, with the measured valuexi{\displaystyle x_{i}}having a weightingwi{\displaystyle w_{i}}and an associated errorσxi{\displaystyle \sigma _{x_{i}}}for each age determination. 
As regards weighting, one can either weight all of the measured ages equally, or weight them by the proportion of the sample that they represent. For example, if two thirds of the sample was used for the first measurement and one third for the second and final measurement, then one might weight the first measurement twice that of the second. The arithmetic mean of the age determinations isx¯=∑i=1NxiN,{\displaystyle {\overline {x}}={\frac {\sum _{i=1}^{N}x_{i}}{N}},}but this value can be misleading, unless each determination of the age is of equal significance. When each measured value can be assumed to have the same weighting, or significance, the biased and unbiased (or "sample" and "population" respectively) estimators of the variance are computed as follows:σ2=∑i=1N(xi−x¯)2Nands2=NN−1⋅σ2=1N−1⋅∑i=1N(xi−x¯)2.{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}{N}}{\text{ and }}s^{2}={\frac {N}{N-1}}\cdot \sigma ^{2}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}.} The standard deviation is the square root of the variance. 
When individual determinations of an age are not of equal significance, it is better to use a weighted mean to obtain an "average" age, as follows:x¯∗=∑i=1Nwixi∑i=1Nwi.{\displaystyle {\overline {x}}^{*}={\frac {\sum _{i=1}^{N}w_{i}x_{i}}{\sum _{i=1}^{N}w_{i}}}.} The biased weighted estimator of variance can be shown to beσ2=∑i=1Nwi(xi−x¯∗)2∑i=1Nwi,{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{\sum _{i=1}^{N}w_{i}}},}which can be computed asσ2=∑i=1Nwixi2⋅∑i=1Nwi−(∑i=1Nwixi)2(∑i=1Nwi)2.{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}}}.} The unbiased weighted estimator of the sample variance can be computed as follows:s2=∑i=1Nwi(∑i=1Nwi)2−∑i=1Nwi2⋅∑i=1Nwi(xi−x¯∗)2.{\displaystyle s^{2}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}.}Again, the corresponding standard deviation is the square root of the variance. 
The unbiased weighted estimator of the sample variance can also be computed on the fly as follows:s2=∑i=1Nwixi2⋅∑i=1Nwi−(∑i=1Nwixi)2(∑i=1Nwi)2−∑i=1Nwi2.{\displaystyle s^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}.} The unweighted mean square of the weighted deviations (unweighted MSWD) can then be computed, as follows:MSWDu=1N−1⋅∑i=1N(xi−x¯)2σxi2.{\displaystyle {\text{MSWD}}_{u}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}{\frac {(x_{i}-{\overline {x}})^{2}}{\sigma _{x_{i}}^{2}}}.} By analogy, the weighted mean square of the weighted deviations (weighted MSWD) can be computed as follows:MSWDw=∑i=1Nwi(∑i=1Nwi)2−∑i=1Nwi2⋅∑i=1Nwi(xi−x¯∗)2(σxi)2.{\displaystyle {\text{MSWD}}_{w}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot \sum _{i=1}^{N}{\frac {w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{(\sigma _{x_{i}})^{2}}}.} In data analysis based on theRasch model, the reduced chi-squared statistic is called the outfit mean-square statistic, and the information-weighted reduced chi-squared statistic is called the infit mean-square statistic.[21]
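The weighted-mean and weighted-MSWD formulas above can be sketched as follows; with equal weights the prefactor reduces to 1/(N − 1), recovering the unweighted MSWD (the age values and errors are made up):

```python
import numpy as np

def weighted_mswd(x, sigma, w=None):
    """Weighted mean and weighted MSWD for determinations x with analytical
    errors sigma and weights w (equal weights by default)."""
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    x_bar = np.sum(w * x) / np.sum(w)                       # weighted mean
    # sum(w) / (sum(w)^2 - sum(w^2)), the unbiased weighted prefactor
    prefactor = np.sum(w) / (np.sum(w) ** 2 - np.sum(w ** 2))
    mswd = prefactor * np.sum(w * (x - x_bar) ** 2 / sigma ** 2)
    return x_bar, mswd

# four age determinations with unit analytical errors and equal weights:
ages = [10.0, 12.0, 11.0, 13.0]
errors = [1.0, 1.0, 1.0, 1.0]
mean_age, mswd = weighted_mswd(ages, errors)
```

Here the observed scatter slightly exceeds the stated errors, giving an MSWD a little above 1 (mild overdispersion).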
https://en.wikipedia.org/wiki/Reduced_chi-squared_statistic
Instatistical mechanics, themean squared displacement(MSD), also calledmean square displacement,average squared displacement, ormean square fluctuation, is a measure of thedeviationof thepositionof aparticlewith respect to a reference position over time. It is the most common measure of the spatial extent of randommotion, and can be thought of as measuring the portion of the system "explored" by therandom walker. In the realm ofbiophysicsandenvironmental engineering, the MSD is measured over time to determine if a particle is spreading slowly due todiffusion, or if anadvectiveforce is also contributing.[1]Another relevant concept, thevariance-related diameter(VRD), defined as twice the square root of MSD, is also used in studying the transportation and mixing phenomena inenvironmental engineering.[2]It prominently appears in theDebye–Waller factor(describing vibrations within the solid state) and in theLangevin equation(describing diffusion of aBrownian particle). The MSD at timet{\displaystyle t}is defined as anensemble average:MSD≡⟨|x(t)−x0|2⟩=1N∑i=1N|x(i)(t)−x(i)(0)|2{\displaystyle {\text{MSD}}\equiv \left\langle \left|\mathbf {x} (t)-\mathbf {x_{0}} \right|^{2}\right\rangle ={\frac {1}{N}}\sum _{i=1}^{N}\left|\mathbf {x^{(i)}} (t)-\mathbf {x^{(i)}} (0)\right|^{2}}whereNis the number of particles to be averaged, vectorx(i)(0)=x0(i){\displaystyle \mathbf {x^{(i)}} (0)=\mathbf {x_{0}^{(i)}} }is the reference position of thei{\displaystyle i}-th particle, and vectorx(i)(t){\displaystyle \mathbf {x^{(i)}} (t)}is the position of thei{\displaystyle i}-th particle at timet.[3] Theprobability density function(PDF) for a particle in one dimension is found by solving the one-dimensionaldiffusion equation. (This equation states that the position probability density diffuses out over time - this is the method used by Einstein to describe a Brownian particle. 
Another method to describe the motion of a Brownian particle was described by Langevin, now known for its namesake as theLangevin equation.)∂p(x,t∣x0)∂t=D∂2p(x,t∣x0)∂x2,{\displaystyle {\frac {\partial p(x,t\mid x_{0})}{\partial t}}=D{\frac {\partial ^{2}p(x,t\mid x_{0})}{\partial x^{2}}},}given the initial conditionp(x,t=0∣x0)=δ(x−x0){\displaystyle p(x,t=0\mid x_{0})=\delta (x-x_{0})}; wherex(t){\displaystyle x(t)}is the position of the particle at some given time,x0{\displaystyle x_{0}}is the tagged particle's initial position, andD{\displaystyle D}is the diffusion constant with the S.I. unitsm2s−1{\displaystyle m^{2}s^{-1}}(an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the speed at which the probability for finding the particle atx(t){\displaystyle x(t)}is position dependent. Thedifferential equationabove takes the form of 1Dheat equation. The one-dimensional PDF below is theGreen's functionof heat equation (also known asHeat kernelin mathematics):P(x,t)=14πDtexp⁡(−(x−x0)24Dt).{\displaystyle P(x,t)={\frac {1}{\sqrt {4\pi Dt}}}\exp \left(-{\frac {(x-x_{0})^{2}}{4Dt}}\right).}This states that the probability of finding the particle atx(t){\displaystyle x(t)}is Gaussian, and the width of the Gaussian is time dependent. More specifically thefull width at half maximum(FWHM)(technically/pedantically, this is actually the Fulldurationat half maximum as the independent variable is time) scales likeFWHM∼t.{\displaystyle {\text{FWHM}}\sim {\sqrt {t}}.}Using the PDF one is able to derive the average of a given function,L{\displaystyle L}, at timet{\displaystyle t}:⟨L(t)⟩≡∫−∞∞L(x,t)P(x,t)dx,{\displaystyle \langle L(t)\rangle \equiv \int _{-\infty }^{\infty }L(x,t)P(x,t)\,dx,}where the average is taken over all space (or any applicable variable). 
The Mean squared displacement is defined asMSD≡⟨(x(t)−x0)2⟩,{\displaystyle {\text{MSD}}\equiv \left\langle \left(x(t)-x_{0}\right)^{2}\right\rangle ,}expanding out the ensemble average⟨(x−x0)2⟩=⟨x2⟩+x02−2x0⟨x⟩,{\displaystyle \left\langle \left(x-x_{0}\right)^{2}\right\rangle =\left\langle x^{2}\right\rangle +x_{0}^{2}-2x_{0}\langle x\rangle ,}dropping the explicit time dependence notation for clarity. To find the MSD, one can take one of two paths: one can explicitly calculate⟨x2⟩{\displaystyle \langle x^{2}\rangle }and⟨x⟩{\displaystyle \langle x\rangle }, then plug the result back into the definition of the MSD; or one could find themoment-generating function, an extremely useful, and general function when dealing with probability densities. The moment-generating function describes thek{\displaystyle k}-thmoment of the PDF. The first moment of the displacement PDF shown above is simply the mean:⟨x⟩{\displaystyle \langle x\rangle }. The second moment is given as⟨x2⟩{\displaystyle \langle x^{2}\rangle }. So then, to find the moment-generating function it is convenient to introduce thecharacteristic function:G(k)=⟨eikx⟩≡∫IeikxP(x,t∣x0)dx,{\displaystyle G(k)=\langle e^{ikx}\rangle \equiv \int _{I}e^{ikx}P(x,t\mid x_{0})\,dx,}one can expand out the exponential in the above equation to giveG(k)=∑m=0∞(ik)mm!μm.{\displaystyle G(k)=\sum _{m=0}^{\infty }{\frac {(ik)^{m}}{m!}}\mu _{m}.}By taking the natural log of the characteristic function, a new function is produced, thecumulant generating function,ln⁡(G(k))=∑m=1∞(ik)mm!κm,{\displaystyle \ln(G(k))=\sum _{m=1}^{\infty }{\frac {(ik)^{m}}{m!}}\kappa _{m},}whereκm{\displaystyle \kappa _{m}}is them{\displaystyle m}-thcumulantofx{\displaystyle x}. The first two cumulants are related to the first two moments,μ{\displaystyle \mu }, viaκ1=μ1;{\displaystyle \kappa _{1}=\mu _{1};}andκ2=μ2−μ12,{\displaystyle \kappa _{2}=\mu _{2}-\mu _{1}^{2},}where the second cumulant is the so-called variance,σ2{\displaystyle \sigma ^{2}}. 
With these definitions accounted for one can investigate the moments of the Brownian particle PDF,G(k)=14πDt∫Iexp⁡(ikx−(x−x0)24Dt)dx;{\displaystyle G(k)={\frac {1}{\sqrt {4\pi Dt}}}\int _{I}\exp \left(ikx-{\frac {\left(x-x_{0}\right)^{2}}{4Dt}}\right)\,dx;}bycompleting the squareand knowing the total area under a Gaussian one arrives atG(k)=exp⁡(ikx0−k2Dt).{\displaystyle G(k)=\exp(ikx_{0}-k^{2}Dt).}Taking the natural log, and comparing powers ofik{\displaystyle ik}to the cumulant generating function, the first cumulant isκ1=x0,{\displaystyle \kappa _{1}=x_{0},}which is as expected, namely that the mean position is the Gaussian centre. The second cumulant isκ2=2Dt,{\displaystyle \kappa _{2}=2Dt,\,}the factor 2 comes from the factorial factor in the denominator of the cumulant generating function. From this, the second moment is calculated,μ2=κ2+μ12=2Dt+x02.{\displaystyle \mu _{2}=\kappa _{2}+\mu _{1}^{2}=2Dt+x_{0}^{2}.}Plugging the results for the first and second moments back, one finds the MSD,⟨(x(t)−x0)2⟩=2Dt.{\displaystyle \left\langle \left(x(t)-x_{0}\right)^{2}\right\rangle =2Dt.} For a Brownian particle in higher-dimensionEuclidean space, its position is represented by a vectorx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{n})}, where theCartesian coordinatesx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}arestatistically independent. 
Then-variable probability distribution function is the product of thefundamental solutionsin each variable; i.e., P(x,t)=P(x1,t)P(x2,t)…P(xn,t)=1(4πDt)nexp⁡(−x⋅x4Dt).{\displaystyle P(\mathbf {x} ,t)=P(x_{1},t)P(x_{2},t)\dots P(x_{n},t)={\frac {1}{\sqrt {(4\pi Dt)^{n}}}}\exp \left(-{\frac {\mathbf {x} \cdot \mathbf {x} }{4Dt}}\right).} The Mean squared displacement is defined as MSD≡⟨|x−x0|2⟩=⟨(x1(t)−x1(0))2+(x2(t)−x2(0))2+⋯+(xn(t)−xn(0))2⟩{\displaystyle \mathrm {MSD} \equiv \left\langle |\mathbf {x} -\mathbf {x_{0}} |^{2}\right\rangle =\left\langle \left(x_{1}(t)-x_{1}(0)\right)^{2}+\left(x_{2}(t)-x_{2}(0)\right)^{2}+\dots +\left(x_{n}(t)-x_{n}(0)\right)^{2}\right\rangle } Since all the coordinates are independent, their deviation from the reference position is also independent. Therefore, MSD=⟨(x1(t)−x1(0))2⟩+⟨(x2(t)−x2(0))2⟩+⋯+⟨(xn(t)−xn(0))2⟩{\displaystyle {\text{MSD}}=\left\langle \left(x_{1}(t)-x_{1}(0)\right)^{2}\right\rangle +\left\langle \left(x_{2}(t)-x_{2}(0)\right)^{2}\right\rangle +\dots +\left\langle \left(x_{n}(t)-x_{n}(0)\right)^{2}\right\rangle } For each coordinate, following the same derivation as in 1D scenario above, one obtains the MSD in that dimension as2Dt{\displaystyle 2Dt}. Hence, the final result of mean squared displacement inn-dimensional Brownian motion is: MSD=2nDt.{\displaystyle {\text{MSD}}=2nDt.} In the measurements of single particle tracking (SPT), displacements can be defined for different time intervals between positions (also called time lags or lag times). SPT yields the trajectoryr→(t)=[x(t),y(t)]{\displaystyle {\vec {r}}(t)=[x(t),y(t)]}, representing a particle undergoing two-dimensional diffusion. 
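The MSD = 2nDt result can be checked by simulating an ensemble of independent Brownian particles; the particle count, time step, and dimension below are arbitrary illustrative choices:

```python
import numpy as np

def brownian_msd(n_particles=20000, n_dim=2, D=1.0, dt=0.01, n_steps=100, seed=0):
    """Ensemble MSD of independent Brownian particles: each coordinate takes
    Gaussian increments of variance 2*D*dt per step, so the displacement
    after time t = k*dt has variance 2*D*t per dimension."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), (n_steps, n_particles, n_dim))
    displacement = np.cumsum(steps, axis=0)          # x(t) - x(0) per particle
    # average |x(t) - x(0)|^2 over the ensemble at every time step
    return np.mean(np.sum(displacement ** 2, axis=2), axis=1)

msd = brownian_msd()
t = 0.01 * np.arange(1, 101)
# theory for n_dim = 2: MSD = 2 * n_dim * D * t = 4 * t
```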
Assuming that the trajectory of a single particle measured at time points1Δt,2Δt,…,NΔt{\displaystyle 1\,\Delta t,2\,\Delta t,\ldots ,N\,\Delta t}, whereΔt{\displaystyle \Delta t}is any fixed number, then there areN(N−1)/2{\displaystyle N(N-1)/2}non-trivial forward displacementsd→ij=r→j−r→i{\displaystyle {\vec {d}}_{ij}={\vec {r}}_{j}-{\vec {r}}_{i}}(1⩽i<j⩽N{\displaystyle 1\leqslant i<j\leqslant N}, the cases wheni=j{\displaystyle i=j}are not considered) which correspond to time intervals (or time lags)Δtij=(j−i)Δt{\displaystyle \,\Delta t_{ij}=(j-i)\,\Delta t}. Hence, there are many distinct displacements for small time lags, and very few for large time lags,MSD{\displaystyle {\rm {MSD}}}can be defined as an average quantity over time lags:[4][5] δ2(n)¯=1N−n∑i=1N−n(r→i+n−r→i)2n=1,…,N−1.{\displaystyle {\overline {\delta ^{2}(n)}}={\frac {1}{N-n}}\sum _{i=1}^{N-n}{({\vec {r}}_{i+n}-{\vec {r}}_{i}})^{2}\qquad n=1,\ldots ,N-1.} Similarly, for continuoustime series: δ2(Δ)¯=1T−Δ∫0T−Δ[r(t+Δ)−r(t)]2dt{\displaystyle {\overline {\delta ^{2}(\Delta )}}={\frac {1}{T-\Delta }}\int _{0}^{T-\Delta }[r(t+\Delta )-r(t)]^{2}\,dt} It's clear that choosing largeT{\displaystyle T}andΔ≪T{\displaystyle \Delta \ll T}can improve statistical performance. This technique allow us estimate the behavior of the whole ensembles by just measuring a single trajectory, but note that it's only valid for the systems withergodicity, like classicalBrownian motion(BM),fractional Brownian motion(fBM), andcontinuous-time random walk(CTRW) with limited distribution of waiting times, in these cases,δ2(Δ)¯=⟨[r(t)−r(0)]2⟩{\displaystyle {\overline {\delta ^{2}(\Delta )}}=\left\langle [r(t)-r(0)]^{2}\right\rangle }(defined above), here⟨⋅⟩{\displaystyle \left\langle \cdot \right\rangle }denotes ensembles average. 
However, for non-ergodic systems, such as the CTRW with an unlimited distribution of waiting times, a waiting time can go to infinity. In this case the time average δ²(Δ)‾ strongly depends on T, and δ²(Δ)‾ and ⟨[r(t) − r(0)]²⟩ no longer equal each other. To obtain better asymptotics, one introduces the ensemble-averaged time-averaged MSD:

    ⟨δ²(Δ)‾⟩ = (1/N) Σ δ²(Δ)‾.

Here ⟨·⟩ denotes averaging over N ensembles. Also, one can easily derive the autocorrelation function from the MSD:

    ⟨[r(t) − r(0)]²⟩ = ⟨r²(t)⟩ + ⟨r²(0)⟩ − 2⟨r(t)r(0)⟩,

where ⟨r(t)r(0)⟩ is the so-called autocorrelation function for the position of particles. Experimental methods to determine MSDs include neutron scattering and photon correlation spectroscopy. The linear relationship between the MSD and time t allows for graphical methods to determine the diffusivity constant D. This is especially useful for rough calculations of the diffusivity in environmental systems. In some atmospheric dispersion models, the relationship between MSD and time t is not linear. Instead, a series of power laws empirically representing the variation of the square root of MSD versus downwind distance are commonly used in studying the dispersion phenomenon.[6]
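A sketch of the discrete time-averaged MSD of a single trajectory, averaging squared displacements over all start times for each lag (the ballistic test trajectory is made up for illustration):

```python
import numpy as np

def time_averaged_msd(trajectory, max_lag=None):
    """Time-averaged MSD of one trajectory (shape: n_points x n_dim):
    for each lag n, average the squared displacement over the N - n
    available start times, as in the discrete definition above."""
    r = np.asarray(trajectory, dtype=float)
    n_points = len(r)
    if max_lag is None:
        max_lag = n_points - 1
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = r[lag:] - r[:-lag]                  # all displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# ballistic motion r_i = (i, 0) gives a time-averaged MSD of n^2 at lag n:
traj = np.column_stack([np.arange(10.0), np.zeros(10)])
```

Small lags are averaged over many displacements and large lags over few, which is why the statistics degrade as the lag approaches the trajectory length.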
https://en.wikipedia.org/wiki/Mean_squared_displacement
In statistics the mean squared prediction error (MSPE), also known as mean squared error of the predictions, of a smoothing, curve fitting, or regression procedure is the expected value of the squared prediction errors (PE), the squared difference between the fitted values implied by the predictive function ĝ and the values of the (unobservable) true function g. It is an inverse measure of the explanatory power of ĝ, and can be used in the process of cross-validation of an estimated model. Knowledge of g would be required in order to calculate the MSPE exactly; in practice, MSPE is estimated.[1] If the smoothing or fitting procedure has projection matrix (i.e., hat matrix) L, which maps the observed values vector y to the predicted values vector ŷ = Ly, then PE and MSPE are formulated as:

    PE_i = g(x_i) − ĝ(x_i),    MSPE = (1/n) Σ_{i=1}^{n} E[PE_i²].

The MSPE can be decomposed into two terms: the squared bias (mean error) of the fitted values and the variance of the fitted values:

    MSPE = (1/n) Σ_{i=1}^{n} (E[ĝ(x_i)] − g(x_i))² + (1/n) Σ_{i=1}^{n} Var(ĝ(x_i)).

The quantity SSPE = n·MSPE is called the sum squared prediction error. The root mean squared prediction error is the square root of MSPE: RMSPE = √MSPE. The mean squared prediction error can be computed exactly in two contexts. First, with a data sample of length n, the data analyst may run the regression over only q of the data points (with q < n), holding back the other n − q data points with the specific purpose of using them to compute the estimated model's MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to the q in-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over the n − q held-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, the model is viewed favorably.
And if two models are to be compared, the one with the lower MSPE over then – qout-of-sample data points is viewed more favorably, regardless of the models’ relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model’s MSPE for the mostly unobserved population from which the data were drawn. Second, as time goes on more data may become available to the data analyst, and then the MSPE can be computed over these new data. When the model has been estimated over all available data with none held back, the MSPE of the model over the entirepopulationof mostly unobserved data can be estimated as follows. For the modelyi=g(xi)+σεi{\displaystyle y_{i}=g(x_{i})+\sigma \varepsilon _{i}}whereεi∼N(0,1){\displaystyle \varepsilon _{i}\sim {\mathcal {N}}(0,1)}, one may write Using in-sample data values, the first term on the right side is equivalent to Thus, Ifσ2{\displaystyle \sigma ^{2}}is known or well-estimated byσ^2{\displaystyle {\widehat {\sigma }}^{2}}, it becomes possible to estimate MSPE by Colin Mallowsadvocated this method in the construction of his model selection statisticCp, which is a normalized version of the estimated MSPE: wherepis the number of estimated parameters andσ^2{\displaystyle {\widehat {\sigma }}^{2}}is computed from the version of the model that includes all possible regressors.
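The hold-out procedure described above can be sketched in a few lines (an illustrative straight-line fit; the function name and the use of `np.polyfit` are our choices, not part of the article):

```python
import numpy as np

def holdout_mspe(x, y, q):
    """Fit a straight line on the first q points, then compare the in-sample
    MSPE with the out-of-sample MSPE on the n - q held-back points."""
    coef = np.polyfit(x[:q], y[:q], deg=1)      # least-squares line through the q points
    mspe_in = np.mean((y[:q] - np.polyval(coef, x[:q])) ** 2)
    mspe_out = np.mean((y[q:] - np.polyval(coef, x[q:])) ** 2)
    return mspe_in, mspe_out
```

As the text notes, the in-sample MSPE is typically the smaller of the two, and a small gap between them speaks in the model's favor.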
https://en.wikipedia.org/wiki/Mean_squared_prediction_error
Instatisticsandsignal processing, aminimum mean square error(MMSE) estimator is an estimation method which minimizes themean square error(MSE), which is a common measure of estimator quality, of the fitted values of adependent variable. In theBayesiansetting, the term MMSE more specifically refers to estimation with quadraticloss function. In such case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is cumbersome to calculate, the form of the MMSE estimator is usually constrained to be within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. It has given rise to many popular estimators such as theWiener–Kolmogorov filterandKalman filter. The term MMSE more specifically refers to estimation in aBayesiansetting with quadratic cost function. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when a new observation is made available; or the statistics of an actual random signal such as speech. This is in contrast to the non-Bayesian approach likeminimum-variance unbiased estimator(MVUE) where absolutely nothing is assumed to be known about the parameter in advance and which does not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly onBayes' theorem, it allows us to make better posterior estimates as more observations become available. 
Thus, unlike the non-Bayesian approach, where parameters of interest are assumed to be deterministic but unknown constants, the Bayesian estimator seeks to estimate a parameter that is itself arandom variable. Furthermore, Bayesian estimation can also deal with situations where the sequence of observations is not necessarily independent. Thus Bayesian estimation provides yet another alternative to the MVUE. This is useful when the MVUE does not exist or cannot be found. Letx{\displaystyle x}be an×1{\displaystyle n\times 1}hidden random vector variable, and lety{\displaystyle y}be am×1{\displaystyle m\times 1}known random vector variable (the measurement or observation), both of them not necessarily of the same dimension. Anestimatorx^(y){\displaystyle {\hat {x}}(y)}ofx{\displaystyle x}is any function of the measurementy{\displaystyle y}. The estimation error vector is given bye=x^−x{\displaystyle e={\hat {x}}-x}and itsmean squared error(MSE) is given by thetraceof errorcovariance matrix where theexpectationE{\displaystyle \operatorname {E} }is taken overx{\displaystyle x}conditioned ony{\displaystyle y}. Whenx{\displaystyle x}is a scalar variable, the MSE expression simplifies toE⁡{(x^−x)2}{\displaystyle \operatorname {E} \left\{({\hat {x}}-x)^{2}\right\}}. Note that MSE can equivalently be defined in other ways, since The MMSE estimator is then defined as the estimator achieving minimal MSE: In many cases, it is not possible to determine the analytical expression of the MMSE estimator. Two basic numerical approaches to obtain the MMSE estimate depend on either finding the conditional expectationE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}or finding the minima of the MSE. Direct numerical evaluation of the conditional expectation is computationally expensive since it often requires multidimensional integration usually done viaMonte Carlo methods. 
Another computational approach is to directly seek the minima of the MSE using techniques such as thestochastic gradient descent methods; but this method still requires the evaluation of expectation. While these numerical methods have been fruitful, a closed form expression for the MMSE estimator is nevertheless possible if we are willing to make some compromises. One possibility is to abandon the full optimality requirements and seek a technique minimizing the MSE within a particular class of estimators, such as the class of linear estimators. Thus, we postulate that the conditional expectation ofx{\displaystyle x}giveny{\displaystyle y}is a simple linear function ofy{\displaystyle y},E⁡{x∣y}=Wy+b{\displaystyle \operatorname {E} \{x\mid y\}=Wy+b}, where the measurementy{\displaystyle y}is a random vector,W{\displaystyle W}is a matrix andb{\displaystyle b}is a vector. This can be seen as the first order Taylor approximation ofE⁡{x∣y}{\displaystyle \operatorname {E} \{x\mid y\}}. The linear MMSE estimator is the estimator achieving minimum MSE among all estimators of such form. That is, it solves the following optimization problem: One advantage of such linear MMSE estimator is that it is not necessary to explicitly calculate theposterior probabilitydensity function ofx{\displaystyle x}. Such linear estimator only depends on the first two moments ofx{\displaystyle x}andy{\displaystyle y}. So although it may be convenient to assume thatx{\displaystyle x}andy{\displaystyle y}are jointly Gaussian, it is not necessary to make this assumption, so long as the assumed distribution has well defined first and second moments. The form of the linear estimator does not depend on the type of the assumed underlying distribution. 
The expression for optimalb{\displaystyle b}andW{\displaystyle W}is given by: wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}},y¯=E⁡{y},{\displaystyle {\bar {y}}=\operatorname {E} \{y\},}theCXY{\displaystyle C_{XY}}is cross-covariance matrix betweenx{\displaystyle x}andy{\displaystyle y}, theCY{\displaystyle C_{Y}}is auto-covariance matrix ofy{\displaystyle y}. Thus, the expression for linear MMSE estimator, its mean, and its auto-covariance is given by where theCYX{\displaystyle C_{YX}}is cross-covariance matrix betweeny{\displaystyle y}andx{\displaystyle x}. Lastly, the error covariance and minimum mean square error achievable by such estimator is Let us have the optimal linear MMSE estimator given asx^=Wy+b{\displaystyle {\hat {x}}=Wy+b}, where we are required to find the expression forW{\displaystyle W}andb{\displaystyle b}. It is required that the MMSE estimator be unbiased. This means, Plugging the expression forx^{\displaystyle {\hat {x}}}in above, we get wherex¯=E⁡{x}{\displaystyle {\bar {x}}=\operatorname {E} \{x\}}andy¯=E⁡{y}{\displaystyle {\bar {y}}=\operatorname {E} \{y\}}. Thus we can re-write the estimator as and the expression for estimation error becomes From the orthogonality principle, we can haveE⁡{(x^−x)(y−y¯)T}=0{\displaystyle \operatorname {E} \{({\hat {x}}-x)(y-{\bar {y}})^{T}\}=0}, where we takeg(y)=y−y¯{\displaystyle g(y)=y-{\bar {y}}}. Here the left-hand-side term is When equated to zero, we obtain the desired expression forW{\displaystyle W}as TheCXY{\displaystyle C_{XY}}is cross-covariance matrix between X and Y, andCY{\displaystyle C_{Y}}is auto-covariance matrix of Y. 
SinceCXY=CYXT{\displaystyle C_{XY}=C_{YX}^{T}}, the expression can also be re-written in terms ofCYX{\displaystyle C_{YX}}as Thus the full expression for the linear MMSE estimator is Since the estimatex^{\displaystyle {\hat {x}}}is itself a random variable withE⁡{x^}=x¯{\displaystyle \operatorname {E} \{{\hat {x}}\}={\bar {x}}}, we can also obtain its auto-covariance as Putting the expression forW{\displaystyle W}andWT{\displaystyle W^{T}}, we get Lastly, the covariance of linear MMSE estimation error will then be given by The first term in the third line is zero due to the orthogonality principle. SinceW=CXYCY−1{\displaystyle W=C_{XY}C_{Y}^{-1}}, we can re-writeCe{\displaystyle C_{e}}in terms of covariance matrices as This we can recognize to be the same asCe=CX−CX^.{\displaystyle C_{e}=C_{X}-C_{\hat {X}}.}Thus the minimum mean square error achievable by such a linear estimator is For the special case when bothx{\displaystyle x}andy{\displaystyle y}are scalars, the above relations simplify to whereρ=σXYσXσY{\displaystyle \rho ={\frac {\sigma _{XY}}{\sigma _{X}\sigma _{Y}}}}is thePearson's correlation coefficientbetweenx{\displaystyle x}andy{\displaystyle y}. The above two equations allows us to interpret the correlation coefficient either as normalized slope of linear regression or as square root of the ratio of two variances Whenρ=0{\displaystyle \rho =0}, we havex^=x¯{\displaystyle {\hat {x}}={\bar {x}}}andσe2=σX2{\displaystyle \sigma _{e}^{2}=\sigma _{X}^{2}}. In this case, no new information is gleaned from the measurement which can decrease the uncertainty inx{\displaystyle x}. On the other hand, whenρ=±1{\displaystyle \rho =\pm 1}, we havex^=σXYσY(y−y¯)+x¯{\displaystyle {\hat {x}}={\frac {\sigma _{XY}}{\sigma _{Y}}}(y-{\bar {y}})+{\bar {x}}}andσe2=0{\displaystyle \sigma _{e}^{2}=0}. Herex{\displaystyle x}is completely determined byy{\displaystyle y}, as given by the equation of straight line. 
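The full linear MMSE estimator above can be sketched directly from the moment matrices (a minimal illustration; the function name is ours, and an explicit inverse is used for clarity even though a linear solve is preferable numerically):

```python
import numpy as np

def lmmse_estimator(x_mean, y_mean, C_XY, C_Y):
    """Return y -> x_hat implementing  x_hat = C_XY C_Y^{-1} (y - y_mean) + x_mean."""
    W = C_XY @ np.linalg.inv(C_Y)   # explicit inverse for clarity only
    return lambda y: W @ (y - y_mean) + x_mean
```

For the scalar case this reduces to the correlation-coefficient form given above: with unit prior variance and noise variance 0.25, the gain is 1/1.25 = 0.8.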
Standard methods likeGauss eliminationcan be used to solve the matrix equation forW{\displaystyle W}. A more numerically stable method is provided by theQR decompositionmethod. Since the matrixCY{\displaystyle C_{Y}}is a symmetric positive definite matrix,W{\displaystyle W}can be solved twice as fast with theCholesky decomposition, while for large sparse systems theconjugate gradient methodis more effective.Levinson recursionis a fast method whenCY{\displaystyle C_{Y}}is also aToeplitz matrix. This can happen wheny{\displaystyle y}is awide sense stationaryprocess. In such stationary cases, these estimators are also referred to asWiener–Kolmogorov filters. Let us further model the underlying process of observation as a linear process:y=Ax+z{\displaystyle y=Ax+z}, whereA{\displaystyle A}is a known matrix andz{\displaystyle z}is a random noise vector with meanE⁡{z}=0{\displaystyle \operatorname {E} \{z\}=0}and cross-covarianceCXZ=0{\displaystyle C_{XZ}=0}. Here the required mean and the covariance matrices will be Thus the expression for the linear MMSE estimator matrixW{\displaystyle W}further modifies to Putting everything into the expression forx^{\displaystyle {\hat {x}}}, we get Lastly, the error covariance is The significant difference between the estimation problem treated above and those ofleast squaresandGauss–Markovestimate is that the number of observationsm, (i.e. the dimension ofy{\displaystyle y}) need not be at least as large as the number of unknowns,n, (i.e. the dimension ofx{\displaystyle x}). The estimate for the linear observation process exists so long as them-by-mmatrix(ACXAT+CZ)−1{\displaystyle (AC_{X}A^{T}+C_{Z})^{-1}}exists; this is the case for anymif, for instance,CZ{\displaystyle C_{Z}}is positive definite. Physically the reason for this property is that sincex{\displaystyle x}is now a random variable, it is possible to form a meaningful estimate (namely its mean) even with no measurements. 
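The Cholesky route mentioned above can be sketched as two triangular solves against the factor of the symmetric positive definite matrix (illustrative only; NumPy's general `solve` is used here and does not exploit triangularity, whereas a dedicated triangular solver such as SciPy's `solve_triangular` would):

```python
import numpy as np

def solve_gain_cholesky(C_Y, C_YX):
    """Solve C_Y W^T = C_YX for W via the Cholesky factor C_Y = L L^T,
    using two triangular solves instead of forming C_Y^{-1} explicitly."""
    L = np.linalg.cholesky(C_Y)
    u = np.linalg.solve(L, C_YX)        # forward substitution: L u = C_YX
    Wt = np.linalg.solve(L.T, u)        # back substitution:  L^T W^T = u
    return Wt.T
```

Avoiding the explicit inverse in favor of factor-and-solve is the standard numerically robust choice for such systems.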
Every new measurement simply provides additional information which may modify our original estimate. Another feature of this estimate is that form<n, there need be no measurement error. Thus, we may haveCZ=0{\displaystyle C_{Z}=0}, because as long asACXAT{\displaystyle AC_{X}A^{T}}is positive definite, the estimate still exists. Lastly, this technique can handle cases where the noise is correlated. An alternative form of expression can be obtained by using the matrix identity which can be established by post-multiplying by(ACXAT+CZ){\displaystyle (AC_{X}A^{T}+C_{Z})}and pre-multiplying by(ATCZ−1A+CX−1),{\displaystyle (A^{T}C_{Z}^{-1}A+C_{X}^{-1}),}to obtain and SinceW{\displaystyle W}can now be written in terms ofCe{\displaystyle C_{e}}asW=CeATCZ−1{\displaystyle W=C_{e}A^{T}C_{Z}^{-1}}, we get a simplified expression forx^{\displaystyle {\hat {x}}}as In this form the above expression can be easily compared withridge regression,weighted least squaresand theGauss–Markov estimate. In particular, whenCX−1=0{\displaystyle C_{X}^{-1}=0}, corresponding to infinite variance of the a priori information concerningx{\displaystyle x}, the resultW=(ATCZ−1A)−1ATCZ−1{\displaystyle W=(A^{T}C_{Z}^{-1}A)^{-1}A^{T}C_{Z}^{-1}}is identical to the weighted linear least squares estimate withCZ−1{\displaystyle C_{Z}^{-1}}as the weight matrix. Moreover, if the components ofz{\displaystyle z}are uncorrelated and have equal variance such thatCZ=σ2I,{\displaystyle C_{Z}=\sigma ^{2}I,}whereI{\displaystyle I}is an identity matrix, thenW=(ATA)−1AT{\displaystyle W=(A^{T}A)^{-1}A^{T}}is identical to the ordinary least squares estimate. When a priori information is available asCX−1=λI{\displaystyle C_{X}^{-1}=\lambda I}and thez{\displaystyle z}are uncorrelated and have equal variance, we haveW=(ATA+λI)−1AT{\displaystyle W=(A^{T}A+\lambda I)^{-1}A^{T}}, which is identical to the ridge regression solution. In many real-time applications, observational data is not available in a single batch. 
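The reductions to ridge regression and ordinary least squares can be checked numerically from the alternative gain form W = (AᵀC_Z⁻¹A + C_X⁻¹)⁻¹AᵀC_Z⁻¹ (a sketch; unit noise variance is assumed in the ridge case, matching the expression in the text):

```python
import numpy as np

def mmse_gain(A, C_Z_inv, C_X_inv):
    """W = (A^T C_Z^{-1} A + C_X^{-1})^{-1} A^T C_Z^{-1}."""
    return np.linalg.solve(A.T @ C_Z_inv @ A + C_X_inv, A.T @ C_Z_inv)
```

With C_X⁻¹ = λI and C_Z = I this yields the ridge gain (AᵀA + λI)⁻¹Aᵀ, and with C_X⁻¹ = 0 (infinite prior variance) it collapses to the least-squares gain.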
Instead the observations are made in a sequence. One possible approach is to use the sequential observations to update an old estimate as additional data becomes available, leading to finer estimates. One crucial difference between batch estimation and sequential estimation is that sequential estimation requires an additional Markov assumption. In the Bayesian framework, such recursive estimation is easily facilitated using Bayes' rule. Givenk{\displaystyle k}observations,y1,…,yk{\displaystyle y_{1},\ldots ,y_{k}}, Bayes' rule gives us the posterior density ofxk{\displaystyle x_{k}}as Thep(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}is called the posterior density,p(yk|xk){\displaystyle p(y_{k}|x_{k})}is called the likelihood function, andp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}is the prior density ofk-th time step. Here we have assumed the conditional independence ofyk{\displaystyle y_{k}}from previous observationsy1,…,yk−1{\displaystyle y_{1},\ldots ,y_{k-1}}givenx{\displaystyle x}as This is the Markov assumption. The MMSE estimatex^k{\displaystyle {\hat {x}}_{k}}given thek-th observation is then the mean of the posterior densityp(xk|y1,…,yk){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k})}. With the lack of dynamical information on how the statex{\displaystyle x}changes with time, we will make a further stationarity assumption about the prior: Thus, the prior density fork-th time step is the posterior density of (k-1)-th time step. This structure allows us to formulate a recursive approach to estimation. 
In the context of the linear MMSE estimator, the formula for the estimate will have the same form as before:x^=CXYCY−1(y−y¯)+x¯.{\displaystyle {\hat {x}}=C_{XY}C_{Y}^{-1}(y-{\bar {y}})+{\bar {x}}.}However, the mean and covariance matrices ofX{\displaystyle X}andY{\displaystyle Y}will need to be replaced by those of the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}and likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}, respectively. For the prior densityp(xk|y1,…,yk−1){\displaystyle p(x_{k}|y_{1},\ldots ,y_{k-1})}, its mean is given by the previous MMSE estimate, and its covariance matrix is given by the previous error covariance matrix, as per the properties of MMSE estimators and the stationarity assumption. Similarly, for the linear observation process, the mean of the likelihoodp(yk|xk){\displaystyle p(y_{k}|x_{k})}is given byy¯k=Ax¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\bar {x}}_{k}=A{\hat {x}}_{k-1}}and the covariance matrix is as before The difference between the predicted value ofYk{\displaystyle Y_{k}}, as given byy¯k=Ax^k−1{\displaystyle {\bar {y}}_{k}=A{\hat {x}}_{k-1}}, and its observed valueyk{\displaystyle y_{k}}gives the prediction errory~k=yk−y¯k{\displaystyle {\tilde {y}}_{k}=y_{k}-{\bar {y}}_{k}}, which is also referred to as innovation or residual. It is more convenient to represent the linear MMSE in terms of the prediction error, whose mean and covariance areE[y~k]=0{\displaystyle \mathrm {E} [{\tilde {y}}_{k}]=0}andCY~k=CYk|Xk{\displaystyle C_{{\tilde {Y}}_{k}}=C_{Y_{k}|X_{k}}}. Hence, in the estimate update formula, we should replacex¯{\displaystyle {\bar {x}}}andCX{\displaystyle C_{X}}byx^k−1{\displaystyle {\hat {x}}_{k-1}}andCek−1{\displaystyle C_{e_{k-1}}}, respectively. Also, we should replacey¯{\displaystyle {\bar {y}}}andCY{\displaystyle C_{Y}}byy¯k−1{\displaystyle {\bar {y}}_{k-1}}andCY~k{\displaystyle C_{{\tilde {Y}}_{k}}}. 
Lastly, we replaceCXY{\displaystyle C_{XY}}by Thus, we have the new estimate as new observationyk{\displaystyle y_{k}}arrives as and the new error covariance as From the point of view of linear algebra, for sequential estimation, if we have an estimatex^1{\displaystyle {\hat {x}}_{1}}based on measurements generating spaceY1{\displaystyle Y_{1}}, then after receiving another set of measurements, we should subtract out from these measurements that part that could be anticipated from the result of the first measurements. In other words, the updating must be based on that part of the new data which is orthogonal to the old data. The repeated use of the above two equations as more observations become available lead to recursive estimation techniques. The expressions can be more compactly written as The matrixWk{\displaystyle W_{k}}is often referred to as the Kalman gain factor. The alternative formulation of the above algorithm will give The repetition of these three steps as more data becomes available leads to an iterative estimation algorithm. The generalization of this idea to non-stationary cases gives rise to theKalman filter. The three update steps outlined above indeed form the update step of the Kalman filter. As an important special case, an easy to use recursive expression can be derived when at eachk-th time instant the underlying linear observation process yields a scalar such thatyk=akTxk+zk{\displaystyle y_{k}=a_{k}^{T}x_{k}+z_{k}}, whereak{\displaystyle a_{k}}isn-by-1 known column vector whose values can change with time,xk{\displaystyle x_{k}}isn-by-1 random column vector to be estimated, andzk{\displaystyle z_{k}}is scalar noise term with varianceσk2{\displaystyle \sigma _{k}^{2}}. 
After (k+1)-th observation, the direct use of above recursive equations give the expression for the estimatex^k+1{\displaystyle {\hat {x}}_{k+1}}as: whereyk+1{\displaystyle y_{k+1}}is the new scalar observation and the gain factorwk+1{\displaystyle w_{k+1}}isn-by-1 column vector given by TheCek+1{\displaystyle C_{e_{k+1}}}isn-by-nerror covariance matrix given by Here, no matrix inversion is required. Also, the gain factor,wk+1{\displaystyle w_{k+1}}, depends on our confidence in the new data sample, as measured by the noise variance, versus that in the previous data. The initial values ofx^{\displaystyle {\hat {x}}}andCe{\displaystyle C_{e}}are taken to be the mean and covariance of the a priori probability density function ofx{\displaystyle x}. Alternative approaches:This important special case has also given rise to many other iterative methods (oradaptive filters), such as theleast mean squares filterandrecursive least squares filter, that directly solve the original MSE optimization problem usingstochastic gradient descent. However, since the estimation errore{\displaystyle e}cannot be directly observed, these methods try to minimize the mean squared prediction errorE{y~Ty~}{\displaystyle \mathrm {E} \{{\tilde {y}}^{T}{\tilde {y}}\}}. For instance, in the case of scalar observations, we have the gradient∇x^E{y~2}=−2E{y~a}.{\displaystyle \nabla _{\hat {x}}\mathrm {E} \{{\tilde {y}}^{2}\}=-2\mathrm {E} \{{\tilde {y}}a\}.}Thus, the update equation for the least mean square filter is given by whereηk{\displaystyle \eta _{k}}is the scalar step size and the expectation is approximated by the instantaneous valueE{aky~k}≈aky~k{\displaystyle \mathrm {E} \{a_{k}{\tilde {y}}_{k}\}\approx a_{k}{\tilde {y}}_{k}}. As we can see, these methods bypass the need for covariance matrices. In many practical applications, the observation noise is uncorrelated. That is,CZ{\displaystyle C_{Z}}is a diagonal matrix. 
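The scalar-observation recursion described above, which requires no matrix inversion, can be sketched as a single update function (illustrative; variable names are ours):

```python
import numpy as np

def scalar_mmse_update(x_hat, C_e, a, y, noise_var):
    """One recursive LMMSE update for a scalar observation y = a^T x + z."""
    s = a @ C_e @ a + noise_var          # scalar innovation variance
    w = (C_e @ a) / s                    # n-by-1 gain: no matrix inversion needed
    x_hat = x_hat + w * (y - a @ x_hat)  # correct the estimate by the innovation
    C_e = C_e - np.outer(w, a) @ C_e     # shrink the error covariance
    return x_hat, C_e
```

The gain divides by the innovation variance, so a noisier observation (larger `noise_var`) moves the estimate less, matching the confidence interpretation in the text.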
In such cases, it is advantageous to consider the components ofy{\displaystyle y}as independent scalar measurements, rather than vector measurement. This allows us to reduce computation time by processing them×1{\displaystyle m\times 1}measurement vector asm{\displaystyle m}scalar measurements. The use of scalar update formula avoids matrix inversion in the implementation of the covariance update equations, thus improving the numerical robustness against roundoff errors. The update can be implemented iteratively as: whereℓ=1,2,…,m{\displaystyle \ell =1,2,\ldots ,m}, using the initial valuesCek+1(0)=Cek{\displaystyle C_{e_{k+1}}^{(0)}=C_{e_{k}}}andx^k+1(0)=x^k{\displaystyle {\hat {x}}_{k+1}^{(0)}={\hat {x}}_{k}}. The intermediate variablesCZk+1(ℓ){\displaystyle C_{Z_{k+1}}^{(\ell )}}is theℓ{\displaystyle \ell }-th diagonal element of them×m{\displaystyle m\times m}diagonal matrixCZk+1{\displaystyle C_{Z_{k+1}}}; whileAk+1(ℓ){\displaystyle A_{k+1}^{(\ell )}}is theℓ{\displaystyle \ell }-th row ofm×n{\displaystyle m\times n}matrixAk+1{\displaystyle A_{k+1}}. The final values areCek+1(m)=Cek+1{\displaystyle C_{e_{k+1}}^{(m)}=C_{e_{k+1}}}andx^k+1(m)=x^k+1{\displaystyle {\hat {x}}_{k+1}^{(m)}={\hat {x}}_{k+1}}. We shall take alinear predictionproblem as an example. Let a linear combination of observed scalar random variablesz1,z2{\displaystyle z_{1},z_{2}}andz3{\displaystyle z_{3}}be used to estimate another future scalar random variablez4{\displaystyle z_{4}}such thatz^4=∑i=13wizi{\displaystyle {\hat {z}}_{4}=\sum _{i=1}^{3}w_{i}z_{i}}. If the random variablesz=[z1,z2,z3,z4]T{\displaystyle z=[z_{1},z_{2},z_{3},z_{4}]^{T}}are real Gaussian random variables with zero mean and its covariance matrix given by then our task is to find the coefficientswi{\displaystyle w_{i}}such that it will yield an optimal linear estimatez^4{\displaystyle {\hat {z}}_{4}}. 
In terms of the terminology developed in the previous sections, for this problem we have the observation vectory=[z1,z2,z3]T{\displaystyle y=[z_{1},z_{2},z_{3}]^{T}}, the estimator matrixW=[w1,w2,w3]{\displaystyle W=[w_{1},w_{2},w_{3}]}as a row vector, and the estimated variablex=z4{\displaystyle x=z_{4}}as a scalar quantity. The autocorrelation matrixCY{\displaystyle C_{Y}}is defined as The cross correlation matrixCYX{\displaystyle C_{YX}}is defined as We now solve the equationCYWT=CYX{\displaystyle C_{Y}W^{T}=C_{YX}}by invertingCY{\displaystyle C_{Y}}and pre-multiplying to get So we havew1=2.57,{\displaystyle w_{1}=2.57,}w2=−0.142,{\displaystyle w_{2}=-0.142,}andw3=.5714{\displaystyle w_{3}=.5714}as the optimal coefficients forz^4{\displaystyle {\hat {z}}_{4}}. Computing the minimum mean square error then gives‖e‖min2=E⁡[z4z4]−WCYX=15−WCYX=.2857{\displaystyle \left\Vert e\right\Vert _{\min }^{2}=\operatorname {E} [z_{4}z_{4}]-WC_{YX}=15-WC_{YX}=.2857}.[2]Note that it is not necessary to obtain an explicit matrix inverse ofCY{\displaystyle C_{Y}}to compute the value ofW{\displaystyle W}. The matrix equation can be solved by well known methods such as Gauss elimination method. A shorter, non-numerical example can be found inorthogonality principle. Consider a vectory{\displaystyle y}formed by takingN{\displaystyle N}observations of a fixed but unknown scalar parameterx{\displaystyle x}disturbed by white Gaussian noise. We can describe the process by a linear equationy=1x+z{\displaystyle y=1x+z}, where1=[1,1,…,1]T{\displaystyle 1=[1,1,\ldots ,1]^{T}}. Depending on context it will be clear if1{\displaystyle 1}represents ascalaror a vector. Suppose that we know[−x0,x0]{\displaystyle [-x_{0},x_{0}]}to be the range within which the value ofx{\displaystyle x}is going to fall in. 
We can model our uncertainty ofx{\displaystyle x}by an a prioriuniform distributionover the interval[−x0,x0]{\displaystyle [-x_{0},x_{0}]}, and thusx{\displaystyle x}will have variance ofσX2=x02/3.{\displaystyle \sigma _{X}^{2}=x_{0}^{2}/3.}Let the noise vectorz{\displaystyle z}be normally distributed asN(0,σZ2I){\displaystyle N(0,\sigma _{Z}^{2}I)}whereI{\displaystyle I}is an identity matrix. Alsox{\displaystyle x}andz{\displaystyle z}are independent andCXZ=0{\displaystyle C_{XZ}=0}. It is easy to see that Thus, the linear MMSE estimator is given by We can simplify the expression by using the alternative form forW{\displaystyle W}as where fory=[y1,y2,…,yN]T{\displaystyle y=[y_{1},y_{2},\ldots ,y_{N}]^{T}}we havey¯=1TyN=∑i=1NyiN.{\displaystyle {\bar {y}}={\frac {1^{T}y}{N}}={\frac {\sum _{i=1}^{N}y_{i}}{N}}.} Similarly, the variance of the estimator is Thus the MMSE of this linear estimator is For very largeN{\displaystyle N}, we see that the MMSE estimator of a scalar with a uniform a priori distribution can be approximated by the arithmetic average of all the observed data while the variance will be unaffected by dataσX^2=σX2,{\displaystyle \sigma _{\hat {X}}^{2}=\sigma _{X}^{2},}and the LMMSE of the estimate will tend to zero. However, the estimator is suboptimal since it is constrained to be linear. Had the random variablex{\displaystyle x}also been Gaussian, then the estimator would have been optimal. Notice, that the form of the estimator will remain unchanged, regardless of the a priori distribution ofx{\displaystyle x}, so long as the mean and variance of these distributions are the same. Consider a variation of the above example: Two candidates are standing for an election. 
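For this example the linear MMSE estimate reduces to a shrunken sample mean, x̂ = σ_X²/(σ_X² + σ_Z²/N) · ȳ, which tends to the arithmetic average for large N as stated above. A minimal sketch (assuming a zero-mean prior, as in the text; the function name is ours):

```python
import numpy as np

def lmmse_scalar_in_noise(y, prior_var, noise_var):
    """LMMSE estimate of a zero-mean scalar x from y_i = x + z_i:
       x_hat = prior_var / (prior_var + noise_var / N) * mean(y)."""
    N = len(y)
    return prior_var / (prior_var + noise_var / N) * np.mean(y)
```

For a single observation the estimate is visibly shrunk toward the prior mean; as N grows the shrinkage factor approaches one.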
Let the fraction of votes that a candidate will receive on an election day bex∈[0,1].{\displaystyle x\in [0,1].}Thus the fraction of votes the other candidate will receive will be1−x.{\displaystyle 1-x.}We shall takex{\displaystyle x}as a random variable with a uniform prior distribution over[0,1]{\displaystyle [0,1]}so that its mean isx¯=1/2{\displaystyle {\bar {x}}=1/2}and variance isσX2=1/12.{\displaystyle \sigma _{X}^{2}=1/12.}A few weeks before the election, two independent public opinion polls were conducted by two different pollsters. The first poll revealed that the candidate is likely to gety1{\displaystyle y_{1}}fraction of votes. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an errorz1{\displaystyle z_{1}}with zero mean and varianceσZ12.{\displaystyle \sigma _{Z_{1}}^{2}.}Similarly, the second pollster declares their estimate to bey2{\displaystyle y_{2}}with an errorz2{\displaystyle z_{2}}with zero mean and varianceσZ22.{\displaystyle \sigma _{Z_{2}}^{2}.}Note that except for the mean and variance of the error, the error distribution is unspecified. How should the two polls be combined to obtain the voting prediction for the given candidate? As with previous example, we have Here, both theE⁡{y1}=E⁡{y2}=x¯=1/2{\displaystyle \operatorname {E} \{y_{1}\}=\operatorname {E} \{y_{2}\}={\bar {x}}=1/2}. Thus, we can obtain the LMMSE estimate as the linear combination ofy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}as where the weights are given by Here, since the denominator term is constant, the poll with lower error is given higher weight in order to predict the election outcome. 
Lastly, the variance ofx^{\displaystyle {\hat {x}}}is given by which makesσX^2{\displaystyle \sigma _{\hat {X}}^{2}}smaller thanσX2.{\displaystyle \sigma _{X}^{2}.}Thus, the LMMSE is given by In general, if we haveN{\displaystyle N}pollsters, thenx^=∑i=1Nwi(yi−x¯)+x¯,{\displaystyle {\hat {x}}=\sum _{i=1}^{N}w_{i}(y_{i}-{\bar {x}})+{\bar {x}},}where the weight fori-th pollster is given bywi=1/σZi2∑j=1N1/σZj2+1/σX2{\displaystyle w_{i}={\frac {1/\sigma _{Z_{i}}^{2}}{\sum _{j=1}^{N}1/\sigma _{Z_{j}}^{2}+1/\sigma _{X}^{2}}}}and the LMMSE is given byLMMSE=1∑j=1N1/σZj2+1/σX2.{\displaystyle \mathrm {LMMSE} ={\frac {1}{\sum _{j=1}^{N}1/\sigma _{Z_{j}}^{2}+1/\sigma _{X}^{2}}}.} Suppose that a musician is playing an instrument and that the sound is received by two microphones, each of them located at two different places. Let the attenuation of sound due to distance at each microphone bea1{\displaystyle a_{1}}anda2{\displaystyle a_{2}}, which are assumed to be known constants. Similarly, let the noise at each microphone bez1{\displaystyle z_{1}}andz2{\displaystyle z_{2}}, each with zero mean and variancesσZ12{\displaystyle \sigma _{Z_{1}}^{2}}andσZ22{\displaystyle \sigma _{Z_{2}}^{2}}respectively. Letx{\displaystyle x}denote the sound produced by the musician, which is a random variable with zero mean and varianceσX2.{\displaystyle \sigma _{X}^{2}.}How should the recorded music from these two microphones be combined, after being synced with each other? We can model the sound received by each microphone as Here both theE⁡{y1}=E⁡{y2}=0{\displaystyle \operatorname {E} \{y_{1}\}=\operatorname {E} \{y_{2}\}=0}. Thus, we can combine the two sounds as where thei-th weight is given as
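The N-pollster formulas above amount to inverse-variance weighting against the prior precision. A minimal sketch (the function name is ours):

```python
import numpy as np

def pool_polls(y, noise_vars, prior_mean, prior_var):
    """Combine N poll results y_i with error variances sigma_i^2 and a prior
    (prior_mean, prior_var):  w_i = (1/sigma_i^2) / (sum_j 1/sigma_j^2 + 1/prior_var)."""
    prec = 1.0 / np.asarray(noise_vars)
    denom = prec.sum() + 1.0 / prior_var
    w = prec / denom
    x_hat = w @ (np.asarray(y) - prior_mean) + prior_mean
    lmmse = 1.0 / denom                  # LMMSE = 1 / (total precision)
    return x_hat, lmmse
```

As the text notes, the poll with the lower error variance receives the higher weight, and the resulting LMMSE is smaller than the prior variance.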
https://en.wikipedia.org/wiki/Minimum_mean_square_error
Peak signal-to-noise ratio(PSNR) is an engineering term for the ratio between the maximum possible power of asignaland the power of corruptingnoisethat affects the fidelity of its representation. Because many signals have a very widedynamic range, PSNR is usually expressed as alogarithmicquantity using thedecibelscale. PSNR is commonly used to quantify reconstruction quality for images and video subject tolossy compression. PSNR is most easily defined via themean squared error(MSE). Given a noise-freem×nmonochrome imageIand its noisy approximationK,MSEis defined as The PSNR (indB) is defined as Here,MAXIis the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linearPCMwithBbits per sample,MAXIis 2B− 1. Forcolor imageswith threeRGBvalues per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences (now for each color, i.e. three times as many differences as in a monochrome image) divided by image size and by three. Alternately, for color images the image is converted to a differentcolor spaceand PSNR is reported against each channel of that color space, e.g.,YCbCrorHSL.[1][2] PSNR is most commonly used to measure the quality of reconstruction of lossy compressioncodecs(e.g., forimage compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is anapproximationto human perception of reconstruction quality. Typical values for the PSNR inlossyimage and video compression are between 30 and 50 dB, provided the bit depth is 8bits, where higher is better. 
The processing quality of 12-bit images is considered high when the PSNR value is 60 dB or higher.[3][4] For 16-bit data, typical values for the PSNR are between 60 and 80 dB.[5][6] Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB.[7][8] The maximum PSNR is 48.131 dB for 8-bit data, 60.198 dB for 10-bit data, and 72.245 dB for 12-bit data.

In the absence of noise, the two images $I$ and $K$ are identical, and thus the MSE is zero. In this case the PSNR is infinite (or undefined, see Division by zero).[9] Although a higher PSNR generally correlates with a higher quality reconstruction, in many cases it may not. One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when it is used to compare results from the same codec (or codec type) and same content.[10]

Generally, when it comes to estimating the quality of images and videos as perceived by humans, PSNR has been shown to perform very poorly compared to other quality metrics.[10][11]

PSNR-HVS[12] is an extension of PSNR that incorporates properties of the human visual system such as contrast perception. PSNR-HVS-M improves on PSNR-HVS by additionally taking into account visual masking.[13] In a 2007 study, it delivered better approximations of human visual quality judgements than PSNR and SSIM by a large margin. It was also shown to have a distinct advantage over DCTune and PSNR-HVS.[14]
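The MSE-based definition can be sketched directly. The example values below (a 4×4 image with one corrupted pixel) are illustrative, not from the text:

```python
import numpy as np

def psnr(reference, approximation, max_val=255.0):
    """PSNR in dB via the MSE definition; identical images give infinity."""
    diff = reference.astype(float) - approximation.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")                   # zero MSE: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.zeros((4, 4), dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 16                               # one bad pixel: MSE = 16**2 / 16 = 16
quality = psnr(img, noisy)                     # about 36 dB for 8-bit data
```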
https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Bayesian interpretation of kernel regularization examines how kernel methods in machine learning can be understood through the lens of Bayesian statistics, a framework that uses probability to model uncertainty. Kernel methods are founded on the concept of similarity between inputs within a structured space. While techniques like support vector machines (SVMs) and their regularization (a technique to make a model more generalizable and transferable) were not originally formulated using Bayesian principles, analyzing them from a Bayesian perspective provides valuable insights.

In the Bayesian framework, kernel methods serve as a fundamental component of Gaussian processes, where the kernel function operates as a covariance function that defines relationships between inputs. Traditionally, these methods have been applied to supervised learning problems where inputs are represented as vectors and outputs as scalars. Recent developments have extended kernel methods to handle multiple outputs, as seen in multi-task learning.[1]

The mathematical framework for kernel methods typically involves reproducing kernel Hilbert spaces (RKHS). Not all kernels define inner product spaces, as they may not always be positive semidefinite (the property ensuring non-negative similarity measures), but many of the results below carry over to such more general settings. A mathematical equivalence between regularization approaches and Bayesian methods can be established, particularly in cases where the reproducing kernel Hilbert space is finite-dimensional. This equivalence demonstrates how both perspectives converge to essentially the same estimators, revealing the underlying connection between these seemingly different approaches.
The classical supervised learning problem requires estimating the output for some new input point $\mathbf{x}'$ by learning a scalar-valued estimator $\hat{f}(\mathbf{x}')$ on the basis of a training set $S$ consisting of $n$ input-output pairs, $S = (\mathbf{X}, \mathbf{Y}) = (\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)$.[2] Given a symmetric and positive bivariate function $k(\cdot, \cdot)$ called a kernel, one of the most popular estimators in machine learning is given by
$$\hat{f}(\mathbf{x}') = \mathbf{k}^\top (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}, \qquad (1)$$
where $\mathbf{K} \equiv k(\mathbf{X}, \mathbf{X})$ is the kernel matrix with entries $\mathbf{K}_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$, $\mathbf{k} = [k(\mathbf{x}_1, \mathbf{x}'), \ldots, k(\mathbf{x}_n, \mathbf{x}')]^\top$, and $\mathbf{Y} = [y_1, \ldots, y_n]^\top$. We will see how this estimator can be derived both from a regularization and a Bayesian perspective.

The main assumption in the regularization perspective is that the set of functions $\mathcal{F}$ is assumed to belong to a reproducing kernel Hilbert space $\mathcal{H}_k$.[2][3][4][5] A reproducing kernel Hilbert space (RKHS) $\mathcal{H}_k$ is a Hilbert space of functions defined by a symmetric, positive-definite function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ called the reproducing kernel, such that the function $k(\mathbf{x}, \cdot)$ belongs to $\mathcal{H}_k$ for all $\mathbf{x} \in \mathcal{X}$.[6][7][8] There are three main properties that make an RKHS appealing:

1.
The reproducing property, after which the RKHS is named,
$$\langle f, k(\mathbf{x}, \cdot) \rangle_k = f(\mathbf{x}) \quad \text{for all } \mathbf{x} \in \mathcal{X},\ f \in \mathcal{H}_k,$$
where $\langle \cdot, \cdot \rangle_k$ is the inner product in $\mathcal{H}_k$.

2. Functions in an RKHS are in the closure of linear combinations of the kernel at given points,
$$f(\mathbf{x}) = \sum_i c_i k(\mathbf{x}_i, \mathbf{x}).$$
This allows the construction in a unified framework of both linear and generalized linear models.

3. The squared norm in an RKHS can be written as
$$\|f\|_k^2 = \sum_i \sum_j c_i c_j k(\mathbf{x}_i, \mathbf{x}_j)$$
and could be viewed as measuring the complexity of the function.

The estimator is derived as the minimizer of the regularized functional
$$\frac{1}{n} \sum_{i=1}^n (y_i - f(\mathbf{x}_i))^2 + \lambda \|f\|_k^2, \qquad (2)$$
where $f \in \mathcal{H}_k$ and $\|\cdot\|_k$ is the norm in $\mathcal{H}_k$. The first term in this functional, which measures the average of the squares of the errors between the $f(\mathbf{x}_i)$ and the $y_i$, is called the empirical risk and represents the cost we pay by predicting $f(\mathbf{x}_i)$ for the true value $y_i$. The second term in the functional is the squared norm in an RKHS multiplied by a weight $\lambda$; it serves to stabilize the problem[3][5] as well as to add a trade-off between fitting and complexity of the estimator.[2] The weight $\lambda$, called the regularizer, determines the degree to which instability and complexity of the estimator should be penalized (higher penalty for increasing value of $\lambda$).

The explicit form of the estimator in equation (1) is derived in two steps. First, the representer theorem[9][10][11] states that the minimizer of the functional (2) can always be written as a linear combination of the kernels centered at the training-set points,
$$f(\mathbf{x}) = \sum_{i=1}^n c_i k(\mathbf{x}_i, \mathbf{x}), \qquad (3)$$
for some $\mathbf{c} \in \mathbb{R}^n$.
The explicit form of the coefficients $\mathbf{c} = [c_1, \ldots, c_n]^\top$ can be found by substituting for $f(\cdot)$ in the functional (2). For a function of the form in equation (3), we have that
$$\|f\|_k^2 = \mathbf{c}^\top \mathbf{K} \mathbf{c}, \qquad f(\mathbf{x}_i) = (\mathbf{K}\mathbf{c})_i.$$
We can rewrite the functional (2) as
$$\frac{1}{n} \|\mathbf{Y} - \mathbf{K}\mathbf{c}\|^2 + \lambda\, \mathbf{c}^\top \mathbf{K} \mathbf{c}.$$
This functional is convex in $\mathbf{c}$ and therefore we can find its minimum by setting the gradient with respect to $\mathbf{c}$ to zero,
$$-\frac{1}{n} \mathbf{K}(\mathbf{Y} - \mathbf{K}\mathbf{c}) + \lambda \mathbf{K}\mathbf{c} = 0 \quad \Longrightarrow \quad \mathbf{c} = (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.$$
Substituting this expression for the coefficients in equation (3), we obtain the estimator stated previously in equation (1),
$$\hat{f}(\mathbf{x}') = \mathbf{k}^\top (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y}.$$

The notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the Gaussian process. As part of the Bayesian framework, the Gaussian process specifies the prior distribution that describes the prior beliefs about the properties of the function being modeled. These beliefs are updated after taking into account observational data by means of a likelihood function that relates the prior beliefs to the observations. Taken together, the prior and likelihood lead to an updated distribution called the posterior distribution that is customarily used for predicting test cases.

A Gaussian process (GP) is a stochastic process in which any finite number of sampled random variables follow a joint normal distribution.[12] The mean vector and covariance matrix of the Gaussian distribution completely specify the GP. GPs are usually used as a prior distribution for functions, and as such the mean vector and covariance matrix can be viewed as functions, where the covariance function is also called the kernel of the GP.
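The regularized estimator can be sketched in a few lines, assuming the standard closed form $\mathbf{c} = (\mathbf{K} + \lambda n \mathbf{I})^{-1}\mathbf{Y}$. The Gaussian kernel, its width, and the toy data here are illustrative choices, not part of the text:

```python
import numpy as np

def gaussian_kernel(A, B, width=0.3):
    # k(x, x') = exp(-||x - x'||^2 / (2 * width^2)); width is an illustrative choice
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * width**2))

def fit_krr(X, y, lam):
    """Solve c = (K + lambda*n*I)^{-1} Y and return f(x') = k^T c."""
    n = len(y)
    K = gaussian_kernel(X, X)
    c = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Xnew: gaussian_kernel(Xnew, X) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=30)
f_hat = fit_krr(X, y, lam=1e-4)        # small lambda: close to interpolation
```

Increasing `lam` trades training fit for smoothness, exactly the trade-off that the regularizer $\lambda$ controls in the functional.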
Let a function $f$ follow a Gaussian process with mean function $m$ and kernel function $k$,
$$f \sim \mathcal{GP}(m, k).$$
In terms of the underlying Gaussian distribution, we have that for any finite set $\mathbf{X} = \{\mathbf{x}_i\}_{i=1}^n$, if we let $f(\mathbf{X}) = [f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n)]^\top$, then
$$f(\mathbf{X}) \sim \mathcal{N}(\mathbf{m}, \mathbf{K}),$$
where $\mathbf{m} = m(\mathbf{X}) = [m(\mathbf{x}_1), \ldots, m(\mathbf{x}_N)]^\top$ is the mean vector and $\mathbf{K} = k(\mathbf{X}, \mathbf{X})$ is the covariance matrix of the multivariate Gaussian distribution.

In a regression context, the likelihood function is usually assumed to be a Gaussian distribution and the observations to be independent and identically distributed (iid),
$$p(\mathbf{Y} \mid f, \mathbf{X}, \sigma^2) = \mathcal{N}(f(\mathbf{X}), \sigma^2 \mathbf{I}).$$
This assumption corresponds to the observations being corrupted with zero-mean Gaussian noise with variance $\sigma^2$. The iid assumption makes it possible to factorize the likelihood function over the data points given the set of inputs $\mathbf{X}$ and the variance of the noise $\sigma^2$, and thus the posterior distribution can be computed analytically. For a test input vector $\mathbf{x}'$, given the training data $S = \{\mathbf{X}, \mathbf{Y}\}$, the posterior distribution is given by
$$p(f(\mathbf{x}') \mid S, \mathbf{x}', \boldsymbol{\phi}) = \mathcal{N}(m(\mathbf{x}'), \sigma^2(\mathbf{x}')),$$
where $\boldsymbol{\phi}$ denotes the set of parameters, which include the variance of the noise $\sigma^2$ and any parameters from the covariance function $k$, and where
$$m(\mathbf{x}') = \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{Y}, \qquad \sigma^2(\mathbf{x}') = k(\mathbf{x}', \mathbf{x}') - \mathbf{k}^\top (\mathbf{K} + \sigma^2 \mathbf{I})^{-1} \mathbf{k}.$$

A connection between regularization theory and Bayesian theory can only be achieved in the case of finite-dimensional RKHS.
Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction.[3][12][13] In the finite-dimensional case, every RKHS can be described in terms of a feature map $\Phi : \mathcal{X} \to \mathbb{R}^p$ such that[2]
$$k(\mathbf{x}, \mathbf{x}') = \langle \Phi(\mathbf{x}), \Phi(\mathbf{x}') \rangle.$$
Functions in the RKHS with kernel $\mathbf{K}$ can then be written as
$$f_{\mathbf{w}}(\mathbf{x}) = \langle \mathbf{w}, \Phi(\mathbf{x}) \rangle,$$
and we also have that
$$\|f_{\mathbf{w}}\|_k^2 = \|\mathbf{w}\|^2.$$
We can now build a Gaussian process by assuming $\mathbf{w} = [w^1, \ldots, w^p]^\top$ to be distributed according to a multivariate Gaussian distribution with zero mean and identity covariance matrix,
$$\mathbf{w} \sim \mathcal{N}(0, \mathbf{I}).$$
If we assume a Gaussian likelihood, we have
$$p(\mathbf{Y} \mid \mathbf{X}, \mathbf{w}) = \mathcal{N}(f_{\mathbf{w}}(\mathbf{X}), \sigma^2 \mathbf{I}),$$
where $f_{\mathbf{w}}(\mathbf{X}) = (\langle \mathbf{w}, \Phi(\mathbf{x}_1) \rangle, \ldots, \langle \mathbf{w}, \Phi(\mathbf{x}_n) \rangle)$. The resulting posterior distribution is then given by
$$p(\mathbf{w} \mid \mathbf{X}, \mathbf{Y}) \propto \exp\!\left(-\frac{1}{2\sigma^2} \|\mathbf{Y} - f_{\mathbf{w}}(\mathbf{X})\|^2 - \frac{1}{2} \|\mathbf{w}\|^2\right).$$
We can see that a maximum a posteriori (MAP) estimate is equivalent to the minimization problem defining Tikhonov regularization, where in the Bayesian case the regularization parameter is related to the noise variance.

From a philosophical perspective, the loss function in a regularization setting plays a different role than the likelihood function in the Bayesian setting. Whereas the loss function measures the error that is incurred when predicting $f(\mathbf{x})$ in place of $y$, the likelihood function measures how likely the observations are given the model that was assumed to be true in the generative process. From a mathematical perspective, however, the formulations of the regularization and Bayesian frameworks give the loss function and the likelihood function the same mathematical role: promoting the inference of functions $f$ that approximate the labels $y$ as much as possible.
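The claimed equivalence can be verified numerically: under the common identification $\lambda n = \sigma^2$, the GP posterior mean at a test point coincides with the Tikhonov-regularized (MAP) prediction. The linear kernel, data, and noise variance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

K = X @ X.T                     # linear kernel: k(x, x') = <x, x'>
sigma2 = 0.5                    # noise variance of the Gaussian likelihood
x_test = rng.normal(size=(1, 3))
k_vec = (X @ x_test.T).ravel()  # k = [k(x_1, x'), ..., k(x_n, x')]

# GP posterior mean: k^T (K + sigma^2 I)^{-1} Y
gp_mean = k_vec @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# Tikhonov / MAP prediction with lambda chosen so that lambda * n = sigma^2
lam = sigma2 / n
map_mean = k_vec @ np.linalg.solve(K + lam * n * np.eye(n), y)
```

The two solves are term-by-term identical once $\lambda n = \sigma^2$, which is the sense in which the regularization parameter "is related to the noise variance".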
https://en.wikipedia.org/wiki/Bayesian_interpretation_of_regularization
In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over
$$\min_x \|Ax - y\|^2 + \lambda \|x\|^2$$
to find a vector $x$ that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as
$$\min_X \|AX - Y\|^2 + \lambda \|X\|^2,$$
where the vector norm enforcing a regularization penalty on $x$ has been extended to a matrix norm on $X$.

Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.

Consider a matrix $W$ to be learned from a set of examples, $S = (X_i^t, y_i^t)$, where $i$ goes from $1$ to $n$, and $t$ goes from $1$ to $T$. Let each input matrix $X_i^t$ be in $\mathbb{R}^{D \times T}$, and let $W$ be of size $D \times T$. A general model for the output $y$ can be posed as
$$y_i^t = \langle W, X_i^t \rangle_F,$$
where the inner product is the Frobenius inner product.
For different applications the matrices $X_i$ will have different forms,[1] but for each of these the optimization problem to infer $W$ can be written as
$$\min_{W \in \mathcal{H}} E(W) + R(W),$$
where $E$ defines the empirical error for a given $W$, and $R(W)$ is a matrix regularization penalty. The function $R(W)$ is typically chosen to be convex, and is often selected to enforce sparsity (using $\ell^1$-norms) and/or smoothness (using $\ell^2$-norms). Finally, $W$ is in the space of matrices $\mathcal{H}$ with Frobenius inner product $\langle \dots \rangle_F$.

In the problem of matrix completion, the matrix $X_i^t$ takes the form
$$X_i^t = e_t \otimes e_i',$$
where $(e_t)_t$ and $(e_i')_i$ are the canonical bases in $\mathbb{R}^T$ and $\mathbb{R}^D$. In this case the role of the Frobenius inner product is to select individual elements $w_i^t$ from the matrix $W$. Thus, the output $y$ is a sampling of entries from the matrix $W$.

The problem of reconstructing $W$ from a small set of sampled entries is possible only under certain restrictions on the matrix, and these restrictions can be enforced by a regularization function. For example, it might be assumed that $W$ is low-rank, in which case the regularization penalty can take the form of a nuclear norm:[2]
$$R(W) = \lambda \|W\|_* = \lambda \sum_i |\sigma_i|,$$
where $\sigma_i$, with $i$ from $1$ to $\min(D, T)$, are the singular values of $W$.
Models used in multivariate regression are parameterized by a matrix of coefficients. In the Frobenius inner product above, each matrix $X$ is
$$X_i^t = e_t \otimes x_i,$$
such that the output of the inner product is the dot product of one row of the input with one column of the coefficient matrix. The familiar form of such models is
$$Y = XW + b.$$
Many of the vector norms used in single-variable regression can be extended to the multivariate case. One example is the squared Frobenius norm, which can be viewed as an $\ell^2$-norm acting either entrywise or on the singular values of the matrix:
$$R(W) = \lambda \|W\|_F^2 = \lambda \sum_i \sum_j |w_{ij}|^2 = \lambda \operatorname{Tr}(W^* W) = \lambda \sum_i \sigma_i^2.$$
In the multivariate case the effect of regularizing with the Frobenius norm is the same as in the vector case: very complex models will have larger norms and will thus be penalized more.

The setup for multi-task learning is almost the same as the setup for multivariate regression. The primary difference is that the input variables are also indexed by task (columns of $Y$). The representation with the Frobenius inner product is then
$$X_i^t = e_t \otimes x_i^t.$$
The role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem
$$\min_W \|XW - Y\|_2^2 + \lambda \|W\|_2^2,$$
the solutions corresponding to each column of $Y$ are decoupled. That is, the same solution can be found by solving the joint problem, or by solving an isolated regression problem for each column.
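The decoupling claim is easy to check numerically: with a Frobenius-norm penalty, solving the joint problem via the normal equations gives the same matrix as running one ridge regression per column of $Y$. The dimensions and data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
Y = rng.normal(size=(50, 3))
lam = 0.7

# Joint minimizer of ||XW - Y||_F^2 + lam * ||W||_F^2 (normal equations)
W_joint = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ Y)

# One isolated ridge regression per column of Y gives the same coefficients
W_cols = np.column_stack([
    np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ Y[:, t]) for t in range(3)
])
```

This is why coupling the tasks requires an additional penalty (such as the covariance term discussed next); the Frobenius norm alone cannot share information across columns.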
The problems can be coupled by adding an additional regularization penalty on the covariance of solutions,
$$\min_{W, \Omega} \|XW - Y\|_2^2 + \lambda_1 \|W\|_2^2 + \lambda_2 \operatorname{Tr}(W^T \Omega^{-1} W),$$
where $\Omega$ models the relationship between tasks. This scheme can be used both to enforce similarity of solutions across tasks, and to learn the specific structure of task similarity by alternating between optimizations of $W$ and $\Omega$.[3] When the relationship between tasks is known to lie on a graph, the Laplacian matrix of the graph can be used to couple the learning problems.

Regularization by spectral filtering has been used to find stable solutions to problems such as those discussed above by addressing ill-posed matrix inversions (see for example Filter function for Tikhonov regularization). In many cases the regularization function acts on the input (or kernel) to ensure a bounded inverse by eliminating small singular values, but it can also be useful to have spectral norms that act on the matrix that is to be learned.

There are a number of matrix norms that act on the singular values of the matrix. Frequently used examples include the Schatten p-norms, with $p = 1$ or $2$. For example, matrix regularization with a Schatten 1-norm, also called the nuclear norm, can be used to enforce sparsity in the spectrum of a matrix.
This has been used in the context of matrix completion when the matrix in question is believed to have a restricted rank.[2] In this case the optimization problem becomes
$$\min \|W\|_* \quad \text{subject to} \quad W_{i,j} = Y_{ij}.$$
Spectral regularization is also used to enforce a reduced-rank coefficient matrix in multivariate regression.[4] In this setting, a reduced-rank coefficient matrix can be found by keeping just the top $n$ singular values, but this can be extended to keep any reduced set of singular values and vectors.

Sparse optimization has become the focus of much research interest as a way to find solutions that depend on a small number of variables (see e.g. the Lasso method). In principle, entry-wise sparsity can be enforced by penalizing the entry-wise $\ell^0$-norm of the matrix, but the $\ell^0$-norm is not convex. In practice this can be implemented by convex relaxation to the $\ell^1$-norm. While entry-wise regularization with an $\ell^1$-norm will find solutions with a small number of nonzero elements, applying an $\ell^1$-norm to different groups of variables can enforce structure in the sparsity of solutions.[5]

The most straightforward example of structured sparsity uses the $\ell_{p,q}$ norm with $p = 2$ and $q = 1$:
$$\|W\|_{2,1} = \sum_i \|w_i\|_2.$$
For example, the $\ell_{2,1}$ norm is used in multi-task learning to group features across tasks, such that all the elements in a given row of the coefficient matrix can be forced to zero as a group.[6] The grouping effect is achieved by taking the $\ell^2$-norm of each row, and then taking the total penalty to be the sum of these row-wise norms.
This regularization results in rows that will tend to be either all zeros or dense. The same type of regularization can be used to enforce sparsity column-wise by taking the $\ell^2$-norms of each column. More generally, the $\ell_{2,1}$ norm can be applied to arbitrary groups of variables:
$$R(W) = \lambda \sum_g^G \sqrt{\sum_j^{|G_g|} \left|w_g^j\right|^2} = \lambda \sum_g^G \|w_g\|_g,$$
where the index $g$ runs across groups of variables, and $|G_g|$ indicates the cardinality of group $g$.

Algorithms for solving these group sparsity problems extend the more well-known Lasso and group Lasso methods, for example by allowing overlapping groups, and have been implemented via matching pursuit[7] and proximal gradient methods.[8] By writing the proximal gradient with respect to a given coefficient $w_g^i$, it can be seen that this norm enforces a group-wise soft threshold[1]
$$\operatorname{prox}_{\lambda, R_g}(w_g)^i = \left(w_g^i - \lambda \frac{w_g^i}{\|w_g\|_g}\right) \mathbf{1}_{\|w_g\|_g \geq \lambda},$$
where $\mathbf{1}_{\|w_g\|_g \geq \lambda}$ is the indicator function for group norms $\geq \lambda$.

Thus, using $\ell_{2,1}$ norms it is straightforward to enforce structure in the sparsity of a matrix either row-wise, column-wise, or in arbitrary blocks. By enforcing group norms on blocks in multivariate or multi-task regression, for example, it is possible to find groups of input and output variables such that defined subsets of output variables (columns in the matrix $Y$) will depend on the same sparse set of input variables.
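The group-wise soft threshold can be sketched for the row-grouping case, where each group is one row of $W$. The function name and example matrix are illustrative; rows whose $\ell^2$-norm falls below $\lambda$ are zeroed out as a group, and surviving rows are shrunk:

```python
import numpy as np

def prox_group_l21(W, lam):
    """Row-wise proximal operator of lam * ||W||_{2,1}:
    each row is scaled by (1 - lam/||row||) if ||row|| >= lam, else set to zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    safe = np.maximum(norms, 1e-30)              # avoid division by zero
    return np.where(norms >= lam, (1.0 - lam / safe) * W, 0.0)

W = np.array([[3.0, 4.0],    # row norm 5 >= 1: survives, shrunk to norm 4
              [0.3, 0.4]])   # row norm 0.5 < 1: removed as a group
W_sparse = prox_group_l21(W, lam=1.0)
```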
The ideas of structured sparsity and feature selection can be extended to the nonparametric case of multiple kernel learning.[9] This can be useful when there are multiple types of input data (color and texture, for example) with different appropriate kernels for each, or when the appropriate kernel is unknown. If there are two kernels, for example, with feature maps $A$ and $B$ that lie in corresponding reproducing kernel Hilbert spaces $\mathcal{H}_A, \mathcal{H}_B$, then a larger space, $\mathcal{H}_D$, can be created as the sum of the two spaces:
$$\mathcal{H}_D : f = h + h'; \quad h \in \mathcal{H}_A,\ h' \in \mathcal{H}_B,$$
assuming linear independence of $A$ and $B$. In this case the $\ell_{2,1}$-norm is again a sum of norms:
$$\|f\|_{\mathcal{H}_D, 1} = \|h\|_{\mathcal{H}_A} + \|h'\|_{\mathcal{H}_B}.$$
Thus, by choosing a matrix regularization function of this type, it is possible to find a solution that is sparse in terms of which kernels are used, but dense in the coefficients of each used kernel. Multiple kernel learning can also be used as a form of nonlinear variable selection, or as a model aggregation technique (e.g. by taking the sum of squared norms and relaxing sparsity constraints). For example, each kernel can be taken to be the Gaussian kernel with a different width.
https://en.wikipedia.org/wiki/Matrix_regularization
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart.

Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems (for instance, see[1]), focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator with a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter,[2] a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise.[2]

The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, where the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues".[2] Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD).
As for choosing the regularization parameter, examples of candidate methods to compute this parameter include the discrepancy principle, generalized cross-validation, and the L-curve criterion.[3] It is of note that the notion of spectral filtering studied in the context of machine learning is closely connected to the literature on function approximation (in signal processing).

The training set is defined as $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$, where $X$ is the $n \times d$ input matrix and $Y = (y_1, \dots, y_n)$ is the output vector. Where applicable, the kernel function is denoted by $k$, and the $n \times n$ kernel matrix is denoted by $K$, which has entries $K_{ij} = k(x_i, x_j)$; $\mathcal{H}$ denotes the reproducing kernel Hilbert space (RKHS) with kernel $k$. The regularization parameter is denoted by $\lambda$.

(Note: For $g \in G$ and $f \in F$, with $G$ and $F$ being Hilbert spaces, given a linear, continuous operator $L$, assume that $g = Lf$ holds. In this setting, the direct problem would be to solve for $g$ given $f$, and the inverse problem would be to solve for $f$ given $g$. If the solution exists, is unique, and is stable, the inverse problem (i.e. the problem of solving for $f$) is well-posed; otherwise, it is ill-posed.)

The connection between the regularized least squares (RLS) estimation problem (the Tikhonov regularization setting) and the theory of ill-posed inverse problems is an example of how spectral regularization algorithms are related to the theory of ill-posed inverse problems.
The RLS estimator solves
$$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (y_i - f(x_i))^2 + \lambda \|f\|_{\mathcal{H}}^2,$$
and the RKHS allows for expressing this RLS estimator as
$$f_S^\lambda(X) = \sum_{i=1}^n c_i k(x, x_i), \quad \text{where} \quad (K + n\lambda I) c = Y,$$
with $c = (c_1, \dots, c_n)$.[4] The penalization term is used for controlling smoothness and preventing overfitting. Since the solution of empirical risk minimization
$$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (y_i - f(x_i))^2$$
can be written as $f_S^\lambda(X) = \sum_{i=1}^n c_i k(x, x_i)$ such that $Kc = Y$, adding the penalty function amounts to the following change in the system that needs to be solved:[5]
$$\left\{\, Kc = Y \,\right\} \;\rightarrow\; \left\{\, (K + n\lambda I)\, c = Y \,\right\}.$$
In this learning setting, the kernel matrix can be decomposed as $K = Q \Sigma Q^T$, with
$$\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_n), \qquad \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0,$$
and $q_1, \dots, q_n$ the corresponding eigenvectors.
Therefore, in the initial learning setting, the following holds:
$$c = K^{-1} Y = Q \Sigma^{-1} Q^T Y = \sum_{i=1}^n \frac{1}{\sigma_i} \langle q_i, Y \rangle\, q_i.$$
Thus, for small eigenvalues, even small perturbations in the data can lead to considerable changes in the solution. Hence, the problem is ill-conditioned, and solving this RLS problem amounts to stabilizing a possibly ill-conditioned matrix inversion problem, which is studied in the theory of ill-posed inverse problems; in both problems, a main concern is to deal with the issue of numerical stability.

Each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function, denoted here by $G_\lambda(\cdot)$. If the kernel matrix is denoted by $K$, then $\lambda$ should control the magnitude of the smaller eigenvalues of $G_\lambda(K)$. In a filtering setup, the goal is to find estimators
$$f_S^\lambda(X) := \sum_{i=1}^n c_i k(x, x_i), \quad \text{where} \quad c = G_\lambda(K) Y.$$
To do so, a scalar filter function $G_\lambda(\sigma)$ is defined using the eigen-decomposition of the kernel matrix:
$$G_\lambda(K) = Q\, G_\lambda(\Sigma)\, Q^T,$$
which yields
$$G_\lambda(K) Y = \sum_{i=1}^n G_\lambda(\sigma_i) \langle q_i, Y \rangle\, q_i.$$
Typically, an appropriate filter function should behave like $1/\sigma$ on the large eigenvalues while keeping the contribution of the small eigenvalues bounded.[5] While this gives a rough characterization of the general properties of filter functions for all spectral regularization algorithms, the derivation of the filter function (and hence its exact form) varies depending on the specific regularization method that spectral filtering is applied to. In the Tikhonov regularization setting, the filter function for RLS is described below.
As shown in[4], in this setting,c=(K+nλI)−1Y{\displaystyle c=\left(K+n\lambda I\right)^{-1}Y}. Thus,c=(K+nλI)−1Y=Q(Σ+nλI)−1QTY=∑i=1n1σi+nλ⟨qi,Y⟩qi.{\displaystyle c=(K+n\lambda I)^{-1}Y=Q(\Sigma +n\lambda I)^{-1}Q^{T}Y=\sum _{i=1}^{n}{\frac {1}{\sigma _{i}+n\lambda }}\langle q_{i},Y\rangle q_{i}.} The undesired components are filtered out using regularization. The filter function for Tikhonov regularization is therefore defined as:[5]Gλ(σ)=1σ+nλ.{\displaystyle G_{\lambda }(\sigma )={\frac {1}{\sigma +n\lambda }}.} The idea behind the Landweber iteration isgradient descent.[5]In this setting, ifn{\displaystyle n}is larger thanK{\displaystyle K}'s largest eigenvalue, the iteration converges by choosingη=2/n{\displaystyle \eta =2/n}as the step-size.[5]The iteration is equivalent to minimizing1n‖Y−Kc‖22{\displaystyle {\frac {1}{n}}\left\|Y-Kc\right\|_{2}^{2}}(i.e. the empirical risk) via gradient descent; using induction, it can be proved that at thet{\displaystyle t}-th iteration, the solution is given by[5]c=η∑i=0t−1(I−ηK)iY.{\displaystyle c=\eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}Y.} Thus, the appropriate filter function is defined by:Gλ(σ)=η∑i=0t−1(I−ησ)i.{\displaystyle G_{\lambda }(\sigma )=\eta \sum _{i=0}^{t-1}\left(I-\eta \sigma \right)^{i}.} It can be shown that this filter function corresponds to a truncated power expansion ofK−1{\displaystyle K^{-1}};[5]to see this, note that the relation∑i≥0xi=1/(1−x){\displaystyle \sum _{i\geq 0}x^{i}=1/(1-x)}, would still hold ifx{\displaystyle x}is replaced by a matrix; thus, ifK{\displaystyle K}(the kernel matrix), or ratherI−ηK{\displaystyle I-\eta K}, is considered, the following holds:K−1=η∑i=0∞(I−ηK)i∼η∑i=0t−1(I−ηK)i.{\displaystyle K^{-1}=\eta \sum _{i=0}^{\infty }\left(I-\eta K\right)^{i}\sim \eta \sum _{i=0}^{t-1}\left(I-\eta K\right)^{i}.} In this setting, the number of iterations gives the regularization parameter; roughly speaking,t∼1/λ{\displaystyle t\sim 1/\lambda }.[5]Ift{\displaystyle t}is large,
overfitting may be a concern. Ift{\displaystyle t}is small, oversmoothing may be a concern. Thus, choosing an appropriate time for early stopping of the iterations provides a regularization effect. In the TSVD setting, given the eigen-decompositionK=QΣQT{\displaystyle K=Q\Sigma Q^{T}}and using a prescribed thresholdλn{\displaystyle \lambda n}, a regularized inverse can be formed for the kernel matrix by discarding all the eigenvalues that are smaller than this threshold.[5]Thus, the filter function for TSVD can be defined asGλ(σ)={1/σ,ifσ≥λn0,otherwise{\displaystyle G_{\lambda }(\sigma )={\begin{cases}1/\sigma ,&{\text{if }}\sigma \geq \lambda n\\[1ex]0,&{\text{otherwise}}\end{cases}}} It can be shown that TSVD is equivalent to the (unsupervised) projection of the data using (kernel)Principal Component Analysis(PCA), and that it is also equivalent to minimizing the empirical risk on the projected data (without regularization).[5]Note that the number of components kept for the projection is the onlyfree parameterhere.
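The Landweber and TSVD filters can be compared on one synthetic problem. The sketch below assumes the standard Landweber recursion c⁰=0, cᵗ=cᵗ⁻¹+η(Y−Kcᵗ⁻¹), which is not written out above, and checks that running it matches its spectral filter; the TSVD filter is applied the same way:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 25
A = rng.normal(size=(n, 4))
K = A @ A.T                       # rank-deficient PSD "kernel" matrix
Y = rng.normal(size=n)
sigma, Q = np.linalg.eigh(K)

# Landweber: t gradient-descent steps on (1/n)||Y - Kc||^2, starting from c = 0
eta, t = 1.0 / (sigma.max() + 1.0), 50
c = np.zeros(n)
for _ in range(t):
    c = c + eta * (Y - K @ c)

# The same estimator as a spectral filter: G(s) = eta * sum_{i<t} (1 - eta*s)^i
G_land = eta * np.array([np.sum((1 - eta * s) ** np.arange(t)) for s in sigma])
c_filter = Q @ (G_land * (Q.T @ Y))

# TSVD: invert only the eigenvalues above a threshold, zero out the rest
thresh = 1e-8
G_tsvd = np.where(sigma >= thresh, 1.0 / np.maximum(sigma, thresh), 0.0)
c_tsvd = Q @ (G_tsvd * (Q.T @ Y))
```

The iterate after t steps coincides with the filtered solution, which is the induction result quoted above.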
https://en.wikipedia.org/wiki/Regularization_by_spectral_filtering
Regularized least squares(RLS) is a family of methods for solving theleast-squaresproblem while usingregularizationto further constrain the resulting solution. RLS is used for two main reasons. The first comes up when the number of variables in the linear system exceeds the number of observations. In such settings, theordinary least-squaresproblem isill-posedand is therefore impossible to fit because the associated optimization problem has infinitely many solutions. RLS allows the introduction of further constraints that uniquely determine the solution. The second reason for using RLS arises when the learned model suffers from poorgeneralization. RLS can be used in such cases to improve the generalizability of the model by constraining it at training time. This constraint can either force the solution to be "sparse" in some way or to reflect other prior knowledge about the problem such as information about correlations between features. ABayesianunderstanding of this can be reached by showing that RLS methods are often equivalent topriorson the solution to the least-squares problem. Consider a learning setting given by a probabilistic space(X×Y,ρ(X,Y)){\displaystyle (X\times Y,\rho (X,Y))},Y∈R{\displaystyle Y\in R}. LetS={xi,yi}i=1n{\displaystyle S=\{x_{i},y_{i}\}_{i=1}^{n}}denote a training set ofn{\displaystyle n}pairs i.i.d. with respect to the joint distributionρ{\displaystyle \rho }. LetV:Y×R→[0;∞){\displaystyle V:Y\times R\to [0;\infty )}be a loss function. DefineF{\displaystyle F}as the space of the functions such that expected risk:ε(f)=∫V(y,f(x))dρ(x,y){\displaystyle \varepsilon (f)=\int V(y,f(x))\,d\rho (x,y)}is well defined. The main goal is to minimize the expected risk:inff∈Fε(f){\displaystyle \inf _{f\in F}\varepsilon (f)}Since the problem cannot be solved exactly there is a need to specify how to measure the quality of a solution. A good learning algorithm should provide an estimator with a small risk. 
As the joint distributionρ{\displaystyle \rho }is typically unknown, the empirical risk is taken. For regularized least squares the square loss function is introduced:ε(f)=1n∑i=1nV(yi,f(xi))=1n∑i=1n(yi−f(xi))2{\displaystyle \varepsilon (f)={\frac {1}{n}}\sum _{i=1}^{n}V(y_{i},f(x_{i}))={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-f(x_{i}))^{2}} However, if the functions are from a relatively unconstrained space, such as the set of square-integrable functions onX{\displaystyle X}, this approach may overfit the training data, and lead to poor generalization. Thus, it should somehow constrain or penalize the complexity of the functionf{\displaystyle f}. In RLS, this is accomplished by choosing functions from a reproducing kernel Hilbert space (RKHS)H{\displaystyle {\mathcal {H}}}, and adding a regularization term to the objective function, proportional to the norm of the function inH{\displaystyle {\mathcal {H}}}:inff∈Fε(f)+λR(f),λ>0{\displaystyle \inf _{f\in F}\varepsilon (f)+\lambda R(f),\lambda >0} A RKHS can be defined by asymmetricpositive-definite kernel functionK(x,z){\displaystyle K(x,z)}with the reproducing property:⟨Kx,f⟩H=f(x),{\displaystyle \langle K_{x},f\rangle _{\mathcal {H}}=f(x),}whereKx(z)=K(x,z){\displaystyle K_{x}(z)=K(x,z)}. The RKHS for a kernelK{\displaystyle K}consists of thecompletionof the space of functions spanned by{Kx∣x∈X}{\displaystyle \left\{K_{x}\mid x\in X\right\}}:f(x)=∑i=1nαiKxi(x),f∈H{\textstyle f(x)=\sum _{i=1}^{n}\alpha _{i}K_{x_{i}}(x),\,f\in {\mathcal {H}}}, where allαi{\displaystyle \alpha _{i}}are real numbers. 
Some commonly used kernels include the linear kernel, inducing the space of linear functions:K(x,z)=xTz,{\displaystyle K(x,z)=x^{\mathsf {T}}z,}the polynomial kernel, inducing the space of polynomial functions of orderd{\displaystyle d}:K(x,z)=(xTz+1)d,{\displaystyle K(x,z)=\left(x^{\mathsf {T}}z+1\right)^{d},}and the Gaussian kernel:K(x,z)=e−‖x−z‖2/σ2.{\displaystyle K(x,z)=e^{-{\left\|x-z\right\|^{2}}/{\sigma ^{2}}}.} Note that for an arbitrary loss functionV{\displaystyle V}, this approach defines a general class of algorithms named Tikhonov regularization. For instance, using thehinge lossleads to thesupport vector machinealgorithm, and using theepsilon-insensitive lossleads tosupport vector regression. Therepresenter theoremguarantees that the solution can be written as:f(x)=∑i=1nciK(xi,x){\displaystyle f(x)=\sum _{i=1}^{n}c_{i}K(x_{i},x)}for somec∈Rn{\displaystyle c\in \mathbb {R} ^{n}}. The minimization problem can be expressed as:minc∈Rn1n‖Y−Kc‖Rn2+λ‖f‖H2,{\displaystyle \min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda \left\|f\right\|_{H}^{2},}where, with some abuse of notation, thei,j{\displaystyle i,j}entry of kernel matrixK{\displaystyle K}(as opposed to kernel functionK(⋅,⋅){\displaystyle K(\cdot ,\cdot )}) isK(xi,xj){\displaystyle K(x_{i},x_{j})}. 
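Each of these kernels can be evaluated into an n×n kernel matrix. A small illustrative sketch (function and parameter names are ours):

```python
import numpy as np

def kernel_matrix(X, kind="linear", d=2, sigma=1.0):
    # Entry (i, j) is K(x_i, x_j) for the chosen kernel
    G = X @ X.T                                   # Gram matrix of inner products
    if kind == "linear":
        return G
    if kind == "poly":
        return (G + 1.0) ** d
    if kind == "gaussian":
        sq = np.diag(G)
        d2 = sq[:, None] + sq[None, :] - 2 * G    # ||x_i - x_j||^2
        return np.exp(-d2 / sigma**2)
    raise ValueError(kind)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K_lin = kernel_matrix(X, "linear")
K_pol = kernel_matrix(X, "poly")
K_gau = kernel_matrix(X, "gaussian")
```

All three matrices are symmetric, and the Gaussian kernel matrix has ones on its diagonal since K(x,x)=1.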
For such a function,‖f‖H2=⟨f,f⟩H=⟨∑i=1nciK(xi,⋅),∑j=1ncjK(xj,⋅)⟩H=∑i=1n∑j=1ncicj⟨K(xi,⋅),K(xj,⋅)⟩H=∑i=1n∑j=1ncicjK(xi,xj)=cTKc,{\displaystyle {\begin{aligned}\left\|f\right\|_{H}^{2}&=\langle f,f\rangle _{H}\\[1ex]&=\left\langle \sum _{i=1}^{n}c_{i}K(x_{i},\cdot ),\sum _{j=1}^{n}c_{j}K(x_{j},\cdot )\right\rangle _{H}\\[1ex]&=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}\left\langle K(x_{i},\cdot ),K(x_{j},\cdot )\right\rangle _{H}\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})\\&=c^{\mathsf {T}}Kc,\end{aligned}}} The following minimization problem can be obtained:minc∈Rn1n‖Y−Kc‖Rn2+λcTKc.{\displaystyle \min _{c\in \mathbb {R} ^{n}}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}Kc.} As the sum of convex functions is convex, the solution is unique and its minimum can be found by setting the gradient with respect toc{\displaystyle c}to0{\displaystyle 0}:−1nK(Y−Kc)+λKc=0⇒K(K+λnI)c=KY⇒c=(K+λnI)−1Y,{\displaystyle -{\frac {1}{n}}K\left(Y-Kc\right)+\lambda Kc=0\Rightarrow K\left(K+\lambda nI\right)c=KY\Rightarrow c=\left(K+\lambda nI\right)^{-1}Y,}wherec∈Rn.{\displaystyle c\in \mathbb {R} ^{n}.} The complexity of training is basically the cost of computing the kernel matrix plus the cost of solving the linear system which is roughlyO(n3){\displaystyle O(n^{3})}. The computation of the kernel matrix for the linear orGaussian kernelisO(n2D){\displaystyle O(n^{2}D)}. The complexity of testing isO(n){\displaystyle O(n)}. The prediction at a new test pointx∗{\displaystyle x_{*}}is:f(x∗)=∑i=1nciK(xi,x∗)=K(X,X∗)Tc{\displaystyle f(x_{*})=\sum _{i=1}^{n}c_{i}K(x_{i},x_{*})=K(X,X_{*})^{\mathsf {T}}c} For convenience a vector notation is introduced. LetX{\displaystyle X}be ann×d{\displaystyle n\times d}matrix, where the rows are input vectors, andY{\displaystyle Y}an×1{\displaystyle n\times 1}vector where the entries are corresponding outputs. In terms of vectors, the kernel matrix can be written asK=XXT{\displaystyle K=XX^{\mathsf {T}}}. 
The learning function can be written as:f(x∗)=Kx∗c=x∗TXTc=x∗Tw{\displaystyle f(x_{*})=K_{x_{*}}c=x_{*}^{\mathsf {T}}X^{\mathsf {T}}c=x_{*}^{\mathsf {T}}w} Here we definew=XTc,w∈Rd{\displaystyle w=X^{\mathsf {T}}c,w\in \mathbb {R} ^{d}}. The objective function can be rewritten as:1n‖Y−Kc‖Rn2+λcTKc=1n‖y−XXTc‖Rn2+λcTXXTc=1n‖y−Xw‖Rn2+λ‖w‖Rd2{\displaystyle {\begin{aligned}{\frac {1}{n}}\left\|Y-Kc\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}Kc&={\frac {1}{n}}\left\|y-XX^{\mathsf {T}}c\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}XX^{\mathsf {T}}c\\[1ex]&={\frac {1}{n}}\left\|y-Xw\right\|_{\mathbb {R} ^{n}}^{2}+\lambda \left\|w\right\|_{\mathbb {R} ^{d}}^{2}\end{aligned}}} The first term is the objective function fromordinary least squares(OLS) regression, corresponding to theresidual sum of squares. The second term is a regularization term, not present in OLS, which penalizes largew{\displaystyle w}values. Since this is a smooth finite-dimensional problem, standard calculus tools can be applied. In order to minimize the objective function, the gradient is calculated with respect tow{\displaystyle w}and set to zero:XTXw−XTy+λnw=0{\displaystyle X^{\mathsf {T}}Xw-X^{\mathsf {T}}y+\lambda nw=0}w=(XTX+λnI)−1XTy{\displaystyle w=\left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}X^{\mathsf {T}}y} This solution closely resembles that of standard linear regression, with an extra termλnI{\displaystyle \lambda nI}. If the assumptions of OLS regression hold, the solutionw=(XTX)−1XTy{\displaystyle w=\left(X^{\mathsf {T}}X\right)^{-1}X^{\mathsf {T}}y}, withλ=0{\displaystyle \lambda =0}, is an unbiased estimator, and is the minimum-variance linear unbiased estimator, according to theGauss–Markov theorem. The termλnI{\displaystyle \lambda nI}therefore leads to a biased solution; however, it also tends to reduce variance.
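The closed-form solution w=(XᵀX+λnI)⁻¹Xᵀy can be sketched directly; setting λ=0 recovers ordinary least squares, while a large λ visibly shrinks the weights (synthetic data, illustrative only):

```python
import numpy as np

def ridge_weights(X, y, lam):
    # w = (X^T X + lam*n*I)^{-1} X^T y
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=50)

w_ols = ridge_weights(X, y, lam=0.0)       # reduces to ordinary least squares
w_reg = ridge_weights(X, y, lam=10.0)      # heavily regularized: shrunk toward 0
```

With λ=0 the weights are close to the generating coefficients; with a large λ their norm is markedly smaller, which is the bias-variance trade-off described above.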
This is easy to see, as thecovariancematrix of thew{\displaystyle w}-values is proportional to(XTX+λnI)−1{\displaystyle \left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}}, and therefore large values ofλ{\displaystyle \lambda }will lead to lower variance. Therefore, manipulatingλ{\displaystyle \lambda }corresponds to trading-off bias and variance. For problems with high-variancew{\displaystyle w}estimates, such as cases with relatively smalln{\displaystyle n}or with correlated regressors, the optimal prediction accuracy may be obtained by using a nonzeroλ{\displaystyle \lambda }, and thus introducing some bias to reduce variance. Furthermore, it is not uncommon inmachine learningto have cases wheren<d{\displaystyle n<d}, in which caseXTX{\displaystyle X^{\mathsf {T}}X}isrank-deficient, and a nonzeroλ{\displaystyle \lambda }is necessary to compute(XTX+λnI)−1{\displaystyle \left(X^{\mathsf {T}}X+\lambda nI\right)^{-1}}. The parameterλ{\displaystyle \lambda }controls the invertibility of the matrixXTX+λnI{\displaystyle X^{\mathsf {T}}X+\lambda nI}. Several methods can be used to solve the above linear system,Cholesky decompositionbeing probably the method of choice, since the matrixXTX+λnI{\displaystyle X^{\mathsf {T}}X+\lambda nI}issymmetricandpositive definite. The complexity of this method isO(nD2){\displaystyle O(nD^{2})}for training andO(D){\displaystyle O(D)}for testing. The costO(nD2){\displaystyle O(nD^{2})}is essentially that of computingXTX{\displaystyle X^{\mathsf {T}}X}, whereas the inverse computation (or rather the solution of the linear system) is roughlyO(D3){\displaystyle O(D^{3})}. In this section it will be shown how to extend RLS to any kind of reproducing kernel K. Instead of linear kernel a feature map is consideredΦ:X→F{\displaystyle \Phi :X\to F}for some Hilbert spaceF{\displaystyle F}, called the feature space. 
In this case the kernel is defined as:K(x,x′)=⟨Φ(x),Φ(x′)⟩F.{\displaystyle K(x,x')=\langle \Phi (x),\Phi (x')\rangle _{F}.}The matrixX{\displaystyle X}is now replaced by the new data matrixΦ{\displaystyle \Phi }, whereΦij=φj(xi){\displaystyle \Phi _{ij}=\varphi _{j}(x_{i})}, thej{\displaystyle j}-th component ofφ(xi){\displaystyle \varphi (x_{i})}. It means that for a given training setK=ΦΦT{\displaystyle K=\Phi \Phi ^{\mathsf {T}}}. Thus, the objective function can be written asminc∈Rn‖Y−ΦΦTc‖Rn2+λcTΦΦTc.{\displaystyle \min _{c\in \mathbb {R} ^{n}}\left\|Y-\Phi \Phi ^{\mathsf {T}}c\right\|_{\mathbb {R} ^{n}}^{2}+\lambda c^{\mathsf {T}}\Phi \Phi ^{\mathsf {T}}c.} This approach is known as thekernel trick. This technique can significantly simplify the computational operations. IfF{\displaystyle F}is high dimensional, computingφ(xi){\displaystyle \varphi (x_{i})}may be rather intensive. If the explicit form of the kernel function is known, we just need to compute and store then×n{\displaystyle n\times n}kernel matrixK{\displaystyle K}. In fact, theHilbert spaceF{\displaystyle F}need not be isomorphic toRm{\displaystyle \mathbb {R} ^{m}}, and can be infinite dimensional. This follows fromMercer's theorem, which states that a continuous, symmetric, positive definite kernel function can be expressed asK(x,z)=∑i=1∞σiei(x)ei(z){\displaystyle K(x,z)=\sum _{i=1}^{\infty }\sigma _{i}e_{i}(x)e_{i}(z)}whereei(x){\displaystyle e_{i}(x)}form anorthonormal basisforℓ2(X){\displaystyle \ell ^{2}(X)}, andσi∈R{\displaystyle \sigma _{i}\in \mathbb {R} }. If a feature mapφ(x){\displaystyle \varphi (x)}is defined with componentsφi(x)=σiei(x){\displaystyle \varphi _{i}(x)={\sqrt {\sigma _{i}}}e_{i}(x)}, it follows thatK(x,z)=⟨φ(x),φ(z)⟩{\displaystyle K(x,z)=\langle \varphi (x),\varphi (z)\rangle }. This demonstrates that any kernel can be associated with a feature map, and that RLS generally consists of linear RLS performed in some possibly higher-dimensional feature space.
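The kernel trick can be verified by hand for the degree-2 polynomial kernel in two dimensions, where one explicit feature map (among many valid choices) is φ(x)=(x₁², x₂², √2·x₁x₂, √2·x₁, √2·x₂, 1):

```python
import numpy as np

def phi(x):
    # Explicit feature map for K(x, z) = (x.z + 1)^2 with x in R^2
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

rng = np.random.default_rng(4)
x, z = rng.normal(size=2), rng.normal(size=2)

k_direct = (x @ z + 1.0) ** 2          # kernel evaluated directly
k_feature = phi(x) @ phi(z)            # inner product in feature space
```

Both evaluations agree, illustrating that the kernel computes an inner product in a 6-dimensional feature space without ever forming that space explicitly.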
While Mercer's theorem shows one feature map that can be associated with a kernel, multiple feature maps can in fact be associated with a given reproducing kernel. For instance, the mapφ(x)=Kx{\displaystyle \varphi (x)=K_{x}}satisfies the propertyK(x,z)=⟨φ(x),φ(z)⟩{\displaystyle K(x,z)=\langle \varphi (x),\varphi (z)\rangle }for an arbitrary reproducing kernel. Least squares can be viewed as a likelihood maximization under an assumption of normally distributed residuals. This is because the exponent of theGaussian distributionis quadratic in the data, and so is the least-squares objective function. In this framework, the regularization terms of RLS can be understood to be encodingpriorsonw{\displaystyle w}.[1]For instance, Tikhonov regularization corresponds to a normally distributed prior onw{\displaystyle w}that is centered at 0. To see this, first note that the OLS objective is proportional to thelog-likelihoodfunction when each sampledyi{\displaystyle y^{i}}is normally distributed aroundwT⋅xi{\displaystyle w^{\mathsf {T}}\cdot x^{i}}. Then observe that a normal prior onw{\displaystyle w}centered at 0 has a log-probability of the formlog⁡P(w)=q−α∑j=1dwj2{\displaystyle \log P(w)=q-\alpha \sum _{j=1}^{d}w_{j}^{2}}whereq{\displaystyle q}andα{\displaystyle \alpha }are constants that depend on the variance of the prior and are independent ofw{\displaystyle w}. Thus, maximizing the logarithm of the likelihood times the prior is equivalent to minimizing the sum of the OLS loss function and the ridge regression regularization term. This gives a more intuitive interpretation for whyTikhonov regularizationleads to a unique solution to the least-squares problem: there are infinitely many vectorsw{\displaystyle w}satisfying the constraints obtained from the data, but since we come to the problem with a prior belief thatw{\displaystyle w}is normally distributed around the origin, we will end up choosing a solution with this constraint in mind.
Other regularization methods correspond to different priors. See thelistbelow for more details. One particularly common choice for the penalty functionR{\displaystyle R}is the squaredℓ2{\displaystyle \ell _{2}}norm, i.e.,R(w)=∑j=1dwj2{\displaystyle R(w)=\sum _{j=1}^{d}w_{j}^{2}}and the solution is found asw^=argminw∈Rd1n‖Y−Xw‖22+λ∑j=1d|wj|2{\displaystyle {\hat {w}}={\text{argmin}}_{w\in \mathbb {R} ^{d}}{\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \sum _{j=1}^{d}\left|w_{j}\right|^{2}}This choice is most commonly known asTikhonov regularizationorridge regression. It admits a closed-form solution forw{\displaystyle w}:w^=(1nXTX+λI)−11nXTY=(XTX+nλI)−1XTY{\displaystyle {\hat {w}}=\left({\frac {1}{n}}X^{\mathsf {T}}X+\lambda I\right)^{-1}{\frac {1}{n}}X^{\mathsf {T}}Y=\left(X^{\mathsf {T}}X+n\lambda I\right)^{-1}X^{\mathsf {T}}Y}The name ridge regression alludes to the fact that theλI{\displaystyle \lambda I}term adds positive entries along the diagonal "ridge" of the samplecovariance matrix1nXTX{\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X}. Whenλ=0{\displaystyle \lambda =0}, i.e., in the case ofordinary least squares, the condition thatd>n{\displaystyle d>n}causes the samplecovariance matrix1nXTX{\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X}to not have full rank and so it cannot be inverted to yield a unique solution. This is why there can be an infinitude of solutions to theordinary least squaresproblem whend>n{\displaystyle d>n}. However, whenλ>0{\displaystyle \lambda >0}, i.e., when ridge regression is used, the addition ofλI{\displaystyle \lambda I}to the sample covariance matrix ensures that all of its eigenvalues will be strictly greater than 0. In other words, it becomes invertible, and the solution is then unique. Compared to ordinary least squares, ridge regression is not unbiased: it accepts bias to reduce variance and themean square error.
If we want to findw^{\displaystyle {\hat {w}}}for different values of the regularization coefficientλ{\displaystyle \lambda }(which we denotew^(λ){\displaystyle {\hat {w}}(\lambda )}) we may use theeigenvalue decompositionof the covariance matrix1nXTX=Qdiag(α1,…,αd)QT{\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X=Q{\text{diag}}(\alpha _{1},\ldots ,\alpha _{d})Q^{\mathsf {T}}}where the columns ofQ∈Rd×d{\displaystyle Q\in \mathbb {R} ^{d\times d}}are the eigenvectors of1nXTX{\displaystyle {\frac {1}{n}}X^{\mathsf {T}}X}andα1,…,αd{\displaystyle \alpha _{1},\ldots ,\alpha _{d}}are itsd{\displaystyle d}eigenvalues. The solution is then given byw^(λ)=Qdiag−1(α1+λ,…,αd+λ)Z{\displaystyle {\hat {w}}(\lambda )=Q{\text{diag}}^{-1}(\alpha _{1}+\lambda ,\ldots ,\alpha _{d}+\lambda )Z}whereZ=1nQTXTY=[Z1,…,Zd]T.{\displaystyle Z={\frac {1}{n}}Q^{\mathsf {T}}X^{\mathsf {T}}Y=[Z_{1},\ldots ,Z_{d}]^{\mathsf {T}}.} Using the above results, the algorithm for finding amaximum likelihoodestimate ofλ{\displaystyle \lambda }may be defined as follows:[2] λ←1n∑i=1dαiαi+λ[1n‖Y−Xw^(λ)‖2‖w^(λ)‖2+λ].{\displaystyle \lambda \leftarrow {\frac {1}{n}}\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}\left[{\frac {{\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}}{\|{\hat {w}}(\lambda )\|^{2}}}+\lambda \right].} This algorithm, for automatic (as opposed to heuristic) regularization, is obtained as afixed pointsolution in themaximum likelihoodestimation of the parameters.[2]Although guarantees of convergence are not provided, the examples indicate that a satisfactory solution may be obtained after a couple of iterations.
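The fixed-point update can be run directly from the eigendecomposition. The sketch below iterates the update a fixed number of times on synthetic data; as noted above, convergence is not guaranteed, so this is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 60, 4
X = rng.normal(size=(n, d))
Y = X @ rng.normal(size=d) + 0.3 * rng.normal(size=n)

# Eigendecomposition of the sample covariance matrix (1/n) X^T X
alpha, Q = np.linalg.eigh(X.T @ X / n)
Z = Q.T @ (X.T @ Y) / n

def w_hat(lam):
    # w(lam) = Q diag(1/(alpha_i + lam)) Z
    return Q @ (Z / (alpha + lam))

lam = 1.0
for _ in range(50):
    w = w_hat(lam)
    resid = np.mean((Y - X @ w) ** 2)          # (1/n)||Y - X w(lam)||^2
    lam = np.sum(alpha / (alpha + lam)) / n * (resid / np.sum(w**2) + lam)
```

Because every factor in the update is positive, the iterate stays positive; in practice it settles near a data-dependent value.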
The eigenvalue decomposition simplifies derivation of the algorithm and also simplifies the calculations:‖w^(λ)‖2=∑i=1d|Zi|2(αi+λ)2,{\displaystyle \|{\hat {w}}(\lambda )\|^{2}=\sum _{i=1}^{d}{\frac {|Z_{i}|^{2}}{(\alpha _{i}+\lambda )^{2}}},}1n‖Y−Xw^(λ)‖2=∑i=1d|Zi|2αi+λ.{\displaystyle {\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}=\sum _{i=1}^{d}{\frac {|Z_{i}|^{2}}{\alpha _{i}+\lambda }}.} An alternative fixed-point algorithm, known as the Gull–McKay algorithm,[2]λ←1n‖Y−Xw^(λ)‖2[n∑i=1dαiαi+λ−1]‖w^(λ)‖2{\displaystyle \lambda \leftarrow {\frac {{\frac {1}{n}}\|Y-X{\hat {w}}(\lambda )\|^{2}}{\left[{\frac {n}{\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}}}-1\right]\|{\hat {w}}(\lambda )\|^{2}}}}usually has faster convergence, but may be used only ifn>∑i=1dαiαi+λ{\displaystyle n>\sum _{i=1}^{d}{\frac {\alpha _{i}}{\alpha _{i}+\lambda }}}. Thus, while it can be used without problems forn>d{\displaystyle n>d}, caution is recommended forn<d{\displaystyle n<d}. The least absolute shrinkage and selection operator (LASSO) method is another popular choice. Inlasso regression, the lasso penalty functionR{\displaystyle R}is theℓ1{\displaystyle \ell _{1}}norm, i.e.R(w)=∑j=1d|wj|{\displaystyle R(w)=\sum _{j=1}^{d}\left|w_{j}\right|}1n‖Y−Xw‖22+λ∑j=1d|wj|→minw∈Rd{\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \sum _{j=1}^{d}|w_{j}|\rightarrow \min _{w\in \mathbb {R} ^{d}}} Note that the lasso penalty function is convex but not strictly convex. UnlikeTikhonov regularization, this scheme does not have a convenient closed-form solution: instead, the solution is typically found usingquadratic programmingor more generalconvex optimizationmethods, as well as by specific algorithms such as theleast-angle regressionalgorithm. An important difference between lasso regression and Tikhonov regularization is that lasso regression forces more entries ofw{\displaystyle w}to actually equal 0 than would otherwise.
In contrast, while Tikhonov regularization forces entries ofw{\displaystyle w}to be small, it does not force more of them to be 0 than would be otherwise. Thus, LASSO regularization is more appropriate than Tikhonov regularization in cases in which we expect the number of non-zero entries ofw{\displaystyle w}to be small, and Tikhonov regularization is more appropriate when we expect that entries ofw{\displaystyle w}will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand. Besides feature selection described above, LASSO has some limitations. Ridge regression provides better accuracy in the casen>d{\displaystyle n>d}for highly correlated variables.[3]In another case,n<d{\displaystyle n<d}, LASSO selects at mostn{\displaystyle n}variables. Moreover, LASSO tends to select some arbitrary variables from a group of highly correlated variables, so there is no grouping effect. The most extreme way to enforce sparsity is to say that the actual magnitude of the coefficients ofw{\displaystyle w}does not matter; rather, the only thing that determines the complexity ofw{\displaystyle w}is the number of non-zero entries. This corresponds to settingR(w){\displaystyle R(w)}to be theℓ0{\displaystyle \ell _{0}}normofw{\displaystyle w}:1n‖Y−Xw‖22+λ‖w‖0→minw∈Rd{\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda \left\|w\right\|_{0}\rightarrow \min _{w\in \mathbb {R} ^{d}}}This regularization function, while attractive for the sparsity that it guarantees, is very difficult to solve because doing so requires optimization of a function that is not even weaklyconvex. Lasso regression is the minimal possible relaxation ofℓ0{\displaystyle \ell _{0}}penalization that yields a weakly convex optimization problem.
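Although no specific solver is singled out above, one standard convex-optimization scheme for the lasso problem is proximal gradient descent (ISTA), in which each gradient step is followed by soft-thresholding. A sketch on made-up data with a sparse ground truth:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward 0 by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    # Minimize (1/n)||y - Xw||^2 + lam * ||w||_1 by proximal gradient descent
    n, d = X.shape
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = 2.0 / n * X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]                   # a sparse ground truth
y = X @ w_true + 0.1 * rng.normal(size=100)

w = lasso_ista(X, y, lam=0.2)
```

The recovered vector is sparse: the coordinates that are zero in the ground truth come out (essentially) exactly zero, while the active coordinates are kept but shrunk, which is the selection-plus-shrinkage behavior described above.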
For any non-negativeλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}the objective has the following form:1n‖Y−Xw‖22+λ1∑j=1d|wj|+λ2∑j=1d|wj|2→minw∈Rd{\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}+\lambda _{1}\sum _{j=1}^{d}\left|w_{j}\right|+\lambda _{2}\sum _{j=1}^{d}\left|w_{j}\right|^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}} Letα=λ2λ1+λ2{\displaystyle \alpha ={\frac {\lambda _{2}}{\lambda _{1}+\lambda _{2}}}}, then the solution of the minimization problem is described as:1n‖Y−Xw‖22→minw∈Rds.t.(1−α)‖w‖1+α‖w‖2≤t{\displaystyle {\frac {1}{n}}\left\|Y-Xw\right\|_{2}^{2}\rightarrow \min _{w\in \mathbb {R} ^{d}}{\text{ s.t. }}(1-\alpha )\left\|w\right\|_{1}+\alpha \left\|w\right\|_{2}\leq t}for somet{\displaystyle t}. Consider(1−α)‖w‖1+α‖w‖2≤t{\displaystyle (1-\alpha )\left\|w\right\|_{1}+\alpha \left\|w\right\|_{2}\leq t}as an Elastic Net penalty function. Whenα=1{\displaystyle \alpha =1}, elastic net becomes ridge regression, whereas whenα=0{\displaystyle \alpha =0}it becomes Lasso. For allα∈[0,1){\displaystyle \alpha \in [0,1)}the Elastic Net penalty function does not have a first derivative at 0, and it is strictly convex for allα>0{\displaystyle \alpha >0}, combining the properties of bothlasso regressionandridge regression. One of the main properties of the Elastic Net is that it can select groups of correlated variables. The difference between weight vectors of samplesxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}is given by:|wi∗(λ1,λ2)−wj∗(λ1,λ2)|≤∑i=1n|yi|λ22(1−ρij),{\displaystyle \left|w_{i}^{*}(\lambda _{1},\lambda _{2})-w_{j}^{*}(\lambda _{1},\lambda _{2})\right|\leq {\frac {\sum _{i=1}^{n}|y_{i}|}{\lambda _{2}}}{\sqrt {2(1-\rho _{ij})}},}whereρij=xiTxj{\displaystyle \rho _{ij}=x_{i}^{\mathsf {T}}x_{j}}.[4] Ifxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}are highly correlated (ρij→1{\displaystyle \rho _{ij}\to 1}), the weight vectors are very close.
In the case of negatively correlated samples (ρij→−1{\displaystyle \rho _{ij}\to -1}) the samples−xj{\displaystyle -x_{j}}can be taken. To summarize, for highly correlated variables the weight vectors tend to be equal, up to a sign in the case of negatively correlated variables. The following is a list of possible choices of the regularization functionR(⋅){\displaystyle R(\cdot )}, along with the name for each one, the corresponding prior if there is a simple one, and ways for computing the solution to the resulting optimization problem.
https://en.wikipedia.org/wiki/Regularized_least_squares
Inmathematical optimization, themethod of Lagrange multipliersis a strategy for finding the localmaxima and minimaof afunctionsubject toequation constraints(i.e., subject to the condition that one or moreequationshave to be satisfied exactly by the chosen values of thevariables).[1]It is named after the mathematicianJoseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that thederivative testof an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as theLagrangian functionor Lagrangian.[2]In the general case, the Lagrangian is defined as L(x,λ)≡f(x)+⟨λ,g(x)⟩{\displaystyle {\mathcal {L}}(x,\lambda )\equiv f(x)+\langle \lambda ,g(x)\rangle } for functionsf,g{\displaystyle f,g}; the notation⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }denotes aninner product. The valueλ{\displaystyle \lambda }is called theLagrange multiplier. In simple cases, where the inner product is defined as thedot product, the Lagrangian is L(x,λ)≡f(x)+λ⋅g(x){\displaystyle {\mathcal {L}}(x,\lambda )\equiv f(x)+\lambda \cdot g(x)} The method can be summarized as follows: in order to find the maximum or minimum of a functionf{\displaystyle f}subject to the equality constraintg(x)=0{\displaystyle g(x)=0}, find thestationary pointsofL{\displaystyle {\mathcal {L}}}considered as a function ofx{\displaystyle x}and the Lagrange multiplierλ{\displaystyle \lambda ~}. 
This means that allpartial derivativesshould be zero, including the partial derivative with respect toλ{\displaystyle \lambda ~}.[3] The solution corresponding to the originalconstrained optimizationis always asaddle pointof the Lagrangian function,[4][5]which can be identified among the stationary points from thedefinitenessof thebordered Hessian matrix.[6] The great advantage of this method is that it allows the optimization to be solved without explicitparameterizationin terms of the constraints. As a result, the method of Lagrange multipliers is widely used to solve challenging constrained optimization problems. Further, the method of Lagrange multipliers is generalized by theKarush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the formh(x)≤c{\displaystyle h(\mathbf {x} )\leq c}for a given constantc{\displaystyle c}. The following is known as the Lagrange multiplier theorem.[7] Letf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }be theobjective functionand letg:Rn→Rc{\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{c}}be the constraints function, both belonging toC1{\displaystyle C^{1}}(that is, having continuous first derivatives).
Letx⋆{\displaystyle x_{\star }}be an optimal solution to the following optimization problem such that, for the matrix of partial derivatives[D⁡g(x⋆)]j,k=∂gj∂xk{\displaystyle {\Bigl [}\operatorname {D} g(x_{\star }){\Bigr ]}_{j,k}={\frac {\ \partial g_{j}\ }{\partial x_{k}}}},rank⁡(D⁡g(x⋆))=c≤n{\displaystyle \operatorname {rank} (\operatorname {D} g(x_{\star }))=c\leq n}: maximizef(x)subject to:g(x)=0{\displaystyle {\begin{aligned}&{\text{maximize }}f(x)\\&{\text{subject to: }}g(x)=0\end{aligned}}} Then there exists a unique Lagrange multiplierλ⋆∈Rc{\displaystyle \lambda _{\star }\in \mathbb {R} ^{c}}such thatD⁡f(x⋆)=λ⋆TD⁡g(x⋆).{\displaystyle \operatorname {D} f(x_{\star })=\lambda _{\star }^{\mathsf {T}}\operatorname {D} g(x_{\star })~.}(By convention,λ⋆{\displaystyle \lambda _{\star }}is treated here as a column vector so that the dimensions match; it could equally be written as a row vector, without the transpose.) The Lagrange multiplier theorem states that at any local maximum (or minimum) of the function evaluated under the equality constraints, if constraint qualification applies (explained below), then thegradientof the function (at that point) can be expressed as alinear combinationof the gradients of the constraints (at that point), with the Lagrange multipliers acting ascoefficients.[8]This is equivalent to saying that any direction perpendicular to all gradients of the constraints is also perpendicular to the gradient of the function. Equivalently, thedirectional derivativeof the function is0in every feasible direction.
For the case of only one constraint and only two choice variables (as exemplified in Figure 1), consider theoptimization problemmaximizex,yf(x,y)subject tog(x,y)=0.{\displaystyle {\begin{aligned}{\underset {x,y}{\text{maximize}}}\quad &f(x,y)\\{\text{subject to}}\quad &g(x,y)=0.\end{aligned}}}(Sometimes an additive constant is shown separately rather than being included ing{\displaystyle g}, in which case the constraint is writteng(x,y)=c,{\displaystyle g(x,y)=c,}as in Figure 1.) We assume that bothf{\displaystyle f}andg{\displaystyle g}have continuous firstpartial derivatives. We introduce a new variable (λ{\displaystyle \lambda }) called aLagrange multiplier(orLagrange undetermined multiplier) and study theLagrange function(orLagrangianorLagrangian expression) defined byL(x,y,λ)=f(x,y)+λ⋅g(x,y),{\displaystyle {\mathcal {L}}(x,y,\lambda )=f(x,y)+\lambda \cdot g(x,y),}where theλ{\displaystyle \lambda }term may be either added or subtracted. Iff(x0,y0){\displaystyle f(x_{0},y_{0})}is a maximum off(x,y){\displaystyle f(x,y)}for the original constrained problem and∇g(x0,y0)≠0,{\displaystyle \nabla g(x_{0},y_{0})\neq 0,}then there existsλ0{\displaystyle \lambda _{0}}such that (x0,y0,λ0{\displaystyle x_{0},y_{0},\lambda _{0}}) is astationary pointfor the Lagrange function (stationary points are those points where the first partial derivatives ofL{\displaystyle {\mathcal {L}}}are zero). The assumption∇g≠0{\displaystyle \nabla g\neq 0}is called constraint qualification. However, not all stationary points yield a solution of the original problem, as the method of Lagrange multipliers yields only anecessary conditionfor optimality in constrained problems.[9][10][11][12][13]Sufficient conditions for a minimum or maximumalso exist, but if a particularcandidate solutionsatisfies the sufficient conditions, it is only guaranteed that that solution is the best onelocally– that is, it is better than any permissible nearby points. 
Theglobaloptimum can be found by comparing the values of the original objective function at the points satisfying the necessary and locally sufficient conditions. The method of Lagrange multipliers relies on the intuition that at a maximum,f(x,y)cannot be increasing in the direction of any such neighboring point that also hasg= 0. If it were, we could walk alongg= 0to get higher, meaning that the starting point wasn't actually the maximum. Viewed in this way, it is an exact analogue to testing if the derivative of an unconstrained function is0, that is, we are verifying that the directional derivative is 0 in any relevant (viable) direction. We can visualizecontoursoffgiven byf(x,y) =dfor various values ofd, and the contour ofggiven byg(x,y) =c. Suppose we walk along the contour line withg=c.We are interested in finding points wherefalmost does not change as we walk, since these points might be maxima. There are two ways this could happen: To check the first possibility (we touch a contour line off), notice that since thegradientof a function is perpendicular to the contour lines, the tangents to the contour lines offandgare parallel if and only if the gradients offandgare parallel. Thus we want points(x,y)whereg(x,y) =cand∇x,yf=λ∇x,yg,{\displaystyle \nabla _{x,y}f=\lambda \,\nabla _{x,y}g,}for someλ{\displaystyle \lambda }where∇x,yf=(∂f∂x,∂f∂y),∇x,yg=(∂g∂x,∂g∂y){\displaystyle \nabla _{x,y}f=\left({\frac {\partial f}{\partial x}},{\frac {\partial f}{\partial y}}\right),\qquad \nabla _{x,y}g=\left({\frac {\partial g}{\partial x}},{\frac {\partial g}{\partial y}}\right)}are the respective gradients. The constantλ{\displaystyle \lambda }is required because although the two gradient vectors are parallel, the magnitudes of the gradient vectors are generally not equal. This constant is called the Lagrange multiplier. (In some conventionsλ{\displaystyle \lambda }is preceded by a minus sign). 
Notice that this method also solves the second possibility, thatfis level: iffis level, then its gradient is zero, and settingλ=0{\displaystyle \lambda =0}is a solution regardless of∇x,yg{\displaystyle \nabla _{x,y}g}. To incorporate these conditions into one equation, we introduce an auxiliary functionL(x,y,λ)≡f(x,y)+λ⋅g(x,y),{\displaystyle {\mathcal {L}}(x,y,\lambda )\equiv f(x,y)+\lambda \cdot g(x,y)\,,}and solve∇x,y,λL(x,y,λ)=0.{\displaystyle \nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )=0~.}Note that this amounts to solving three equations in three unknowns. This is the method of Lagrange multipliers. Note that∇λL(x,y,λ)=0{\displaystyle \ \nabla _{\lambda }{\mathcal {L}}(x,y,\lambda )=0\ }impliesg(x,y)=0,{\displaystyle \ g(x,y)=0\ ,}as the partial derivative ofL{\displaystyle {\mathcal {L}}}with respect toλ{\displaystyle \lambda }isg(x,y).{\displaystyle \ g(x,y)~.} To summarize∇x,y,λL(x,y,λ)=0⟺{∇x,yf(x,y)=−λ∇x,yg(x,y)g(x,y)=0{\displaystyle \nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )=0\iff {\begin{cases}\nabla _{x,y}f(x,y)=-\lambda \,\nabla _{x,y}g(x,y)\\g(x,y)=0\end{cases}}}The method generalizes readily to functions onn{\displaystyle n}variables∇x1,…,xn,λL(x1,…,xn,λ)=0{\displaystyle \nabla _{x_{1},\dots ,x_{n},\lambda }{\mathcal {L}}(x_{1},\dots ,x_{n},\lambda )=0}which amounts to solvingn+ 1equations inn+ 1unknowns. The constrained extrema offarecritical pointsof the LagrangianL{\displaystyle {\mathcal {L}}}, but they are not necessarilylocal extremaofL{\displaystyle {\mathcal {L}}}(see§ Example 2below). One mayreformulate the Lagrangianas aHamiltonian, in which case the solutions are local minima for the Hamiltonian. This is done inoptimal controltheory, in the form ofPontryagin's maximum principle. The fact that solutions of the method of Lagrange multipliers are not necessarily extrema of the Lagrangian, also poses difficulties for numerical optimization. 
This can be addressed by minimizing themagnitudeof the gradient of the Lagrangian, as these minima are the same as the zeros of the magnitude, as illustrated inExample 5: Numerical optimization. The method of Lagrange multipliers can be extended to solve problems with multiple constraints using a similar argument. Consider aparaboloidsubject to two line constraints that intersect at a single point. As the only feasible solution, this point is obviously a constrained extremum. However, thelevel setoff{\displaystyle f}is clearly not parallel to either constraint at the intersection point (see Figure 3); instead, it is a linear combination of the two constraints' gradients. In the case of multiple constraints, that will be what we seek in general: The method of Lagrange seeks points not at which the gradient off{\displaystyle f}is a multiple of any single constraint's gradient necessarily, but in which it is a linear combination of all the constraints' gradients. Concretely, suppose we haveM{\displaystyle M}constraints and are walking along the set of points satisfyinggi(x)=0,i=1,…,M.{\displaystyle g_{i}(\mathbf {x} )=0,i=1,\dots ,M\,.}Every pointx{\displaystyle \mathbf {x} }on the contour of a given constraint functiongi{\displaystyle g_{i}}has a space of allowable directions: the space of vectors perpendicular to∇gi(x).{\displaystyle \nabla g_{i}(\mathbf {x} )\,.}The set of directions that are allowed by all constraints is thus the space of directions perpendicular to all of the constraints' gradients. Denote this space of allowable moves byA{\displaystyle \ A\ }and denote the span of the constraints' gradients byS.{\displaystyle S\,.}ThenA=S⊥,{\displaystyle A=S^{\perp }\,,}the space of vectors perpendicular to every element ofS.{\displaystyle S\,.} We are still interested in finding points wheref{\displaystyle f}does not change as we walk, since these points might be (constrained) extrema. 
We therefore seekx{\displaystyle \mathbf {x} }such that any allowable direction of movement away fromx{\displaystyle \mathbf {x} }is perpendicular to∇f(x){\displaystyle \nabla f(\mathbf {x} )}(otherwise we could increasef{\displaystyle f}by moving along that allowable direction). In other words,∇f(x)∈A⊥=S.{\displaystyle \nabla f(\mathbf {x} )\in A^{\perp }=S\,.}Thus there are scalarsλ1,λ2,…,λM{\displaystyle \lambda _{1},\lambda _{2},\ \dots ,\lambda _{M}}such that∇f(x)=∑k=1Mλk∇gk(x)⟺∇f(x)−∑k=1Mλk∇gk(x)=0.{\displaystyle \nabla f(\mathbf {x} )=\sum _{k=1}^{M}\lambda _{k}\,\nabla g_{k}(\mathbf {x} )\quad \iff \quad \nabla f(\mathbf {x} )-\sum _{k=1}^{M}{\lambda _{k}\nabla g_{k}(\mathbf {x} )}=0~.} These scalars are the Lagrange multipliers. We now haveM{\displaystyle M}of them, one for every constraint. As before, we introduce an auxiliary functionL(x1,…,xn,λ1,…,λM)=f(x1,…,xn)−∑k=1Mλkgk(x1,…,xn){\displaystyle {\mathcal {L}}\left(x_{1},\ldots ,x_{n},\lambda _{1},\ldots ,\lambda _{M}\right)=f\left(x_{1},\ldots ,x_{n}\right)-\sum \limits _{k=1}^{M}{\lambda _{k}g_{k}\left(x_{1},\ldots ,x_{n}\right)}\ }and solve∇x1,…,xn,λ1,…,λML(x1,…,xn,λ1,…,λM)=0⟺{∇f(x)−∑k=1Mλk∇gk(x)=0g1(x)=⋯=gM(x)=0{\displaystyle \nabla _{x_{1},\ldots ,x_{n},\lambda _{1},\ldots ,\lambda _{M}}{\mathcal {L}}(x_{1},\ldots ,x_{n},\lambda _{1},\ldots ,\lambda _{M})=0\iff {\begin{cases}\nabla f(\mathbf {x} )-\sum _{k=1}^{M}{\lambda _{k}\,\nabla g_{k}(\mathbf {x} )}=0\\g_{1}(\mathbf {x} )=\cdots =g_{M}(\mathbf {x} )=0\end{cases}}}which amounts to solvingn+M{\displaystyle n+M}equations inn+M{\displaystyle \ n+M\ }unknowns. The constraint qualification assumption when there are multiple constraints is that the constraint gradients at the relevant point are linearly independent. 
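In simple cases the resulting system of n + M equations can be solved directly. The following sketch is purely illustrative (the objective and constraints are hypothetical, not taken from the text): it minimizes f = x² + y² + z² subject to the two constraints g1 = x + y + z − 1 = 0 and g2 = x − y = 0. Because f is quadratic and the constraints are linear, the stationarity conditions of the Lagrangian form a linear 5×5 system in (x, y, z, λ1, λ2), which exact Gaussian elimination solves.

```python
from fractions import Fraction as F

# Hypothetical example (not from the article): minimize f = x^2 + y^2 + z^2
# subject to g1 = x + y + z - 1 = 0 and g2 = x - y = 0.  Stationarity of the
# Lagrangian L = f - l1*g1 - l2*g2 gives n + M = 5 linear equations:
#   2x - l1 - l2 = 0,  2y - l1 + l2 = 0,  2z - l1 = 0,
#   x + y + z = 1,     x - y = 0.
# Each row below is one equation, last entry the right-hand side.
A = [[F(2), F(0),  F(0), F(-1), F(-1), F(0)],
     [F(0), F(2),  F(0), F(-1), F(1),  F(0)],
     [F(0), F(0),  F(2), F(-1), F(0),  F(0)],
     [F(1), F(1),  F(1), F(0),  F(0),  F(1)],
     [F(1), F(-1), F(0), F(0),  F(0),  F(0)]]

def gauss(rows):
    """Exact Gauss-Jordan elimination on an augmented matrix."""
    n = len(rows)
    for i in range(n):
        p = next(r for r in range(i, n) if rows[r][i] != 0)  # pivot search
        rows[i], rows[p] = rows[p], rows[i]
        rows[i] = [v / rows[i][i] for v in rows[i]]          # normalize pivot row
        for r in range(n):
            if r != i and rows[r][i] != 0:                   # eliminate column i
                rows[r] = [a - rows[r][i] * b for a, b in zip(rows[r], rows[i])]
    return [row[-1] for row in rows]

x, y, z, l1, l2 = gauss(A)
print(x, y, z, l1, l2)   # 1/3 1/3 1/3 2/3 0
```

The solution x = y = z = 1/3 with λ1 = 2/3, λ2 = 0 shows the gradient of f expressed as a linear combination of the two constraint gradients, exactly as described above.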
The problem of finding the local maxima and minima subject to constraints can be generalized to finding local maxima and minima on adifferentiable manifoldM.{\displaystyle \ M~.}[14]In what follows, it is not necessary thatM{\displaystyle M}be a Euclidean space, or even aRiemannian manifold. All appearances of the gradient∇{\displaystyle \ \nabla \ }(which depends on a choice of Riemannian metric) can be replaced with theexterior derivatived⁡.{\displaystyle \ \operatorname {d} ~.} LetM{\displaystyle \ M\ }be asmooth manifoldof dimensionm.{\displaystyle \ m~.}Suppose that we wish to find the stationary pointsx{\displaystyle \ x\ }of a smooth functionf:M→R{\displaystyle \ f:M\to \mathbb {R} \ }when restricted to the submanifoldN{\displaystyle \ N\ }defined byg(x)=0,{\displaystyle \ g(x)=0\ ,}whereg:M→R{\displaystyle \ g:M\to \mathbb {R} \ }is a smooth function for which0is aregular value. Letd⁡f{\displaystyle \ \operatorname {d} f\ }andd⁡g{\displaystyle \ \operatorname {d} g\ }be theexterior derivativesoff{\displaystyle \ f\ }andg{\displaystyle \ g\ }. Stationarity for the restrictionf|N{\displaystyle \ f|_{N}\ }atx∈N{\displaystyle \ x\in N\ }meansd⁡(f|N)x=0.{\displaystyle \ \operatorname {d} (f|_{N})_{x}=0~.}Equivalently, the kernelker⁡(d⁡fx){\displaystyle \ \ker(\operatorname {d} f_{x})\ }containsTxN=ker⁡(d⁡gx).{\displaystyle \ T_{x}N=\ker(\operatorname {d} g_{x})~.}In other words,d⁡fx{\displaystyle \ \operatorname {d} f_{x}\ }andd⁡gx{\displaystyle \ \operatorname {d} g_{x}\ }are proportional 1-forms. For this it is necessary and sufficient that the following system of12m(m−1){\displaystyle \ {\tfrac {1}{2}}m(m-1)\ }equations holds:d⁡fx∧d⁡gx=0∈Λ2(Tx∗M){\displaystyle \operatorname {d} f_{x}\wedge \operatorname {d} g_{x}=0\in \Lambda ^{2}(T_{x}^{\ast }M)}where∧{\displaystyle \ \wedge \ }denotes theexterior product. 
The stationary pointsx{\displaystyle \ x\ }are the solutions of the above system of equations plus the constraintg(x)=0.{\displaystyle \ g(x)=0~.}Note that the12m(m−1){\displaystyle \ {\tfrac {1}{2}}m(m-1)\ }equations are not independent, since the left-hand side of the equation belongs to the subvariety ofΛ2(Tx∗M){\displaystyle \ \Lambda ^{2}(T_{x}^{\ast }M)\ }consisting ofdecomposable elements. In this formulation, it is not necessary to explicitly find the Lagrange multiplier, a numberλ{\displaystyle \ \lambda \ }such thatd⁡fx=λ⋅d⁡gx.{\displaystyle \ \operatorname {d} f_{x}=\lambda \cdot \operatorname {d} g_{x}~.} LetM{\displaystyle \ M\ }andf{\displaystyle \ f\ }be as in the above section regarding the case of a single constraint. Rather than the functiong{\displaystyle g}described there, now consider a smooth functionG:M→Rp(p>1),{\displaystyle \ G:M\to \mathbb {R} ^{p}(p>1)\ ,}with component functionsgi:M→R,{\displaystyle \ g_{i}:M\to \mathbb {R} \ ,}for which0∈Rp{\displaystyle 0\in \mathbb {R} ^{p}}is aregular value. LetN{\displaystyle N}be the submanifold ofM{\displaystyle \ M\ }defined byG(x)=0.{\displaystyle \ G(x)=0~.} x{\displaystyle \ x\ }is a stationary point off|N{\displaystyle f|_{N}}if and only ifker⁡(d⁡fx){\displaystyle \ \ker(\operatorname {d} f_{x})\ }containsker⁡(d⁡Gx).{\displaystyle \ \ker(\operatorname {d} G_{x})~.}For convenience letLx=d⁡fx{\displaystyle \ L_{x}=\operatorname {d} f_{x}\ }andKx=d⁡Gx,{\displaystyle \ K_{x}=\operatorname {d} G_{x}\ ,}whered⁡G{\displaystyle \ \operatorname {d} G}denotes the tangent map or JacobianTM→TRp{\displaystyle \ TM\to T\mathbb {R} ^{p}~}(TxRp{\displaystyle \ T_{x}\mathbb {R} ^{p}}can be canonically identified withRp{\displaystyle \ \mathbb {R} ^{p}}). 
The subspaceker⁡(Kx){\displaystyle \ker(K_{x})}has dimension smaller than that ofker⁡(Lx){\displaystyle \ker(L_{x})}, namelydim⁡(ker⁡(Lx))=m−1{\displaystyle \ \dim(\ker(L_{x}))=m-1\ }anddim⁡(ker⁡(Kx))=m−p.{\displaystyle \ \dim(\ker(K_{x}))=m-p~.}ker⁡(Kx){\displaystyle \ker(K_{x})}belongs toker⁡(Lx){\displaystyle \ \ker(L_{x})\ }if and only ifLx∈Tx∗M{\displaystyle L_{x}\in T_{x}^{\ast }M}belongs to the image ofKx∗:Rp∗→Tx∗M.{\displaystyle \ K_{x}^{\ast }:\mathbb {R} ^{p\ast }\to T_{x}^{\ast }M~.}Computationally speaking, the condition is thatLx{\displaystyle L_{x}}belongs to the row space of the matrix ofKx,{\displaystyle \ K_{x}\ ,}or equivalently the column space of the matrix ofKx∗{\displaystyle K_{x}^{\ast }}(the transpose). Ifωx∈Λp(Tx∗M){\displaystyle \ \omega _{x}\in \Lambda ^{p}(T_{x}^{\ast }M)\ }denotes the exterior product of the columns of the matrix ofKx∗,{\displaystyle \ K_{x}^{\ast }\ ,}the stationary condition forf|N{\displaystyle \ f|_{N}\ }atx{\displaystyle \ x\ }becomesLx∧ωx=0∈Λp+1(Tx∗M){\displaystyle L_{x}\wedge \omega _{x}=0\in \Lambda ^{p+1}\left(T_{x}^{\ast }M\right)}Once again, in this formulation it is not necessary to explicitly find the Lagrange multipliers, the numbersλ1,…,λp{\displaystyle \ \lambda _{1},\ldots ,\lambda _{p}\ }such thatd⁡fx=∑i=1pλid⁡(gi)x.{\displaystyle \ \operatorname {d} f_{x}=\sum _{i=1}^{p}\lambda _{i}\operatorname {d} (g_{i})_{x}~.} In this section, we modify the constraint equations from the formgi(x)=0{\displaystyle g_{i}({\bf {x}})=0}to the formgi(x)=ci,{\displaystyle \ g_{i}({\bf {x}})=c_{i}\ ,}where theci{\displaystyle \ c_{i}\ }aremreal constants that are considered to be additional arguments of the Lagrangian expressionL{\displaystyle {\mathcal {L}}}. Often the Lagrange multipliers have an interpretation as some quantity of interest.
For example, by parametrising the constraint's contour line, that is, if the Lagrangian expression isL(x1,x2,…;λ1,λ2,…;c1,c2,…)=f(x1,x2,…)+λ1(c1−g1(x1,x2,…))+λ2(c2−g2(x1,x2,…))+⋯{\displaystyle {\begin{aligned}&{\mathcal {L}}(x_{1},x_{2},\ldots ;\lambda _{1},\lambda _{2},\ldots ;c_{1},c_{2},\ldots )\\[4pt]={}&f(x_{1},x_{2},\ldots )+\lambda _{1}(c_{1}-g_{1}(x_{1},x_{2},\ldots ))+\lambda _{2}(c_{2}-g_{2}(x_{1},x_{2},\dots ))+\cdots \end{aligned}}}then∂L∂ck=λk.{\displaystyle \ {\frac {\partial {\mathcal {L}}}{\partial c_{k}}}=\lambda _{k}~.} So,λkis the rate of change of the quantity being optimized as a function of the constraint parameter. As examples, inLagrangian mechanicsthe equations of motion are derived by finding stationary points of theaction, the time integral of the difference between kinetic and potential energy. Thus, the force on a particle due to a scalar potential,F= −∇V, can be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In control theory this is formulated instead ascostate equations. Moreover, by theenvelope theoremthe optimal value of a Lagrange multiplier has an interpretation as the marginal effect of the corresponding constraint constant upon the optimal attainable value of the original objective function: If we denote values at the optimum with a star (⋆{\displaystyle \star }), then it can be shown thatd⁡f(x1⋆(c1,c2,…),x2⋆(c1,c2,…),…)d⁡ck=λ⋆k.{\displaystyle {\frac {\ \operatorname {d} f\left(\ x_{1\star }(c_{1},c_{2},\dots ),\ x_{2\star }(c_{1},c_{2},\dots ),\ \dots \ \right)\ }{\operatorname {d} c_{k}}}=\lambda _{\star k}~.} For example, in economics the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the change in the optimal value of the objective function (profit) due to the relaxation of a given constraint (e.g. 
through a change in income); in such a contextλ⋆k{\displaystyle \ \lambda _{\star k}\ }is themarginal costof the constraint, and is referred to as theshadow price.[15] Sufficient conditions for a constrained local maximum or minimum can be stated in terms of a sequence of principal minors (determinants of upper-left-justified sub-matrices) of the borderedHessian matrixof second derivatives of the Lagrangian expression.[6][16] Suppose we wish to maximizef(x,y)=x+y{\displaystyle \ f(x,y)=x+y\ }subject to the constraintx2+y2=1.{\displaystyle \ x^{2}+y^{2}=1~.}Thefeasible setis the unit circle, and thelevel setsoffare diagonal lines (with slope −1), so we can see graphically that the maximum occurs at(12,12),{\displaystyle \ \left({\tfrac {1}{\sqrt {2}}},{\tfrac {1}{\sqrt {2}}}\right)\ ,}and that the minimum occurs at(−12,−12).{\displaystyle \ \left(-{\tfrac {1}{\sqrt {2}}},-{\tfrac {1}{\sqrt {2}}}\right)~.} For the method of Lagrange multipliers, the constraint isg(x,y)=x2+y2−1=0,{\displaystyle g(x,y)=x^{2}+y^{2}-1=0\ ,}hence the Lagrangian function,L(x,y,λ)=f(x,y)+λ⋅g(x,y)=x+y+λ(x2+y2−1),{\displaystyle {\begin{aligned}{\mathcal {L}}(x,y,\lambda )&=f(x,y)+\lambda \cdot g(x,y)\\[4pt]&=x+y+\lambda (x^{2}+y^{2}-1)\ ,\end{aligned}}}is a function that is equivalent tof(x,y){\displaystyle \ f(x,y)\ }wheng(x,y){\displaystyle \ g(x,y)\ }is set to0. 
Now we can calculate the gradient:∇x,y,λL(x,y,λ)=(∂L∂x,∂L∂y,∂L∂λ)=(1+2λx,1+2λy,x2+y2−1),{\displaystyle {\begin{aligned}\nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )&=\left({\frac {\partial {\mathcal {L}}}{\partial x}},{\frac {\partial {\mathcal {L}}}{\partial y}},{\frac {\partial {\mathcal {L}}}{\partial \lambda }}\right)\\[4pt]&=\left(1+2\lambda x,1+2\lambda y,x^{2}+y^{2}-1\right)\ \color {gray}{,}\end{aligned}}}and therefore:∇x,y,λL(x,y,λ)=0⇔{1+2λx=01+2λy=0x2+y2−1=0{\displaystyle \nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )=0\quad \Leftrightarrow \quad {\begin{cases}1+2\lambda x=0\\1+2\lambda y=0\\x^{2}+y^{2}-1=0\end{cases}}} Notice that the last equation is the original constraint. The first two equations yieldx=y=−12λ,λ≠0.{\displaystyle x=y=-{\frac {1}{2\lambda }},\qquad \lambda \neq 0~.}By substituting into the last equation we have:14λ2+14λ2−1=0,{\displaystyle {\frac {1}{4\lambda ^{2}}}+{\frac {1}{4\lambda ^{2}}}-1=0\ ,}soλ=±12,{\displaystyle \lambda =\pm {\frac {1}{\sqrt {2\ }}}\ ,}which implies that the stationary points ofL{\displaystyle {\mathcal {L}}}are(22,22,−12),(−22,−22,12).{\displaystyle \left({\tfrac {\sqrt {2\ }}{2}},{\tfrac {\sqrt {2\ }}{2}},-{\tfrac {1}{\sqrt {2\ }}}\right),\qquad \left(-{\tfrac {\sqrt {2\ }}{2}},-{\tfrac {\sqrt {2\ }}{2}},{\tfrac {1}{\sqrt {2\ }}}\right)~.} Evaluating the objective functionfat these points yieldsf(22,22)=2,f(−22,−22)=−2.{\displaystyle f\left({\tfrac {\sqrt {2\ }}{2}},{\tfrac {\sqrt {2\ }}{2}}\right)={\sqrt {2\ }}\ ,\qquad f\left(-{\tfrac {\sqrt {2\ }}{2}},-{\tfrac {\sqrt {2\ }}{2}}\right)=-{\sqrt {2\ }}~.} Thus the constrained maximum is2{\displaystyle \ {\sqrt {2\ }}\ }and the constrained minimum is−2{\displaystyle -{\sqrt {2}}}. 
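The stationary points just derived can be checked numerically. The short Python sketch below (illustrative, not part of the original article) verifies that the gradient of the Lagrangian vanishes at both points and recovers the objective values ±√2:

```python
import math

# Numerical check of Example 1: stationary points of
# L(x, y, lambda) = x + y + lambda*(x**2 + y**2 - 1).
def grad_L(x, y, lam):
    """Gradient of the Lagrangian with respect to (x, y, lambda)."""
    return (1 + 2*lam*x, 1 + 2*lam*y, x**2 + y**2 - 1)

r = math.sqrt(2) / 2
points = [(r, r, -1/math.sqrt(2)),      # constrained maximum, f = sqrt(2)
          (-r, -r, 1/math.sqrt(2))]     # constrained minimum, f = -sqrt(2)
for (x, y, lam) in points:
    assert all(abs(g) < 1e-12 for g in grad_L(x, y, lam))
    print(x + y)   # objective value at the stationary point
```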
Now we modify the objective function of Example1so that we minimizef(x,y)=(x+y)2{\displaystyle \ f(x,y)=(x+y)^{2}\ }instead off(x,y)=x+y,{\displaystyle \ f(x,y)=x+y\ ,}again along the circleg(x,y)=x2+y2−1=0.{\displaystyle \ g(x,y)=x^{2}+y^{2}-1=0~.}Now the level sets off{\displaystyle f}are still lines of slope −1, and the points on the circle tangent to these level sets are again(2/2,2/2){\displaystyle \ ({\sqrt {2}}/2,{\sqrt {2}}/2)\ }and(−2/2,−2/2).{\displaystyle \ (-{\sqrt {2}}/2,-{\sqrt {2}}/2)~.}These tangency points are maxima off.{\displaystyle \ f~.} On the other hand, the minima occur on the level set forf=0{\displaystyle \ f=0\ }(since by its constructionf{\displaystyle \ f\ }cannot take negative values), at(2/2,−2/2){\displaystyle \ ({\sqrt {2}}/2,-{\sqrt {2}}/2)\ }and(−2/2,2/2),{\displaystyle \ (-{\sqrt {2}}/2,{\sqrt {2}}/2)\ ,}where the level curves off{\displaystyle \ f\ }are not tangent to the constraint. The condition that∇x,y,λ(f(x,y)+λ⋅g(x,y))=0{\displaystyle \ \nabla _{x,y,\lambda }\left(f(x,y)+\lambda \cdot g(x,y)\right)=0\ }correctly identifies all four points as extrema; the minima are characterized byλ=0{\displaystyle \ \lambda =0\ }and the maxima byλ=−2.{\displaystyle \ \lambda =-2~.} This example deals with more strenuous calculations, but it is still a single constraint problem.
Suppose one wants to find the maximum values off(x,y)=x2y{\displaystyle f(x,y)=x^{2}y}with the condition that thex{\displaystyle \ x\ }- andy{\displaystyle \ y\ }-coordinates lie on the circle around the origin with radius3.{\displaystyle \ {\sqrt {3\ }}~.}That is, subject to the constraintg(x,y)=x2+y2−3=0.{\displaystyle g(x,y)=x^{2}+y^{2}-3=0~.} As there is just a single constraint, there is a single multiplier, sayλ.{\displaystyle \ \lambda ~.} The constraintg(x,y){\displaystyle \ g(x,y)\ }is identically zero on the circle of radius3.{\displaystyle \ {\sqrt {3\ }}~.}Any multiple ofg(x,y){\displaystyle \ g(x,y)\ }may be added tof(x,y){\displaystyle \ f(x,y)\ }leavingf(x,y){\displaystyle \ f(x,y)\ }unchanged in the region of interest (on the circle where our original constraint is satisfied). Applying the ordinary Lagrange multiplier method yieldsL(x,y,λ)=f(x,y)+λ⋅g(x,y)=x2y+λ(x2+y2−3),{\displaystyle {\begin{aligned}{\mathcal {L}}(x,y,\lambda )&=f(x,y)+\lambda \cdot g(x,y)\\&=x^{2}y+\lambda (x^{2}+y^{2}-3)\ ,\end{aligned}}}from which the gradient can be calculated:∇x,y,λL(x,y,λ)=(∂L∂x,∂L∂y,∂L∂λ)=(2xy+2λx,x2+2λy,x2+y2−3).{\displaystyle {\begin{aligned}\nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )&=\left({\frac {\partial {\mathcal {L}}}{\partial x}},{\frac {\partial {\mathcal {L}}}{\partial y}},{\frac {\partial {\mathcal {L}}}{\partial \lambda }}\right)\\&=\left(2xy+2\lambda x,x^{2}+2\lambda y,x^{2}+y^{2}-3\right)~.\end{aligned}}}And therefore:∇x,y,λL(x,y,λ)=0⟺{2xy+2λx=0x2+2λy=0x2+y2−3=0⟺{x(y+λ)=0(i)x2=−2λy(ii)x2+y2=3(iii){\displaystyle \nabla _{x,y,\lambda }{\mathcal {L}}(x,y,\lambda )=0\quad \iff \quad {\begin{cases}2xy+2\lambda x=0\\x^{2}+2\lambda y=0\\x^{2}+y^{2}-3=0\end{cases}}\quad \iff \quad {\begin{cases}x(y+\lambda )=0&{\text{(i)}}\\x^{2}=-2\lambda y&{\text{(ii)}}\\x^{2}+y^{2}=3&{\text{(iii)}}\end{cases}}}(iii) is just the original constraint.
(i) impliesx=0{\displaystyle \ x=0\ }orλ=−y.{\displaystyle \ \lambda =-y~.}Ifx=0{\displaystyle x=0}theny=±3{\displaystyle \ y=\pm {\sqrt {3\ }}\ }by (iii) and consequentlyλ=0{\displaystyle \ \lambda =0\ }from (ii). Ifλ=−y,{\displaystyle \ \lambda =-y\ ,}substituting this into (ii) yieldsx2=2y2.{\displaystyle \ x^{2}=2y^{2}~.}Substituting this into (iii) and solving fory{\displaystyle \ y\ }givesy=±1.{\displaystyle \ y=\pm 1~.}Thus there are six critical points ofL:{\displaystyle \ {\mathcal {L}}\ :}(2,1,−1);(−2,1,−1);(2,−1,1);(−2,−1,1);(0,3,0);(0,−3,0).{\displaystyle ({\sqrt {2\ }},1,-1);\quad (-{\sqrt {2\ }},1,-1);\quad ({\sqrt {2\ }},-1,1);\quad (-{\sqrt {2\ }},-1,1);\quad (0,{\sqrt {3\ }},0);\quad (0,-{\sqrt {3\ }},0)~.} Evaluating the objective at these points, one finds thatf(±2,1)=2;f(±2,−1)=−2;f(0,±3)=0.{\displaystyle f(\pm {\sqrt {2\ }},1)=2;\quad f(\pm {\sqrt {2\ }},-1)=-2;\quad f(0,\pm {\sqrt {3\ }})=0~.} Therefore, the objective function attains theglobal maximum(subject to the constraints) at(±2,1){\displaystyle \ (\pm {\sqrt {2\ }},1\ )}and theglobal minimumat(±2,−1).{\displaystyle \ (\pm {\sqrt {2\ }},-1)~.}The point(0,3){\displaystyle \ (0,{\sqrt {3\ }})\ }is alocal minimumoff{\displaystyle \ f\ }and(0,−3){\displaystyle \ (0,-{\sqrt {3\ }})\ }is alocal maximumoff,{\displaystyle \ f\ ,}as may be determined by consideration of theHessian matrixofL(x,y,0).{\displaystyle \ {\mathcal {L}}(x,y,0)~.} Note that while(2,1,−1){\displaystyle \ ({\sqrt {2\ }},1,-1)\ }is a critical point ofL,{\displaystyle \ {\mathcal {L}}\ ,}it is not a local extremum ofL.{\displaystyle \ {\mathcal {L}}~.}We haveL(2+ε,1,−1+δ)=2+δ(ε2+(22)ε).{\displaystyle {\mathcal {L}}\left({\sqrt {2\ }}+\varepsilon ,1,-1+\delta \right)=2+\delta \left(\varepsilon ^{2}+\left(2{\sqrt {2\ }}\right)\varepsilon \right)~.} Given any neighbourhood of(2,1,−1),{\displaystyle \ ({\sqrt {2\ }},1,-1)\ ,}one can choose a small positiveε{\displaystyle \ \varepsilon \ }and a smallδ{\displaystyle \ \delta \ }of 
either sign to getL{\displaystyle \ {\mathcal {L}}}values both greater and less than2.{\displaystyle \ 2~.}This can also be seen from the Hessian matrix ofL{\displaystyle \ {\mathcal {L}}\ }evaluated at this point (or indeed at any of the critical points) which is anindefinite matrix. Each of the critical points ofL{\displaystyle \ {\mathcal {L}}\ }is asaddle pointofL.{\displaystyle \ {\mathcal {L}}~.}[4] Suppose we wish to find thediscrete probability distributionon the points{p1,p2,…,pn}{\displaystyle \ \{p_{1},p_{2},\ldots ,p_{n}\}\ }with maximalinformation entropy. This is the same as saying that we wish to find theleast structuredprobability distribution on the points{p1,p2,⋯,pn}.{\displaystyle \ \{p_{1},p_{2},\cdots ,p_{n}\}~.}In other words, we wish to maximize theShannon entropyequation:f(p1,p2,…,pn)=−∑j=1npjlog2⁡pj.{\displaystyle f(p_{1},p_{2},\ldots ,p_{n})=-\sum _{j=1}^{n}p_{j}\log _{2}p_{j}~.} For this to be a probability distribution the sum of the probabilitiespi{\displaystyle \ p_{i}\ }at each pointxi{\displaystyle \ x_{i}\ }must equal 1, so our constraint is:g(p1,p2,…,pn)=∑j=1npj=1.{\displaystyle g(p_{1},p_{2},\ldots ,p_{n})=\sum _{j=1}^{n}p_{j}=1~.} We use Lagrange multipliers to find the point of maximum entropy,p→∗,{\displaystyle \ {\vec {p}}^{\,*}\ ,}across all discrete probability distributionsp→{\displaystyle \ {\vec {p}}\ }on{x1,x2,…,xn}.{\displaystyle \ \{x_{1},x_{2},\ldots ,x_{n}\}~.}We require that:∂∂p→(f+λ(g−1))|p→=p→∗=0,{\displaystyle \left.{\frac {\partial }{\partial {\vec {p}}}}(f+\lambda (g-1))\right|_{{\vec {p}}={\vec {p}}^{\,*}}=0\ ,}which gives a system ofnequations,k=1,…,n,{\displaystyle \ k=1,\ \ldots ,n\ ,}such that:∂∂pk{−(∑j=1npjlog2⁡pj)+λ(∑j=1npj−1)}|pk=p⋆k=0.{\displaystyle \left.{\frac {\partial }{\partial p_{k}}}\left\{-\left(\sum _{j=1}^{n}p_{j}\log _{2}p_{j}\right)+\lambda \left(\sum _{j=1}^{n}p_{j}-1\right)\right\}\right|_{p_{k}=p_{\star k}}=0~.} Carrying out the differentiation of thesenequations, we 
get−(1ln⁡2+log2⁡p⋆k)+λ=0.{\displaystyle -\left({\frac {1}{\ln 2}}+\log _{2}p_{\star k}\right)+\lambda =0~.} This shows that allp⋆k{\displaystyle \ p_{\star k}\ }are equal (because they depend onλonly). By using the constraint∑jpj=1,{\displaystyle \sum _{j}p_{j}=1\ ,}we findp⋆k=1n.{\displaystyle p_{\star k}={\frac {1}{n}}~.} Hence, the uniform distribution is the distribution with the greatest entropy, among distributions onnpoints. The critical points of Lagrangians occur atsaddle points, rather than at local maxima (or minima).[4][17]Unfortunately, many numerical optimization techniques, such ashill climbing,gradient descent, some of thequasi-Newton methods, among others, are designed to find local maxima (or minima) and not saddle points. For this reason, one must either modify the formulation to ensure that it's a minimization problem (for example, by extremizing the square of thegradientof the Lagrangian as below), or else use an optimization technique that findsstationary points(such asNewton's methodwithout an extremum seekingline search) and not necessarily extrema. As a simple example, consider the problem of finding the value ofxthat minimizesf(x)=x2,{\displaystyle \ f(x)=x^{2}\ ,}constrained such thatx2=1.{\displaystyle \ x^{2}=1~.}(This problem is somewhat untypical because there are only two values that satisfy this constraint, but it is useful for illustration purposes because the corresponding unconstrained function can be visualized in three dimensions.) Using Lagrange multipliers, this problem can be converted into an unconstrained optimization problem:L(x,λ)=x2+λ(x2−1).{\displaystyle {\mathcal {L}}(x,\lambda )=x^{2}+\lambda (x^{2}-1)~.} The two critical points occur at saddle points wherex= 1andx= −1. In order to solve this problem with a numerical optimization technique, we must first transform this problem such that the critical points occur at local minima. 
This is done by computing the magnitude of the gradient of the unconstrained optimization problem. First, we compute the partial derivative of the unconstrained problem with respect to each variable:∂L∂x=2x+2xλ∂L∂λ=x2−1.{\displaystyle {\begin{aligned}&{\frac {\partial {\mathcal {L}}}{\partial x}}=2x+2x\lambda \\[5pt]&{\frac {\partial {\mathcal {L}}}{\partial \lambda }}=x^{2}-1~.\end{aligned}}} If the target function is not easily differentiable, the differential with respect to each variable can be approximated as∂L∂x≈L(x+ε,λ)−L(x,λ)ε,∂L∂λ≈L(x,λ+ε)−L(x,λ)ε,{\displaystyle {\begin{aligned}{\frac {\ \partial {\mathcal {L}}\ }{\partial x}}\approx {\frac {{\mathcal {L}}(x+\varepsilon ,\lambda )-{\mathcal {L}}(x,\lambda )}{\varepsilon }},\\[5pt]{\frac {\ \partial {\mathcal {L}}\ }{\partial \lambda }}\approx {\frac {{\mathcal {L}}(x,\lambda +\varepsilon )-{\mathcal {L}}(x,\lambda )}{\varepsilon }},\end{aligned}}}whereε{\displaystyle \varepsilon }is a small value. Next, we compute the magnitude of the gradient, which is the square root of the sum of the squares of the partial derivatives:h(x,λ)=(2x+2xλ)2+(x2−1)2≈(L(x+ε,λ)−L(x,λ)ε)2+(L(x,λ+ε)−L(x,λ)ε)2.{\displaystyle {\begin{aligned}h(x,\lambda )&={\sqrt {(2x+2x\lambda )^{2}+(x^{2}-1)^{2}\ }}\\[4pt]&\approx {\sqrt {\left({\frac {\ {\mathcal {L}}(x+\varepsilon ,\lambda )-{\mathcal {L}}(x,\lambda )\ }{\varepsilon }}\right)^{2}+\left({\frac {\ {\mathcal {L}}(x,\lambda +\varepsilon )-{\mathcal {L}}(x,\lambda )\ }{\varepsilon }}\right)^{2}\ }}~.\end{aligned}}} (Since magnitude is always non-negative, optimizing over the squared-magnitude is equivalent to optimizing over the magnitude. Thus, the "square root" may be omitted from these equations with no expected difference in the results of optimization.) 
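The strategy just described can be sketched in a few lines of Python (the step size and starting point below are arbitrary illustrative choices, not from the article): plain gradient descent on the squared gradient magnitude h(x, λ)² = (2x + 2xλ)² + (x² − 1)² converges to a critical point of the Lagrangian, even though that point is a saddle point of the Lagrangian itself.

```python
# Gradient descent on h(x, lam)^2 = (2x + 2x*lam)^2 + (x^2 - 1)^2, the
# squared gradient magnitude of L(x, lam) = x^2 + lam*(x^2 - 1).
def grad_h_squared(x, lam):
    gx = 2*x + 2*x*lam                 # dL/dx
    gl = x*x - 1                       # dL/dlam
    # chain rule applied to gx**2 + gl**2
    return (2*gx*(2 + 2*lam) + 2*gl*2*x, 2*gx*2*x)

x, lam = 2.0, 0.0                      # arbitrary starting point
lr = 0.002                             # small, fixed step size
for _ in range(20000):
    dx, dlam = grad_h_squared(x, lam)
    x, lam = x - lr*dx, lam - lr*dlam

print(round(x, 4), round(lam, 4))      # converges to the critical point (1, -1)
```

Starting from a negative x would instead converge to (−1, −1), the other critical point.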
The critical points ofhoccur atx= 1andx= −1, just as inL.{\displaystyle {\mathcal {L}}~.}Unlike the critical points inL,{\displaystyle {\mathcal {L}}\,,}however, the critical points inhoccur at local minima, so numerical optimization techniques can be used to find them. Inoptimal controltheory, the Lagrange multipliers are interpreted ascostatevariables, and the method of Lagrange multipliers is reformulated as the minimization of theHamiltonian, inPontryagin's maximum principle. The Lagrange multiplier method has several generalizations. Innonlinear programmingthere are several multiplier rules, e.g. the Carathéodory–John Multiplier Rule and the Convex Multiplier Rule, for inequality constraints.[18] In many models inmathematical economicssuch asgeneral equilibrium models, consumer behavior is implemented asutility maximizationand firm behavior asprofit maximization, both entities being subject to constraints such asbudget constraintsandproduction constraints. The usual way to determine an optimal solution is by maximizing some function, where the constraints are enforced using Lagrange multipliers.[19][20][21][22] Methods based on Lagrange multipliers have applications inpower systems, e.g. in distributed-energy-resources (DER) placement and load shedding.[23] The method of Lagrange multipliers applies toconstrained Markov decision processes.[24]It naturally produces gradient-based primal-dual algorithms in safe reinforcement learning.[25] Lagrange multipliers also play an important role in PDE problems with constraints, i.e., in the study of the properties of normalized solutions.
https://en.wikipedia.org/wiki/Lagrange_multiplier
Inmathematics, more specifically in the theory ofMonte Carlo methods,variance reductionis a procedure used to increase theprecisionof theestimatesobtained for a given simulation or computational effort.[1]Every output random variable from the simulation is associated with avariancewhich limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smallerconfidence intervalsfor the output random variable of interest, variance reduction techniques can be used. The main variance reduction methods arecommon random numbers,antithetic variates,control variates,importance samplingandstratified sampling. For simulation withblack-boxmodels,subset simulationandline samplingcan also be used. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of "weight windows" and "splitting/Russian roulette" techniques, which are a form of importance sampling. Suppose one wants to computez:=E(Z){\displaystyle z:=E(Z)}with the random variableZ{\displaystyle Z}defined on theprobability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}. Monte Carlo does this by samplingni.i.d. copiesZ1,...,Zn{\displaystyle Z_{1},...,Z_{n}}ofZ{\displaystyle Z}and then estimatingz{\displaystyle z}via the sample-mean estimatorz¯=1n∑i=1nZi.{\displaystyle {\overline {z}}={\frac {1}{n}}\sum _{i=1}^{n}Z_{i}~.}Under further mild conditions such asvar(Z)<∞{\displaystyle var(Z)<\infty }, acentral limit theoremwill apply such that asn→∞{\displaystyle n\rightarrow \infty }, the distribution ofz¯{\displaystyle {\overline {z}}}converges to a normal distribution with meanz{\displaystyle z}and standard errorσ/n{\displaystyle \sigma /{\sqrt {n}}}, whereσ2=var(Z){\displaystyle \sigma ^{2}=var(Z)}.
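The σ/√n scaling of the standard error can be observed directly in a short simulation. The sketch below is illustrative only: it estimates z = E[U²] = 1/3 for U uniform on (0, 1), repeats the estimation many times at each sample size, and measures the spread of the resulting estimates.

```python
import random
import statistics

# Illustrative Monte Carlo experiment: the sample-mean estimator of
# z = E[U^2] = 1/3 for U ~ Uniform(0, 1).
def sample_mean(n, rng):
    return sum(rng.random()**2 for _ in range(n)) / n

rng = random.Random(0)
spreads = {}
for n in (100, 400, 1600):
    # 200 independent replications of the estimator at this sample size
    estimates = [sample_mean(n, rng) for _ in range(200)]
    spreads[n] = statistics.stdev(estimates)
    print(n, round(spreads[n], 4))
# quadrupling n roughly halves the spread of the estimator
```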
Because the standard error only converges towards 0 at the rate 1/√n, implying that one needs to increase the number of simulations n by a factor of 4 to halve the standard deviation of z̄, variance reduction methods are often useful for obtaining more precise estimates of z without needing very large numbers of simulations. Common random numbers (CRN) is a popular and useful variance reduction technique which applies when we are comparing two or more alternative configurations (of a system) instead of investigating a single configuration. CRN has also been called correlated sampling, matched streams or matched pairs. CRN requires synchronization of the random number streams, which ensures that, in addition to using the same random numbers to simulate all configurations, a specific random number used for a specific purpose in one configuration is used for exactly the same purpose in all other configurations. For example, in queueing theory, if we are comparing two different configurations of tellers in a bank, we would want the (random) time of arrival of the N-th customer to be generated using the same draw from a random number stream for both configurations. Suppose X_1j and X_2j are the observations from the first and second configurations on the j-th independent replication, and we want to estimate ξ = E(X_1j) − E(X_2j). If we perform n replications of each configuration and let Z_j = X_1j − X_2j, then E(Z_j) = ξ and Z(n) = (1/n) Σ_{j=1..n} Z_j is an unbiased estimator of ξ. And since the Z_j's are independent identically distributed random variables, Var(Z(n)) = Var(Z_j)/n = [Var(X_1j) + Var(X_2j) − 2 Cov(X_1j, X_2j)]/n. In the case of independent sampling, i.e., when no common random numbers are used, Cov(X_1j, X_2j) = 0.
But if we succeed in inducing positive correlation between X_1 and X_2, so that Cov(X_1j, X_2j) > 0, it can be seen from the equation above that the variance is reduced. It can also be observed that if the CRN induces a negative correlation, i.e., Cov(X_1j, X_2j) < 0, the technique can actually backfire, increasing the variance rather than decreasing it (as intended).[2]
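The CRN idea above can be sketched with a toy model. The `config` function below is a hypothetical system response (a sum of exponential service times drawn by inverse transform from a synchronized uniform stream), an assumption of ours rather than anything from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def config(u, speedup):
    # Hypothetical response: total of 10 exponential service times,
    # generated by inverse transform from the uniform stream u.
    return np.sum(-np.log(u) / speedup)

n = 2000
z_crn, z_indep = [], []
for _ in range(n):
    u = rng.random(10)  # one synchronized stream per replication
    v = rng.random(10)  # a separate stream for the independent case
    z_crn.append(config(u, 1.0) - config(u, 1.2))    # same draws reused
    z_indep.append(config(u, 1.0) - config(v, 1.2))  # fresh draws
print(np.var(z_crn), np.var(z_indep))  # CRN variance is much smaller
```

Reusing the stream makes X_1j and X_2j strongly positively correlated, so the variance of their difference collapses, exactly the Cov(X_1j, X_2j) > 0 case discussed above.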
https://en.wikipedia.org/wiki/Variance_reduction
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. It is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for x, and thus the condition number of the (local) inverse must be used.[1][2] The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables), there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.
As a rule of thumb, if the condition number κ(A) = 10^k, then you may lose up to k digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.[3] However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm used to measure the inaccuracy). For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution x will change with respect to a change in b. Thus, if the condition number is large, even a small error in b may cause a large error in x. On the other hand, if the condition number is small, then the error in x will not be much bigger than the error in b. The condition number is defined more precisely to be the maximum ratio of the relative error in x to the relative error in b. Let e be the error in b. Assuming that A is a nonsingular matrix, the error in the solution A⁻¹b is A⁻¹e. The ratio of the relative error in the solution to the relative error in b is then (‖A⁻¹e‖/‖A⁻¹b‖) / (‖e‖/‖b‖). The maximum value (for nonzero b and e) is then seen to be the product of the two operator norms, κ(A) = ‖A⁻¹‖‖A‖. The same definition is used for any consistent norm, i.e. one that satisfies ‖AB‖ ≤ ‖A‖‖B‖. When the condition number is exactly one (which can only happen if A is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data.
However, this does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy in the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If ‖·‖ is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the L² norm and typically denoted as ‖·‖₂), then κ(A) = ‖A‖₂‖A⁻¹‖₂ = σ_max(A)/σ_min(A), where σ_max(A) and σ_min(A) are the maximal and minimal singular values of A, respectively. The condition number with respect to L² arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If ‖·‖ is the matrix norm induced by the L^∞ (vector) norm and A is lower triangular and non-singular (i.e. a_ii ≠ 0 for all i), then κ(A) ≥ max_i |a_ii| / min_i |a_ii|, recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods).
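A quick numerical illustration of the preceding discussion, using NumPy's `numpy.linalg.cond` (which uses the 2-norm by default); the specific matrices are arbitrary examples of our own:

```python
import numpy as np

# A well-conditioned and an ill-conditioned 2x2 system Ax = b.
A_good = np.array([[2.0, 0.0], [0.0, 1.0]])
A_bad = np.array([[1.0, 1.0], [1.0, 1.0001]])
print(np.linalg.cond(A_good), np.linalg.cond(A_bad))  # ~2 versus ~4e4

# A tiny perturbation of b moves the solution of the bad system a lot.
b = np.array([2.0, 2.0])
db = np.array([0.0, 1e-4])
x = np.linalg.solve(A_bad, b)
x_pert = np.linalg.solve(A_bad, b + db)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
amp = rel_out / rel_in
print(amp)  # large amplification, bounded above by cond(A_bad)
```

Here a relative input error of about 3.5e-5 produces a relative output error of about 0.7, an amplification of roughly 2e4, consistent with the bound κ(A) ≈ 4e4.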
If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or the solution of a linear system of equations, is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, the condition number can be defined as κ(A) = ‖A‖‖A†‖, where A† is the Moore–Penrose pseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations. Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function, or over the domain of the question, as an overall condition number, while in other cases the condition number at a particular point is of more interest. The absolute condition number of a differentiable function f in one variable is the absolute value of the derivative of the function, |f′(x)|. The relative condition number of f as a function is |xf′/f|. Evaluated at a point x, this is |xf′(x)/f(x)|. Note that this is the absolute value of the elasticity of a function in economics. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of f, which is (log f)′ = f′/f, and the logarithmic derivative of x, which is (log x)′ = x′/x = 1/x, yielding a ratio of xf′/f.
This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative f′ scaled by the value of f. Note that if a function has a zero at a point, its condition number at that point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change Δx in x, the relative change in x is [(x + Δx) − x]/x = (Δx)/x, while the relative change in f(x) is [f(x + Δx) − f(x)]/f(x). Taking the ratio yields (x/f(x)) · [f(x + Δx) − f(x)]/Δx. The last factor is the difference quotient (the slope of the secant line), and taking the limit yields the derivative. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative. Condition numbers can be defined for any function f mapping its data from some domain (e.g. an m-tuple of real numbers x) into some codomain (e.g. an n-tuple of real numbers f(x)), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues.
The condition number of f at a point x (specifically, its relative condition number[4]) is then defined to be the maximum ratio of the fractional change in f(x) to any fractional change in x, in the limit where the change δx in x becomes infinitesimally small:[4] cond(f, x) = lim_{ε→0} sup_{‖δx‖≤ε} (‖f(x + δx) − f(x)‖/‖f(x)‖) / (‖δx‖/‖x‖), where ‖·‖ is a norm on the domain/codomain of f. If f is differentiable, this is equivalent to:[4] cond(f, x) = ‖J(x)‖‖x‖/‖f(x)‖, where J(x) denotes the Jacobian matrix of partial derivatives of f at x, and ‖J(x)‖ is the induced norm on the matrix.
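Both the scalar formula |xf′/f| and the Jacobian-based definition can be sketched directly; the example functions below are our own choices, not from the text:

```python
import math
import numpy as np

def rel_cond(f, fprime, x):
    """Relative condition number |x f'(x) / f(x)| of a scalar function."""
    return abs(x * fprime(x) / f(x))

# exp has condition number |x|; log blows up near its zero at x = 1.
print(rel_cond(math.exp, math.exp, 10.0))            # 10.0
print(rel_cond(math.log, lambda x: 1.0 / x, 1.001))  # ~1000

def rel_cond_vec(jacobian, f, x):
    """Multivariate version ||J(x)|| ||x|| / ||f(x)|| with the 2-norm."""
    return np.linalg.norm(jacobian(x), 2) * np.linalg.norm(x) / np.linalg.norm(f(x))

# f(x, y) = (x + y, x - y) is an isometry up to scaling, so well-conditioned.
f = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
J = lambda v: np.array([[1.0, 1.0], [1.0, -1.0]])
print(rel_cond_vec(J, f, np.array([3.0, 4.0])))  # 1.0
```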
https://en.wikipedia.org/wiki/Ill-conditioned_matrix
In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension.[1] A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. A sequence of vectors v1, v2, ..., vk from a vector space V is said to be linearly dependent if there exist scalars a1, a2, ..., ak, not all zero, such that a1v1 + a2v2 + ⋯ + akvk = 0, where 0 denotes the zero vector. This implies that at least one of the scalars is nonzero, say a1 ≠ 0, and the above equation can be written as v1 = −(a2/a1)v2 − ⋯ − (ak/a1)vk if k > 1, and v1 = 0 if k = 1. Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others. A sequence of vectors v1, v2, ..., vn is said to be linearly independent if it is not linearly dependent, that is, if the equation a1v1 + ⋯ + anvn = 0 can only be satisfied by ai = 0 for i = 1, ..., n. This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence.
In other words, a sequence of vectors is linearly independent if the only representation of 0 as a linear combination of its vectors is the trivial representation in which all the scalars ai are zero.[2] Even more concisely, a sequence of vectors is linearly independent if and only if 0 can be represented as a linear combination of its vectors in a unique way. If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: a finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result, which is often useful: a sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. An infinite set of vectors is linearly independent if every finite subset is linearly independent. This definition also applies to finite sets of vectors, since a finite set is a finite subset of itself, and every subset of a linearly independent set is also linearly independent. Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent. A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, x, x², ...} as a basis. Let V be a vector space.
A set X ⊆ V is linearly independent if and only if X is a minimal element, with respect to the inclusion order, of the family of subsets of V having the same span as X. In contrast, X is linearly dependent if it has a proper subset whose span is a superset of X. A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location. In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent; that is, one of the three vectors is unnecessary to define a specific location on a plane. Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space. If one or more vectors from a given sequence of vectors v1, ..., vk is the zero vector 0, then the vectors v1, ..., vk are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that i is an index (i.e.
an element of {1, ..., k}) such that vi = 0. Then let ai := 1 (alternatively, letting ai equal any other non-zero scalar will also work) and let all other scalars be 0 (explicitly, for any index j other than i, i.e. for j ≠ i, let aj := 0, so that consequently ajvj = 0vj = 0). Simplifying a1v1 + ⋯ + akvk then gives aivi = 1·0 = 0. Because not all scalars are zero (in particular, ai ≠ 0), this proves that the vectors v1, ..., vk are linearly dependent. As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent. Now consider the special case where the sequence v1, ..., vk has length 1 (i.e. the case where k = 1). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if v1 is any vector, then the sequence v1 (which is a sequence of length 1) is linearly dependent if and only if v1 = 0; alternatively, the collection v1 is linearly independent if and only if v1 ≠ 0. The next example considers the special case where there are exactly two vectors u and v from some real or complex vector space.
The vectors u and v are linearly dependent if and only if at least one of the following is true: (1) u is a scalar multiple of v, or (2) v is a scalar multiple of u. If u = 0, then by setting c := 0 we have cv = 0v = 0 = u (this equality holds no matter what the value of v is), which shows that (1) is true in this particular case. Similarly, if v = 0 then (2) is true because v = 0u. If u = v (for instance, if they are both equal to the zero vector 0) then both (1) and (2) are true (by using c := 1 for both). If u = cv, then u ≠ 0 is only possible if c ≠ 0 and v ≠ 0; in this case, it is possible to multiply both sides by 1/c to conclude v = (1/c)u. This shows that if u ≠ 0 and v ≠ 0, then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If u = cv but instead u = 0, then at least one of c and v must be zero. Moreover, if exactly one of u and v is 0 (while the other is non-zero), then exactly one of (1) and (2) is true (with the other being false).
The vectors u and v are linearly independent if and only if u is not a scalar multiple of v and v is not a scalar multiple of u. Three vectors: Consider the set of vectors v1 = (1, 1), v2 = (−3, 2), and v3 = (2, 4); the condition for linear dependence seeks a set of non-zero scalars such that a1v1 + a2v2 + a3v3 = 0. Row reduce this matrix equation by subtracting the first row from the second, then continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying it by 3 and adding it to the first row. Rearranging the resulting equation allows us to obtain non-zero ai such that v3 = (2, 4) can be defined in terms of v1 = (1, 1) and v2 = (−3, 2). Thus, the three vectors are linearly dependent. Two vectors: Now consider the linear dependence of the two vectors v1 = (1, 1) and v2 = (−3, 2), and check a1v1 + a2v2 = 0. The same row reduction presented above yields ai = 0, which means that the vectors v1 = (1, 1) and v2 = (−3, 2) are linearly independent. In order to determine whether three given vectors in R⁴ are linearly dependent, form the matrix equation, row reduce it, and rearrange to solve for v3. This equation is easily solved to define non-zero ai, where a3 can be chosen arbitrarily. Thus, the vectors v1, v2, and v3 are linearly dependent.
An alternative method relies on the fact that n vectors in Rⁿ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In this case, the matrix formed by the vectors (1, 1) and (−3, 2) has rows (1, −3) and (1, 2). We may write a linear combination of the columns as AΛ, for a column vector Λ of coefficients, and we are interested in whether AΛ = 0 for some nonzero vector Λ. This depends on the determinant of A, which is det A = 1·2 − 1·(−3) = 5. Since the determinant is non-zero, the vectors (1, 1) and (−3, 2) are linearly independent. Otherwise, suppose we have m vectors of n coordinates each, with m < n. Then A is an n×m matrix and Λ is a column vector with m entries, and we are again interested in AΛ = 0. As we saw previously, this is equivalent to a list of n equations. Consider the first m rows of A, i.e. the first m equations; any solution of the full list of equations must also satisfy the reduced list. In fact, if ⟨i1, ..., im⟩ is any list of m rows, then the equation A⟨i1,...,im⟩Λ = 0, where A⟨i1,...,im⟩ is the submatrix formed by those rows, must be true for those rows. Furthermore, the reverse is true. That is, we can test whether the m vectors are linearly dependent by testing whether det A⟨i1,...,im⟩ = 0 for all possible lists of m rows. (In case m = n, this requires only one determinant, as above. If m > n, then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available. If there are more vectors than dimensions, the vectors are linearly dependent.
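The determinant test just described, and the rank test that handles the m < n case, can be sketched as follows, using the same vectors (1, 1) and (−3, 2) from the example:

```python
import numpy as np

def independent(vectors, tol=1e-12):
    """Determinant test for n vectors in R^n (vectors become columns)."""
    A = np.column_stack(vectors)
    return abs(np.linalg.det(A)) > tol

def independent_rect(vectors):
    """Rank test: works for m vectors in R^n with m <= n."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(independent([(1, 1), (-3, 2)]))  # True: det = 5
print(independent([(1, 1), (2, 2)]))   # False: det = 0
print(independent_rect([(1, 1, 0, 0), (0, 1, 1, 0)]))  # True: rank 2
```

In floating point, an exact zero determinant is rare, so both tests use a tolerance; as the text notes, rank-based methods are the more practical route for larger problems.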
This is illustrated in the example above of three vectors in R². Let V = Rⁿ and consider the following elements in V, known as the natural basis vectors: e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1). Then e1, e2, ..., en are linearly independent. Suppose that a1, a2, ..., an are real numbers such that a1e1 + a2e2 + ⋯ + anen = 0. Since a1e1 + ⋯ + anen = (a1, a2, ..., an), it follows that ai = 0 for all i = 1, ..., n. Let V be the vector space of all differentiable functions of a real variable t. Then the functions e^t and e^{2t} in V are linearly independent. Suppose a and b are two real numbers such that ae^t + be^{2t} = 0 for all values of t. We need to show that a = 0 and b = 0. In order to do this, take the first derivative of the above equation, ae^t + 2be^{2t} = 0 for all values of t, and subtract the first equation from the second, giving be^{2t} = 0. Since e^{2t} is not zero for some t, b = 0. It follows that a = 0 too. Therefore, according to the definition of linear independence, e^t and e^{2t} are linearly independent. A linear dependency or linear relation among vectors v1, ..., vn is a tuple (a1, ..., an) with n scalar components such that a1v1 + ⋯ + anvn = 0. If such a linear dependence exists with at least one nonzero component, then the n vectors are linearly dependent. Linear dependencies among v1, ..., vn form a vector space. If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination.
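As a numerical counterpart to the last paragraph: the linear dependencies form the null space of the matrix whose columns are the given vectors. The text mentions Gaussian elimination; the sketch below uses the SVD instead, which is the usual numerical route to a null-space basis:

```python
import numpy as np

def dependencies(vectors, tol=1e-10):
    """Basis of the space of linear dependencies: the null space of the
    matrix whose columns are the vectors, computed via the SVD."""
    A = np.column_stack(vectors)
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:]  # each row (a1, ..., an) satisfies sum_i ai*vi = 0

# The dependent triple from the example: (1,1), (-3,2), (2,4).
rel = dependencies([(1, 1), (-3, 2), (2, 4)])
print(rel.shape)  # (1, 3): a one-dimensional space of dependencies
a = rel[0]
print(np.allclose(a[0] * np.array([1, 1])
                  + a[1] * np.array([-3, 2])
                  + a[2] * np.array([2, 4]), 0))  # True
```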
A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Contrapositively, every linearly independent set is affinely independent. Note that an affinely independent set is not necessarily linearly independent. Consider a set of m vectors v1, ..., vm of size n each, and consider the set of m augmented vectors [1; v1], ..., [1; vm] of size n + 1 each (each original vector with a 1 prepended). The original vectors are affinely independent if and only if the augmented vectors are linearly independent.[3]: 256 Two vector subspaces M and N of a vector space X are said to be linearly independent if M ∩ N = {0}.[4] More generally, a collection M1, ..., Md of subspaces of X is said to be linearly independent if Mi ∩ Σ_{k≠i} Mk = {0} for every index i, where Σ_{k≠i} Mk = {m1 + ⋯ + m_{i−1} + m_{i+1} + ⋯ + md : mk ∈ Mk for all k} = span ⋃_{k≠i} Mk.[4] The vector space X is said to be a direct sum of M1, ..., Md if these subspaces are linearly independent and M1 + ⋯ + Md = X.
https://en.wikipedia.org/wiki/Linear_independence
The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective. The method approximates the optimal importance sampling estimator by repeating two phases:[1] (1) draw a sample from a probability distribution, and (2) minimize the cross-entropy between this distribution and a target distribution to produce a better sample in the next iteration. Reuven Rubinstein developed the method in the context of rare-event simulation, where tiny probabilities must be estimated, for example in network reliability analysis, queueing models, or performance analysis of telecommunication systems. The method has also been applied to the traveling salesman, quadratic assignment, DNA sequence alignment, max-cut and buffer allocation problems. Consider the general problem of estimating the quantity ℓ = E_u[H(X)] = ∫ H(x) f(x; u) dx, where H is some performance function and f(x; u) is a member of some parametric family of distributions. Using importance sampling this quantity can be estimated as ℓ̂ = (1/N) Σ_{i=1..N} H(X_i) f(X_i; u)/g(X_i), where X_1, ..., X_N is a random sample from g. For positive H, the theoretically optimal importance sampling density (PDF) is given by g*(x) = H(x) f(x; u)/ℓ. This, however, depends on the unknown ℓ. The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF g*. In several cases, the solution to step 3 can be found analytically.
Situations in which this occurs notably include the case where f belongs to the natural exponential family. The same CE algorithm can be used for optimization, rather than estimation. Suppose the problem is to maximize some function S, for example, S(x) = e^{−(x−2)²} + 0.8 e^{−(x+2)²}. To apply CE, one first considers the associated stochastic problem of estimating P_θ(S(X) ≥ γ) for a given level γ and parametric family {f(·; θ)}, for example the 1-dimensional Gaussian distribution, parameterized by its mean μ_t and variance σ_t² (so θ = (μ, σ²) here). Hence, for a given γ, the goal is to find θ so that D_KL(I_{S(x)≥γ} ‖ f_θ) is minimized. This is done by solving the sample version (stochastic counterpart) of the KL divergence minimization problem, as in step 3 above. It turns out that the parameters that minimize the stochastic counterpart for this choice of target distribution and parametric family are the sample mean and sample variance of the elite samples, i.e. those samples whose objective function value is ≥ γ. The worst of the elite samples is then used as the level parameter for the next iteration. This yields the following randomized algorithm, which happens to coincide with the so-called Estimation of Multivariate Normal Algorithm (EMNA), an estimation of distribution algorithm.
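The CE optimization loop described above can be sketched for the same S(x); the sample size, elite fraction, iteration count, and initial distribution are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def S(x):
    return np.exp(-(x - 2) ** 2) + 0.8 * np.exp(-(x + 2) ** 2)

# Cross-entropy optimization with a 1-D Gaussian sampling family:
# each iteration refits the mean and std to the elite samples.
mu, sigma = 0.0, 10.0  # deliberately broad initial distribution
n, n_elite = 100, 10
for t in range(50):
    x = rng.normal(mu, sigma, n)
    elite = x[np.argsort(S(x))[-n_elite:]]  # samples with S(x) >= gamma
    mu, sigma = elite.mean(), elite.std() + 1e-12
print(mu)  # typically concentrates near x = 2, the higher of the two peaks
```

The implicit level γ here is the worst elite value, matching the update rule in the text; the small constant added to sigma merely prevents the distribution from collapsing to a point.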
https://en.wikipedia.org/wiki/Cross-entropy_method
Inmathematical statistics, theKullback–Leibler(KL)divergence(also calledrelative entropyandI-divergence[1]), denotedDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}, is a type ofstatistical distance: a measure of how much a modelprobability distributionQis different from a true probability distributionP.[2][3]Mathematically, it is defined as DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.} A simpleinterpretationof the KL divergence ofPfromQis theexpectedexcesssurprisefrom usingQas a model instead ofPwhen the actual distribution isP. While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually ametric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast tovariation of information), and does not satisfy thetriangle inequality. Instead, in terms ofinformation geometry, it is a type ofdivergence,[4]a generalization ofsquared distance, and for certain classes of distributions (notably anexponential family), it satisfies a generalizedPythagorean theorem(which applies to squared distances).[5] Relative entropy is always a non-negativereal number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative(Shannon) entropyin information systems, randomness in continuoustime-series, and information gain when comparing statistical models ofinference; and practical, such as applied statistics,fluid mechanics,neuroscience,bioinformatics, andmachine learning. Consider two probability distributionsPandQ. Usually,Prepresents the data, the observations, or a measured probability distribution. DistributionQrepresents instead a theory, a model, a description or an approximation ofP. 
The Kullback–Leibler divergence DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P. Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations. The relative entropy was introduced by Solomon Kullback and Richard Leibler in Kullback & Leibler (1951) as "the mean information for discrimination between H1{\displaystyle H_{1}} and H2{\displaystyle H_{2}} per observation from μ1{\displaystyle \mu _{1}}",[6] where one is comparing two probability measures μ1,μ2{\displaystyle \mu _{1},\mu _{2}}, and H1,H2{\displaystyle H_{1},H_{2}} are the hypotheses that one is selecting from measure μ1,μ2{\displaystyle \mu _{1},\mu _{2}} (respectively). They denoted this by I(1:2){\displaystyle I(1:2)}, and defined the "'divergence' between μ1{\displaystyle \mu _{1}} and μ2{\displaystyle \mu _{2}}" as the symmetrized quantity J(1,2)=I(1:2)+I(2:1){\displaystyle J(1,2)=I(1:2)+I(2:1)}, which had already been defined and used by Harold Jeffreys in 1948.[7] In Kullback (1959), the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as "directed divergences" between two distributions;[8] Kullback preferred the term discrimination information.[9] The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality.[10] Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959, pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence. 
Fordiscrete probability distributionsPandQdefined on the samesample space,X{\displaystyle {\mathcal {X}}},the relative entropy fromQtoPis defined[11]to be DKL(P∥Q)=∑x∈XP(x)log⁡P(x)Q(x),{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,} which is equivalent to DKL(P∥Q)=−∑x∈XP(x)log⁡Q(x)P(x).{\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.} In other words, it is theexpectationof the logarithmic difference between the probabilitiesPandQ, where the expectation is taken using the probabilitiesP. Relative entropy is only defined in this way if, for allx,Q(x)=0{\displaystyle Q(x)=0}impliesP(x)=0{\displaystyle P(x)=0}(absolute continuity). Otherwise, it is often defined as+∞{\displaystyle +\infty },[1]but the value+∞{\displaystyle \ +\infty \ }is possible even ifQ(x)≠0{\displaystyle Q(x)\neq 0}everywhere,[12][13]provided thatX{\displaystyle {\mathcal {X}}}is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. WheneverP(x){\displaystyle P(x)}is zero the contribution of the corresponding term is interpreted as zero because limx→0+xlog⁡(x)=0.{\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.} For distributionsPandQof acontinuous random variable, relative entropy is defined to be the integral[14] DKL(P∥Q)=∫−∞∞p(x)log⁡p(x)q(x)dx,{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,} wherepandqdenote theprobability densitiesofPandQ. 
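The discrete definition, together with the two conventions just stated (zero terms when P(x) = 0, and +∞ when absolute continuity fails), translates directly into Python; this is an illustrative sketch and the function name is an assumption.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) in nats for discrete distributions given as aligned probability lists."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue            # convention: lim_{x -> 0+} x log x = 0
        if qi == 0:
            return math.inf     # P is not absolutely continuous with respect to Q
        total += pi * math.log(pi / qi)
    return total

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))   # identical distributions give 0
print(kl_divergence([1.0, 0.0], [0.5, 0.5]))   # log 2 nats
```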
More generally, ifPandQare probabilitymeasureson ameasurable spaceX,{\displaystyle {\mathcal {X}}\,,}andPisabsolutely continuouswith respect toQ, then the relative entropy fromQtoPis defined as DKL(P∥Q)=∫x∈Xlog⁡P(dx)Q(dx)P(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,} whereP(dx)Q(dx){\displaystyle {\frac {P(dx)}{Q(dx)}}}is theRadon–Nikodym derivativeofPwith respect toQ, i.e. the uniqueQalmost everywhere defined functionronX{\displaystyle {\mathcal {X}}}such thatP(dx)=r(x)Q(dx){\displaystyle P(dx)=r(x)Q(dx)}which exists becausePis absolutely continuous with respect toQ. Also we assume the expression on the right-hand side exists. Equivalently (by thechain rule), this can be written as DKL(P∥Q)=∫x∈XP(dx)Q(dx)log⁡P(dx)Q(dx)Q(dx),{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,} which is theentropyofPrelative toQ. Continuing in this case, ifμ{\displaystyle \mu }is any measure onX{\displaystyle {\mathcal {X}}}for which densitiespandqwithP(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)}andQ(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}exist (meaning thatPandQare both absolutely continuous with respect toμ{\displaystyle \mu }),then the relative entropy fromQtoPis given as DKL(P∥Q)=∫x∈Xp(x)log⁡p(x)q(x)μ(dx).{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.} Note that such a measureμ{\displaystyle \mu }for which densities can be defined always exists, since one can takeμ=12(P+Q){\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)}although in practice it will usually be one that applies in the context likecounting measurefor discrete distributions, orLebesgue measureor a convenient variant thereof likeGaussian measureor the uniform measure on thesphere,Haar measureon aLie groupetc. for continuous distributions. 
The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring to DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} in words. Often it is referred to as the divergence between P and Q, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P from Q or as the divergence from Q to P. This reflects the asymmetry in Bayesian inference, which starts from a prior Q and updates to the posterior P. Another common way to refer to DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} is as the relative entropy of P with respect to Q or the information gain from P over Q. Kullback[3] gives the following example (Table 2.1, Example 2.1). Let P and Q be the distributions shown in the table and figure. P is the distribution on the left side of the figure, a binomial distribution with N=2{\displaystyle N=2} and p=0.4{\displaystyle p=0.4}. Q is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes x=0,1,2 (i.e. X={0,1,2}{\displaystyle {\mathcal {X}}=\{0,1,2\}}), each with probability p=1/3{\displaystyle p=1/3}. Relative entropies DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} and DKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)} are calculated as follows. 
This example uses thenatural logwith basee, designatedlnto get results innats(seeunits of information): DKL(P∥Q)=∑x∈XP(x)ln⁡P(x)Q(x)=925ln⁡9/251/3+1225ln⁡12/251/3+425ln⁡4/251/3=125(32ln⁡2+55ln⁡3−50ln⁡5)≈0.0852996,{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}} DKL(Q∥P)=∑x∈XQ(x)ln⁡Q(x)P(x)=13ln⁡1/39/25+13ln⁡1/312/25+13ln⁡1/34/25=13(−4ln⁡2−6ln⁡3+6ln⁡5)≈0.097455.{\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}} In the field of statistics, theNeyman–Pearson lemmastates that the most powerful way to distinguish between the two distributionsPandQbased on an observationY(drawn from one of them) is through the log of the ratio of their likelihoods:log⁡P(Y)−log⁡Q(Y){\displaystyle \log P(Y)-\log Q(Y)}. The KL divergence is the expected value of this statistic ifYis actually drawn fromP. Kullback motivated the statistic as an expected log likelihood ratio.[15] In the context ofcoding theory,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}can be constructed by measuring the expected number of extrabitsrequired tocodesamples fromPusing a code optimized forQrather than the code optimized forP. In the context ofmachine learning,DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}is often called theinformation gainachieved ifPwould be used instead ofQwhich is currently used. By analogy with information theory, it is called therelative entropyofPwith respect toQ. 
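Kullback's worked example can be checked numerically; this short sketch reproduces both values in nats.

```python
import math

P = [9/25, 12/25, 4/25]   # binomial with N = 2, p = 0.4
Q = [1/3, 1/3, 1/3]       # uniform on {0, 1, 2}

def kl(p, q):
    # D_KL in nats, via the natural log as in the example above.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

d_pq = kl(P, Q)
d_qp = kl(Q, P)
print(round(d_pq, 7), round(d_qp, 6))   # 0.0852996 0.097455
```

Note that the two directions give different values, a first concrete view of the asymmetry discussed above.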
Expressed in the language of Bayesian inference, DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P.[16] In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P. In order to find a distribution Q that is closest to P, we can minimize the KL divergence and compute an information projection. While it is a statistical distance, it is not a metric, the most familiar type of distance, but instead it is a divergence.[4] While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general DKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal DKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and the asymmetry is an important part of the geometry.[4] The infinitesimal form of relative entropy, specifically its Hessian, gives a metric tensor that equals the Fisher information metric; see § Fisher information metric. The Fisher information metric on a statistical manifold of probability distributions determines the natural gradient used in information-geometric optimization algorithms.[17] Its quantum analogue is the Fubini–Study metric.[18] Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation.[5] The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f-divergence. 
For probabilities over a finitealphabet, it is unique in being a member of both of these classes ofstatistical divergences. The application of Bregman divergence can be found in mirror descent.[19] Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a “horse race” in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds.[20]This is a special case of a much more general connection between financial returns and divergence measures.[21] Financial risks are connected toDKL{\displaystyle D_{\text{KL}}}via information geometry.[22]Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example.[23] In information theory, theKraft–McMillan theoremestablishes that any directly decodable coding scheme for coding a message to identify one valuexi{\displaystyle x_{i}}out of a set of possibilitiesXcan be seen as representing an implicit probability distributionq(xi)=2−ℓi{\displaystyle q(x_{i})=2^{-\ell _{i}}}overX, whereℓi{\displaystyle \ell _{i}}is the length of the code forxi{\displaystyle x_{i}}in bits. 
Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distributionQis used, compared to using a code based on the true distributionP: it is theexcessentropy. DKL(P∥Q)=∑x∈Xp(x)log⁡1q(x)−∑x∈Xp(x)log⁡1p(x)=H(P,Q)−H(P){\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}} whereH(P,Q){\displaystyle \mathrm {H} (P,Q)}is thecross entropyofQrelative toPandH(P){\displaystyle \mathrm {H} (P)}is theentropyofP(which is the same as the cross-entropy of P with itself). The relative entropyDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}can be thought of geometrically as astatistical distance, a measure of how far the distributionQis from the distributionP. Geometrically it is adivergence: an asymmetric, generalized form of squared distance. The cross-entropyH(P,Q){\displaystyle H(P,Q)}is itself such a measurement (formally aloss function), but it cannot be thought of as a distance, sinceH(P,P)=:H(P){\displaystyle H(P,P)=:H(P)}is not zero. This can be fixed by subtractingH(P){\displaystyle H(P)}to makeDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}agree more closely with our notion of distance, as theexcessloss. The resulting function is asymmetric, and while this can be symmetrized (see§ Symmetrised divergence), the asymmetric form is more useful. See§ Interpretationsfor more on the geometric interpretation. 
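The decomposition D_KL(P ∥ Q) = H(P, Q) − H(P) can be verified numerically, here in bits for the binomial-versus-uniform distributions of Kullback's example.

```python
import math

P = [9/25, 12/25, 4/25]
Q = [1/3, 1/3, 1/3]

cross_entropy = -sum(p * math.log2(q) for p, q in zip(P, Q))   # H(P, Q) in bits
entropy = -sum(p * math.log2(p) for p in P)                    # H(P) in bits
kl = sum(p * math.log2(p / q) for p, q in zip(P, Q))           # D_KL(P || Q) in bits

print(cross_entropy - entropy - kl)   # ~0: the decomposition holds
```

Since Q is uniform here, H(P, Q) equals log2 3, and the divergence is exactly the coding overhead beyond the entropy of P.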
Relative entropy relates to the "rate function" in the theory of large deviations.[24][25] Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy.[26] Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence. Relative entropy is zero if and only if P and Q coincide as measures; in particular, if P(dx)=p(x)μ(dx){\displaystyle P(dx)=p(x)\mu (dx)} and Q(dx)=q(x)μ(dx){\displaystyle Q(dx)=q(x)\mu (dx)}, then DKL(P∥Q)=0{\displaystyle D_{\text{KL}}(P\parallel Q)=0} implies p(x)=q(x){\displaystyle p(x)=q(x)} μ{\displaystyle \mu }-almost everywhere. The entropy H(P){\displaystyle \mathrm {H} (P)} thus sets a minimum value for the cross-entropy H(P,Q){\displaystyle \mathrm {H} (P,Q)}, the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P. Denote f(α):=DKL((1−α)Q+αP∥Q){\displaystyle f(\alpha ):=D_{\text{KL}}((1-\alpha )Q+\alpha P\parallel Q)} and note that DKL(P∥Q)=f(1){\displaystyle D_{\text{KL}}(P\parallel Q)=f(1)}. 
The first derivative off{\displaystyle f}may be derived and evaluated as followsf′(α)=∑x∈X(P(x)−Q(x))(log⁡((1−α)Q(x)+αP(x)Q(x))+1)=∑x∈X(P(x)−Q(x))log⁡((1−α)Q(x)+αP(x)Q(x))f′(0)=0{\displaystyle {\begin{aligned}f'(\alpha )&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\left(\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)+1\right)\\&=\sum _{x\in {\mathcal {X}}}(P(x)-Q(x))\log \left({\frac {(1-\alpha )Q(x)+\alpha P(x)}{Q(x)}}\right)\\f'(0)&=0\end{aligned}}}Further derivatives may be derived and evaluated as followsf″(α)=∑x∈X(P(x)−Q(x))2(1−α)Q(x)+αP(x)f″(0)=∑x∈X(P(x)−Q(x))2Q(x)f(n)(α)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))n((1−α)Q(x)+αP(x))n−1f(n)(0)=(−1)n(n−2)!∑x∈X(P(x)−Q(x))nQ(x)n−1{\displaystyle {\begin{aligned}f''(\alpha )&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{(1-\alpha )Q(x)+\alpha P(x)}}\\f''(0)&=\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{2}}{Q(x)}}\\f^{(n)}(\alpha )&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{\left((1-\alpha )Q(x)+\alpha P(x)\right)^{n-1}}}\\f^{(n)}(0)&=(-1)^{n}(n-2)!\sum _{x\in {\mathcal {X}}}{\frac {(P(x)-Q(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}Hence solving forDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}via the Taylor expansion off{\displaystyle f}about0{\displaystyle 0}evaluated atα=1{\displaystyle \alpha =1}yieldsDKL(P∥Q)=∑n=0∞f(n)(0)n!=∑n=2∞1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\\&=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. 
is a sufficient condition for convergence of the series by the following absolute convergence argument∑n=2∞|1n(n−1)∑x∈X(Q(x)−P(x))nQ(x)n−1|=∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)||1−P(x)Q(x)|n−1≤∑n=2∞1n(n−1)∑x∈X|Q(x)−P(x)|≤∑n=2∞1n(n−1)=1{\displaystyle {\begin{aligned}\sum _{n=2}^{\infty }\left\vert {\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}}\right\vert &=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \left\vert 1-{\frac {P(x)}{Q(x)}}\right\vert ^{n-1}\\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}\left\vert Q(x)-P(x)\right\vert \\&\leq \sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\\&=1\end{aligned}}}P≤2Q{\displaystyle P\leq 2Q}a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume thatP>2Q{\displaystyle P>2Q}with measure strictly greater than0{\displaystyle 0}. It then follows that there must exist some valuesε>0{\displaystyle \varepsilon >0},ρ>0{\displaystyle \rho >0}, andU<∞{\displaystyle U<\infty }such thatP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }andQ≤U{\displaystyle Q\leq U}with measureρ{\displaystyle \rho }. The previous proof of sufficiency demonstrated that the measure1−ρ{\displaystyle 1-\rho }component of the series whereP≤2Q{\displaystyle P\leq 2Q}is bounded, so we need only concern ourselves with the behavior of the measureρ{\displaystyle \rho }component of the series whereP≥2Q+ε{\displaystyle P\geq 2Q+\varepsilon }. The absolute value of then{\displaystyle n}th term of this component of the series is then lower bounded by1n(n−1)ρ(1+εU)n{\displaystyle {\frac {1}{n(n-1)}}\rho \left(1+{\frac {\varepsilon }{U}}\right)^{n}}, which is unbounded asn→∞{\displaystyle n\to \infty }, so the series diverges. The following result, due to Donsker and Varadhan,[29]is known asDonsker and Varadhan's variational formula. 
Theorem [Duality Formula for Variational Inference]—LetΘ{\displaystyle \Theta }be a set endowed with an appropriateσ{\displaystyle \sigma }-fieldF{\displaystyle {\mathcal {F}}}, and two probability measuresPandQ, which formulate twoprobability spaces(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}and(Θ,F,Q){\displaystyle (\Theta ,{\mathcal {F}},Q)}, withQ≪P{\displaystyle Q\ll P}. (Q≪P{\displaystyle Q\ll P}indicates thatQis absolutely continuous with respect toP.) Lethbe a real-valued integrablerandom variableon(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}. Then the following equality holds log⁡EP[exp⁡h]=supQ≪P⁡{EQ[h]−DKL(Q∥P)}.{\displaystyle \log E_{P}[\exp h]=\operatorname {sup} _{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.} Further, the supremum on the right-hand side is attained if and only if it holds Q(dθ)P(dθ)=exp⁡h(θ)EP[exp⁡h],{\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},} almost surely with respect to probability measureP, whereQ(dθ)P(dθ){\displaystyle {\frac {Q(d\theta )}{P(d\theta )}}}denotes the Radon-Nikodym derivative ofQwith respect toP. For a short proof assuming integrability ofexp⁡(h){\displaystyle \exp(h)}with respect toP, letQ∗{\displaystyle Q^{*}}haveP-densityexp⁡h(θ)EP[exp⁡h]{\displaystyle {\frac {\exp h(\theta )}{E_{P}[\exp h]}}}, i.e.Q∗(dθ)=exp⁡h(θ)EP[exp⁡h]P(dθ){\displaystyle Q^{*}(d\theta )={\frac {\exp h(\theta )}{E_{P}[\exp h]}}P(d\theta )}Then DKL(Q∥Q∗)−DKL(Q∥P)=−EQ[h]+log⁡EP[exp⁡h].{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})-D_{\text{KL}}(Q\parallel P)=-E_{Q}[h]+\log E_{P}[\exp h].} Therefore, EQ[h]−DKL(Q∥P)=log⁡EP[exp⁡h]−DKL(Q∥Q∗)≤log⁡EP[exp⁡h],{\displaystyle E_{Q}[h]-D_{\text{KL}}(Q\parallel P)=\log E_{P}[\exp h]-D_{\text{KL}}(Q\parallel Q^{*})\leq \log E_{P}[\exp h],} where the last inequality follows fromDKL(Q∥Q∗)≥0{\displaystyle D_{\text{KL}}(Q\parallel Q^{*})\geq 0}, for which equality occurs if and only ifQ=Q∗{\displaystyle Q=Q^{*}}. The conclusion follows. 
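On a finite space the Donsker–Varadhan duality formula can be checked directly; this sketch (the three-point measure P and the function h are arbitrary choices for illustration) verifies that the exponentially tilted measure Q* attains the supremum while any other Q falls short.

```python
import math

P = [0.2, 0.3, 0.5]    # arbitrary base measure on three points
h = [1.0, -0.5, 2.0]   # arbitrary bounded "random variable"

log_mgf = math.log(sum(p * math.exp(hi) for p, hi in zip(P, h)))   # log E_P[exp h]

def dv_objective(Q):
    """E_Q[h] - D_KL(Q || P); by the duality formula, never exceeds log_mgf."""
    return (sum(q * hi for q, hi in zip(Q, h))
            - sum(q * math.log(q / p) for q, p in zip(Q, P) if q > 0))

Z = sum(p * math.exp(hi) for p, hi in zip(P, h))
Q_star = [p * math.exp(hi) / Z for p, hi in zip(P, h)]   # exponentially tilted measure

print(log_mgf - dv_objective(Q_star))           # ~0: supremum attained at Q*
print(log_mgf - dv_objective([1/3, 1/3, 1/3]))  # > 0: any other Q falls short
```

The gap for a suboptimal Q equals D_KL(Q ∥ Q*), matching the short proof above.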
Suppose that we have twomultivariate normal distributions, with meansμ0,μ1{\displaystyle \mu _{0},\mu _{1}}and with (non-singular)covariance matricesΣ0,Σ1.{\displaystyle \Sigma _{0},\Sigma _{1}.}If the two distributions have the same dimension,k, then the relative entropy between the distributions is as follows:[30] DKL(N0∥N1)=12[tr⁡(Σ1−1Σ0)−k+(μ1−μ0)TΣ1−1(μ1−μ0)+ln⁡detΣ1detΣ0].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].} Thelogarithmin the last term must be taken to baseesince all terms apart from the last are base-elogarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured innats. Dividing the entire expression above byln⁡(2){\displaystyle \ln(2)}yields the divergence inbits. In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositionsL0,L1{\displaystyle L_{0},L_{1}}such thatΣ0=L0L0T{\displaystyle \Sigma _{0}=L_{0}L_{0}^{T}}andΣ1=L1L1T{\displaystyle \Sigma _{1}=L_{1}L_{1}^{T}}. 
Then withMandysolutions to the triangular linear systemsL1M=L0{\displaystyle L_{1}M=L_{0}}, andL1y=μ1−μ0{\displaystyle L_{1}y=\mu _{1}-\mu _{0}}, DKL(N0∥N1)=12(∑i,j=1k(Mij)2−k+|y|2+2∑i=1kln⁡(L1)ii(L0)ii).{\displaystyle D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).} A special case, and a common quantity invariational inference, is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): DKL(N((μ1,…,μk)T,diag⁡(σ12,…,σk2))∥N(0,I))=12∑i=1k[σi2+μi2−1−ln⁡(σi2)].{\displaystyle D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].} For two univariate normal distributionspandqthe above simplifies to[31]DKL(p∥q)=log⁡σ1σ0+σ02+(μ0−μ1)22σ12−12{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}} In the case of co-centered normal distributions withk=σ1/σ0{\displaystyle k=\sigma _{1}/\sigma _{0}}, this simplifies[32]to: DKL(p∥q)=log2⁡k+(k−2−1)/2/ln⁡(2)bits{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log _{2}k+(k^{-2}-1)/2/\ln(2)\mathrm {bits} } Consider two uniform distributions, with the support ofp=[A,B]{\displaystyle p=[A,B]}enclosed withinq=[C,D]{\displaystyle q=[C,D]}(C≤A<B≤D{\displaystyle C\leq A<B\leq D}). 
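The diagonal special case above, common in variational inference, is straightforward to implement; this sketch (written in terms of variances rather than standard deviations, with illustrative function names) also checks it against the univariate formula, which it must match coordinate by coordinate.

```python
import math

def kl_diag_vs_standard(mu, sigma2):
    """KL( N(mu, diag(sigma2)) || N(0, I) ), the closed form above."""
    return 0.5 * sum(s + m * m - 1 - math.log(s) for m, s in zip(mu, sigma2))

def kl_normal_1d(mu0, var0, mu1, var1):
    """KL between two univariate normals, from the simplified formula above."""
    return (math.log(math.sqrt(var1 / var0))
            + (var0 + (mu0 - mu1) ** 2) / (2 * var1) - 0.5)

mu, sigma2 = [0.5, -1.0, 2.0], [0.8, 1.5, 0.25]
total = kl_diag_vs_standard(mu, sigma2)
per_coordinate = sum(kl_normal_1d(m, s, 0.0, 1.0) for m, s in zip(mu, sigma2))
print(total, per_coordinate)   # equal: the diagonal case factorizes over coordinates
```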
Then the information gain is: DKL(p∥q)=log⁡D−CB−A{\displaystyle D_{\text{KL}}\left({\mathcal {p}}\parallel {\mathcal {q}}\right)=\log {\frac {D-C}{B-A}}} Intuitively,[32]the information gain to aktimes narrower uniform distribution containslog2⁡k{\displaystyle \log _{2}k}bits. This connects with the use of bits in computing, wherelog2⁡k{\displaystyle \log _{2}k}bits would be needed to identify one element of aklong stream. Theexponential familyof distribution is given by pX(x|θ)=h(x)exp⁡(θTT(x)−A(θ)){\displaystyle p_{X}(x|\theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right)} whereh(x){\displaystyle h(x)}is reference measure,T(x){\displaystyle T(x)}is sufficient statistics,θ{\displaystyle \theta }is canonical natural parameters, andA(θ){\displaystyle A(\theta )}is the log-partition function. The KL divergence between two distributionsp(x|θ1){\displaystyle p(x|\theta _{1})}andp(x|θ2){\displaystyle p(x|\theta _{2})}is given by[33] DKL(θ1∥θ2)=(θ1−θ2)Tμ1−A(θ1)+A(θ2){\displaystyle D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2})} whereμ1=Eθ1[T(X)]=∇A(θ1){\displaystyle \mu _{1}=E_{\theta _{1}}[T(X)]=\nabla A(\theta _{1})}is the mean parameter ofp(x|θ1){\displaystyle p(x|\theta _{1})}. For example, for the Poisson distribution with meanλ{\displaystyle \lambda }, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=log⁡λ{\displaystyle \theta =\log \lambda }, and log partition functionA(θ)=eθ{\displaystyle A(\theta )=e^{\theta }}. 
As such, the divergence between two Poisson distributions with meansλ1{\displaystyle \lambda _{1}}andλ2{\displaystyle \lambda _{2}}is DKL(λ1∥λ2)=λ1log⁡λ1λ2−λ1+λ2.{\displaystyle D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.} As another example, for a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}, the sufficient statisticsT(x)=x{\displaystyle T(x)=x}, the natural parameterθ=μ{\displaystyle \theta =\mu }, and log partition functionA(θ)=μ2/2{\displaystyle A(\theta )=\mu ^{2}/2}. Thus, the divergence between two normal distributionsN(μ1,1){\displaystyle N(\mu _{1},1)}andN(μ2,1){\displaystyle N(\mu _{2},1)}is DKL(μ1∥μ2)=(μ1−μ2)μ1−μ122+μ222=(μ2−μ1)22.{\displaystyle D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.} As final example, the divergence between a normal distribution with unit varianceN(μ,1){\displaystyle N(\mu ,1)}and a Poisson distribution with meanλ{\displaystyle \lambda }is DKL(μ∥λ)=(μ−log⁡λ)μ−μ22+λ.{\displaystyle D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .} While relative entropy is astatistical distance, it is not ametricon the space of probability distributions, but instead it is adivergence.[4]While metrics are symmetric and generalizelineardistance, satisfying thetriangle inequality, divergences are asymmetric in general and generalizesquareddistance, in some cases satisfying a generalizedPythagorean theorem. In generalDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}does not equalDKL(Q∥P){\displaystyle D_{\text{KL}}(Q\parallel P)}, and while this can be symmetrized (see§ Symmetrised divergence), the asymmetry is an important part of the geometry.[4] It generates atopologyon the space ofprobability distributions. 
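The closed-form Poisson divergence derived from the exponential-family expression can be checked against direct summation of the defining series; the truncation at k = 100 is an arbitrary cutoff that is more than sufficient for small means.

```python
import math

def kl_poisson_closed(l1, l2):
    # Closed form from the exponential-family expression above.
    return l1 * math.log(l1 / l2) - l1 + l2

def kl_poisson_direct(l1, l2, kmax=100):
    # Direct evaluation of sum_k p1(k) log( p1(k) / p2(k) ); the tail beyond
    # kmax is negligible for small means.
    def pmf(k, lam):
        return lam ** k * math.exp(-lam) / math.factorial(k)
    return sum(pmf(k, l1) * math.log(pmf(k, l1) / pmf(k, l2)) for k in range(kmax))

print(kl_poisson_closed(3.0, 5.0), kl_poisson_direct(3.0, 5.0))
```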
More concretely, if{P1,P2,…}{\displaystyle \{P_{1},P_{2},\ldots \}}is a sequence of distributions such that limn→∞DKL(Pn∥Q)=0,{\displaystyle \lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,} then it is said that Pn→DQ.{\displaystyle P_{n}\xrightarrow {D} \,Q.} Pinsker's inequalityentails that Pn→DP⇒Pn→TVP,{\displaystyle P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,} where the latter stands for the usual convergence intotal variation. Relative entropy is directly related to theFisher information metric. This can be made explicit as follows. Assume that the probability distributionsPandQare both parameterized by some (possibly multi-dimensional) parameterθ{\displaystyle \theta }. Consider then two close by values ofP=P(θ){\displaystyle P=P(\theta )}andQ=P(θ0){\displaystyle Q=P(\theta _{0})}so that the parameterθ{\displaystyle \theta }differs by only a small amount from the parameter valueθ0{\displaystyle \theta _{0}}. Specifically, up to first order one has (using theEinstein summation convention)P(θ)=P(θ0)+ΔθjPj(θ0)+⋯{\displaystyle P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots } withΔθj=(θ−θ0)j{\displaystyle \Delta \theta _{j}=(\theta -\theta _{0})_{j}}a small change ofθ{\displaystyle \theta }in thejdirection, andPj(θ0)=∂P∂θj(θ0){\displaystyle P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})}the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 forP=Q{\displaystyle P=Q}, i.e.θ=θ0{\displaystyle \theta =\theta _{0}}, it changes only tosecondorder in the small parametersΔθj{\displaystyle \Delta \theta _{j}}. 
More formally, as for any minimum, the first derivatives of the divergence vanish ∂∂θj|θ=θ0DKL(P(θ)∥P(θ0))=0,{\displaystyle \left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,} and by theTaylor expansionone has up to second order DKL(P(θ)∥P(θ0))=12ΔθjΔθkgjk(θ0)+⋯{\displaystyle D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots } where theHessian matrixof the divergence gjk(θ0)=∂2∂θj∂θk|θ=θ0DKL(P(θ)∥P(θ0)){\displaystyle g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))} must bepositive semidefinite. Lettingθ0{\displaystyle \theta _{0}}vary (and dropping the subindex 0) the Hessiangjk(θ){\displaystyle g_{jk}(\theta )}defines a (possibly degenerate)Riemannian metricon theθparameter space, called the Fisher information metric. 
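The statement that the Hessian of the divergence equals the Fisher information metric can be checked numerically for a one-parameter Bernoulli family, whose Fisher information is 1/(θ(1−θ)); the ratio of the divergence to its quadratic approximation tends to 1 as the perturbation shrinks.

```python
import math

def kl_bernoulli(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta0 = 0.3
fisher = 1 / (theta0 * (1 - theta0))   # Fisher information of Bernoulli(theta0)

for delta in (1e-1, 1e-2, 1e-3):
    d = kl_bernoulli(theta0 + delta, theta0)
    quad = 0.5 * fisher * delta ** 2   # second-order expansion with the Fisher metric
    print(delta, d / quad)             # ratio approaches 1 as delta shrinks
```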
Whenp(x,ρ){\displaystyle p_{(x,\rho )}}satisfies the following regularity conditions: ∂log⁡(p)∂ρ,∂2log⁡(p)∂ρ2,∂3log⁡(p)∂ρ3{\displaystyle {\frac {\partial \log(p)}{\partial \rho }},{\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}}exist,|∂p∂ρ|<F(x):∫x=0∞F(x)dx<∞,|∂2p∂ρ2|<G(x):∫x=0∞G(x)dx<∞|∂3log⁡(p)∂ρ3|<H(x):∫x=0∞p(x,0)H(x)dx<ξ<∞{\displaystyle {\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty \\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty \end{aligned}}} whereξis independent ofρ∫x=0∞∂p(x,ρ)∂ρ|ρ=0dx=∫x=0∞∂2p(x,ρ)∂ρ2|ρ=0dx=0{\displaystyle \left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0} then:D(p(x,0)∥p(x,ρ))=cρ22+O(ρ3)asρ→0.{\displaystyle {\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right){\text{ as }}\rho \to 0.} Another information-theoretic metric isvariation of information, which is roughly a symmetrization ofconditional entropy. It is a metric on the set ofpartitionsof a discreteprobability space. MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. 
Theself-information, also known as theinformation contentof a signal, random variable, oreventis defined as the negative logarithm of theprobabilityof the given outcome occurring. When applied to adiscrete random variable, the self-information can be represented as[citation needed] I⁡(m)=DKL(δim∥{pi}),{\displaystyle \operatorname {\operatorname {I} } (m)=D_{\text{KL}}\left(\delta _{\text{im}}\parallel \{p_{i}\}\right),} is the relative entropy of the probability distributionP(i){\displaystyle P(i)}from aKronecker deltarepresenting certainty thati=m{\displaystyle i=m}— i.e. the number of extra bits that must be transmitted to identifyiif only the probability distributionP(i){\displaystyle P(i)}is available to the receiver, not the fact thati=m{\displaystyle i=m}. Themutual information, I⁡(X;Y)=DKL(P(X,Y)∥P(X)P(Y))=EX⁡{DKL(P(Y∣X)∥P(Y))}=EY⁡{DKL(P(X∣Y)∥P(X))}{\displaystyle {\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))\\[5pt]&=\operatorname {E} _{X}\{D_{\text{KL}}(P(Y\mid X)\parallel P(Y))\}\\[5pt]&=\operatorname {E} _{Y}\{D_{\text{KL}}(P(X\mid Y)\parallel P(X))\}\end{aligned}}} is the relative entropy of thejoint probability distributionP(X,Y){\displaystyle P(X,Y)}from the productP(X)P(Y){\displaystyle P(X)P(Y)}of the twomarginal probability distributions— i.e. the expected number of extra bits that must be transmitted to identifyXandYif they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probabilityP(X,Y){\displaystyle P(X,Y)}isknown, it is the expected number of extra bits that must on average be sent to identifyYif the value ofXis not already known to the receiver. 
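The mutual-information identity above is easy to check numerically. In this Python sketch the 2×2 joint distribution is made up for illustration; I(X; Y) is computed directly as the relative entropy of the joint from the product of its marginals:

```python
import math

# Joint distribution P(X, Y) over a 2x2 alphabet (rows: x, columns: y).
joint = [[0.3, 0.2],
         [0.1, 0.4]]

px = [sum(row) for row in joint]                              # marginal of X
py = [sum(joint[i][j] for i in range(2)) for j in range(2)]   # marginal of Y

# I(X; Y) = D_KL( P(X, Y) || P(X) P(Y) ), in nats here.
mi = sum(joint[i][j] * math.log(joint[i][j] / (px[i] * py[j]))
         for i in range(2) for j in range(2))

assert mi >= 0.0   # mutual information is a relative entropy, hence nonnegative
```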
TheShannon entropy, H(X)=E⁡[IX⁡(x)]=log⁡N−DKL(pX(x)∥PU(X)){\displaystyle {\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}{\left(p_{X}(x)\parallel P_{U}(X)\right)}\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the uniform distribution on therandom variatesofX,PU(X){\displaystyle P_{U}(X)}, from the true distributionP(X){\displaystyle P(X)}— i.e.lessthe expected number of bits saved, which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the true distributionP(X){\displaystyle P(X)}. This definition of Shannon entropy forms the basis ofE.T. Jaynes's alternative generalization to continuous distributions, thelimiting density of discrete points(as opposed to the usualdifferential entropy), which defines the continuous entropy aslimN→∞HN(X)=log⁡N−∫p(x)log⁡p(x)m(x)dx,{\displaystyle \lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,}which is equivalent to:log⁡(N)−DKL(p(x)||m(x)){\displaystyle \log(N)-D_{\text{KL}}(p(x)||m(x))} Theconditional entropy[34], H(X∣Y)=log⁡N−DKL(P(X,Y)∥PU(X)P(Y))=log⁡N−DKL(P(X,Y)∥P(X)P(Y))−DKL(P(X)∥PU(X))=H(X)−I⁡(X;Y)=log⁡N−EY⁡[DKL(P(X∣Y)∥PU(X))]{\displaystyle {\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\[5pt]&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\[5pt]&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\[5pt]&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}} is the number of bits which would have to be transmitted to identifyXfromNequally likely possibilities,lessthe relative entropy of the product distributionPU(X)P(Y){\displaystyle P_{U}(X)P(Y)}from the true joint distributionP(X,Y){\displaystyle P(X,Y)}— i.e.lessthe 
expected number of bits saved which would have had to be sent if the value ofXwere coded according to the uniform distributionPU(X){\displaystyle P_{U}(X)}rather than the conditional distributionP(X|Y){\displaystyle P(X|Y)}ofXgivenY. When we have a set of possible events, coming from the distributionp, we can encode them (with alossless data compression) usingentropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length,prefix-free code(e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distributionpin advance, we can devise an encoding that would be optimal (e.g.: usingHuffman coding). Meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled fromp), which will be equal toShannon's Entropyofp(denoted asH(p){\displaystyle \mathrm {H} (p)}). However, if we use a different probability distribution (q) when creating the entropy encoding scheme, then a larger number ofbitswill be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by thecross entropybetweenpandq. Thecross entropybetween twoprobability distributions(pandq) measures the average number ofbitsneeded to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distributionq, rather than the "true" distributionp. The cross entropy for two distributionspandqover the sameprobability spaceis thus defined as follows. H(p,q)=Ep⁡[−log⁡q]=H(p)+DKL(p∥q).{\displaystyle \mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).} For explicit derivation of this, see theMotivationsection above. 
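The decomposition H(p, q) = H(p) + D_KL(p ∥ q) can be verified on the (1/2, 1/4, 1/4) example from the text, with a deliberately mismatched coding distribution q; the KL term is exactly the average coding overhead in bits:

```python
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # true distribution: codeword lengths 1, 2, 2 bits are optimal
q = [0.25, 0.25, 0.5]   # mismatched distribution used to build the code

# H(p, q) = H(p) + D_KL(p || q): the KL term is the average overhead in bits.
assert abs(cross_entropy(p, q) - (entropy(p) + kl(p, q))) < 1e-12
assert entropy(p) == 1.5   # average length of the optimal code (0, 10, 11)
```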
Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyondH(p){\displaystyle \mathrm {H} (p)}) for encoding the events because of usingqfor constructing the encoding scheme instead ofp. InBayesian statistics, relative entropy can be used as a measure of the information gain in moving from aprior distributionto aposterior distribution:p(x)→p(x∣I){\displaystyle p(x)\to p(x\mid I)}. If some new factY=y{\displaystyle Y=y}is discovered, it can be used to update the posterior distribution forXfromp(x∣I){\displaystyle p(x\mid I)}to a new posterior distributionp(x∣y,I){\displaystyle p(x\mid y,I)}usingBayes' theorem: p(x∣y,I)=p(y∣x,I)p(x∣I)p(y∣I){\displaystyle p(x\mid y,I)={\frac {p(y\mid x,I)p(x\mid I)}{p(y\mid I)}}} This distribution has a newentropy: H(p(x∣y,I))=−∑xp(x∣y,I)log⁡p(x∣y,I),{\displaystyle \mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),} which may be less than or greater than the original entropyH(p(x∣I)){\displaystyle \mathrm {H} (p(x\mid I))}. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based onp(x∣I){\displaystyle p(x\mid I)}instead of a new code based onp(x∣y,I){\displaystyle p(x\mid y,I)}would have added an expected number of bits: DKL(p(x∣y,I)∥p(x∣I))=∑xp(x∣y,I)log⁡p(x∣y,I)p(x∣I){\displaystyle D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}} to the message length. This therefore represents the amount of useful information, or information gain, aboutX, that has been learned by discoveringY=y{\displaystyle Y=y}. If a further piece of data,Y2=y2{\displaystyle Y_{2}=y_{2}}, subsequently comes in, the probability distribution forxcan be updated further, to give a new best guessp(x∣y1,y2,I){\displaystyle p(x\mid y_{1},y_{2},I)}. 
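The Bayesian update and its information gain can be sketched numerically. The prior and likelihood below are hypothetical, chosen only to illustrate the computation of D_KL(posterior ∥ prior):

```python
import math

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Prior over three hypotheses for X and a likelihood p(y | x) for one observed y.
prior = [1/3, 1/3, 1/3]
likelihood = [0.8, 0.15, 0.05]   # hypothetical values of p(y = observed | x)

# Bayes' theorem: posterior proportional to likelihood times prior.
evidence = sum(l * p for l, p in zip(likelihood, prior))
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]

# Information gain about X from observing y, in bits.
gain = kl(posterior, prior)
assert gain > 0
```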
If one reinvestigates the information gain for usingp(x∣y1,I){\displaystyle p(x\mid y_{1},I)}rather thanp(x∣I){\displaystyle p(x\mid I)}, it turns out that it may be either greater or less than previously estimated: ∑xp(x∣y1,y2,I)log⁡p(x∣y1,y2,I)p(x∣I){\displaystyle \sum _{x}p(x\mid y_{1},y_{2},I)\log {\frac {p(x\mid y_{1},y_{2},I)}{p(x\mid I)}}}may be ≤ or > than∑xp(x∣y1,I)log⁡p(x∣y1,I)p(x∣I){\textstyle \sum _{x}p(x\mid y_{1},I)\log {\frac {p(x\mid y_{1},I)}{p(x\mid I)}}} and so the combined information gain doesnotobey the triangle inequality: DKL(p(x∣y1,y2,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid I){\big )}}may be <, = or > thanDKL(p(x∣y1,y2,I)∥p(x∣y1,I))+DKL(p(x∣y1,I)∥p(x∣I)){\displaystyle D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid y_{1},I){\big )}+D_{\text{KL}}{\big (}p(x\mid y_{1},I)\parallel p(x\mid I){\big )}} All one can say is that onaverage, averaging usingp(y2∣y1,x,I){\displaystyle p(y_{2}\mid y_{1},x,I)}, the two sides will average out. A common goal inBayesian experimental designis to maximise the expected relative entropy between the prior and the posterior.[35]When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is calledBayes d-optimal. Relative entropyDKL(p(x∣H1)∥p(x∣H0)){\textstyle D_{\text{KL}}{\bigl (}p(x\mid H_{1})\parallel p(x\mid H_{0}){\bigr )}}can also be interpreted as the expecteddiscrimination informationforH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}: the mean information per sample for discriminating in favor of a hypothesisH1{\displaystyle H_{1}}against a hypothesisH0{\displaystyle H_{0}}, when hypothesisH1{\displaystyle H_{1}}is true.[36]Another name for this quantity, given to it byI. J. Good, is the expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}to be expected from each sample. 
The expected weight of evidence forH1{\displaystyle H_{1}}overH0{\displaystyle H_{0}}isnotthe same as the information gain expected per sample about the probability distributionp(H){\displaystyle p(H)}of the hypotheses, DKL(p(x∣H1)∥p(x∣H0))≠IG=DKL(p(H∣x)∥p(H∣I)).{\displaystyle D_{\text{KL}}(p(x\mid H_{1})\parallel p(x\mid H_{0}))\neq IG=D_{\text{KL}}(p(H\mid x)\parallel p(H\mid I)).} Either of the two quantities can be used as autility functionin Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies. On the entropy scale ofinformation gainthere is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on thelogitscale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, theRiemann hypothesisis correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales ofloss functionfor uncertainty arebothuseful, according to how well each reflects the particular circumstances of the problem in question. The idea of relative entropy as discrimination information led Kullback to propose the Principle ofMinimum Discrimination Information(MDI): given new facts, a new distributionfshould be chosen which is as hard to discriminate from the original distributionf0{\displaystyle f_{0}}as possible; so that the new data produces as small an information gainDKL(f∥f0){\displaystyle D_{\text{KL}}(f\parallel f_{0})}as possible. 
For example, if one had a prior distributionp(x,a){\displaystyle p(x,a)}overxanda, and subsequently learnt the true distribution ofawasu(a){\displaystyle u(a)}, then the relative entropy between the new joint distribution forxanda,q(x∣a)u(a){\displaystyle q(x\mid a)u(a)}, and the earlier prior distribution would be: DKL(q(x∣a)u(a)∥p(x,a))=Eu(a)⁡{DKL(q(x∣a)∥p(x∣a))}+DKL(u(a)∥p(a)),{\displaystyle D_{\text{KL}}(q(x\mid a)u(a)\parallel p(x,a))=\operatorname {E} _{u(a)}\left\{D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))\right\}+D_{\text{KL}}(u(a)\parallel p(a)),} i.e. the sum of the relative entropy ofp(a){\displaystyle p(a)}the prior distribution forafrom the updated distributionu(a){\displaystyle u(a)}, plus the expected value (using the probability distributionu(a){\displaystyle u(a)}) of the relative entropy of the prior conditional distributionp(x∣a){\displaystyle p(x\mid a)}from the new conditional distributionq(x∣a){\displaystyle q(x\mid a)}. (Note that often the later expected value is called theconditional relative entropy(orconditional Kullback–Leibler divergence) and denoted byDKL(q(x∣a)∥p(x∣a)){\displaystyle D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))}[3][34]) This is minimized ifq(x∣a)=p(x∣a){\displaystyle q(x\mid a)=p(x\mid a)}over the whole support ofu(a){\displaystyle u(a)}; and we note that this result incorporates Bayes' theorem, if the new distributionu(a){\displaystyle u(a)}is in fact a δ function representing certainty thatahas one particular value. MDI can be seen as an extension ofLaplace'sPrinciple of Insufficient Reason, and thePrinciple of Maximum EntropyofE.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (seedifferential entropy), but the relative entropy continues to be just as relevant. In the engineering literature, MDI is sometimes called thePrinciple of Minimum Cross-Entropy(MCE) orMinxentfor short. 
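The chain-rule decomposition at the start of this passage holds exactly and can be checked term by term. All distributions in this Python sketch are made up for illustration:

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Prior joint p(x, a) over two values of a (rows) and two values of x (columns).
p_joint = [[0.2, 0.3],
           [0.1, 0.4]]
p_a = [sum(row) for row in p_joint]   # prior marginal p(a)
p_x_given_a = [[v / p_a[i] for v in row] for i, row in enumerate(p_joint)]

# Updated marginal u(a) and updated conditionals q(x | a).
u_a = [0.7, 0.3]
q_x_given_a = [[0.5, 0.5], [0.25, 0.75]]

# New joint q(x | a) u(a), laid out the same way.
new_joint = [[q_x_given_a[i][j] * u_a[i] for j in range(2)] for i in range(2)]

# D_KL(q(x|a)u(a) || p(x,a)) = E_{u(a)}[ D_KL(q(x|a) || p(x|a)) ] + D_KL(u(a) || p(a))
lhs = sum(new_joint[i][j] * math.log(new_joint[i][j] / p_joint[i][j])
          for i in range(2) for j in range(2))
rhs = (sum(u_a[i] * kl(q_x_given_a[i], p_x_given_a[i]) for i in range(2))
       + kl(u_a, p_a))

assert abs(lhs - rhs) < 1e-12
```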
Minimising relative entropy frommtopwith respect tomis equivalent to minimizing the cross-entropy ofpandm, since H(p,m)=H(p)+DKL(p∥m),{\displaystyle \mathrm {H} (p,m)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel m),} which is appropriate if one is trying to choose an adequate approximation top. However, this is just as oftennotthe task one is trying to achieve. Instead, just as often it ismthat is some fixed prior reference measure, andpthat one is attempting to optimise by minimisingDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to beDKL(p∥m){\displaystyle D_{\text{KL}}(p\parallel m)}, rather thanH(p,m){\displaystyle \mathrm {H} (p,m)}[citation needed]. Surprisals[37]add where probabilities multiply. The surprisal for an event of probabilitypis defined ass=−kln⁡p{\displaystyle s=-k\ln p}. Ifkis{1,1/ln⁡2,1.38×10−23}{\displaystyle \left\{1,1/\ln 2,1.38\times 10^{-23}\right\}}then surprisal is in{{\displaystyle \{}nats, bits, orJ/K}{\displaystyle J/K\}}so that, for instance, there areNbits of surprisal for landing all "heads" on a toss ofNcoins. Best-guess states (e.g. for atoms in a gas) are inferred by maximizing theaverage surprisalS(entropy) for a given set of control parameters (like pressurePor volumeV). This constrainedentropy maximization, both classically[38]and quantum mechanically,[39]minimizesGibbsavailability in entropy units[40]A≡−kln⁡Z{\displaystyle A\equiv -k\ln Z}whereZis a constrained multiplicity orpartition function. When temperatureTis fixed, free energy (T×A{\displaystyle T\times A}) is also minimized. Thus ifT,V{\displaystyle T,V}and number of moleculesNare constant, theHelmholtz free energyF≡U−TS{\displaystyle F\equiv U-TS}(whereUis energy andSis entropy) is minimized as a system "equilibrates." 
IfTandPare held constant (say during processes in your body), theGibbs free energyG=U+PV−TS{\displaystyle G=U+PV-TS}is minimized instead. The change in free energy under these conditions is a measure of availableworkthat might be done in the process. Thus available work for an ideal gas at constant temperatureTo{\displaystyle T_{o}}and pressurePo{\displaystyle P_{o}}isW=ΔG=NkToΘ(V/Vo){\displaystyle W=\Delta G=NkT_{o}\Theta (V/V_{o})}whereVo=NkTo/Po{\displaystyle V_{o}=NkT_{o}/P_{o}}andΘ(x)=x−1−ln⁡x≥0{\displaystyle \Theta (x)=x-1-\ln x\geq 0}(see alsoGibbs inequality). More generally[41]thework availablerelative to some ambient is obtained by multiplying ambient temperatureTo{\displaystyle T_{o}}by relative entropy ornet surprisalΔI≥0,{\displaystyle \Delta I\geq 0,}defined as the average value ofkln⁡(p/po){\displaystyle k\ln(p/p_{o})}wherepo{\displaystyle p_{o}}is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values ofVo{\displaystyle V_{o}}andTo{\displaystyle T_{o}}is thusW=ToΔI{\displaystyle W=T_{o}\Delta I}, where relative entropy ΔI=Nk[Θ(VVo)+32Θ(TTo)].{\displaystyle \Delta I=Nk\left[\Theta {\left({\frac {V}{V_{o}}}\right)}+{\frac {3}{2}}\Theta {\left({\frac {T}{T_{o}}}\right)}\right].} The resulting contours of constant relative entropy, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here.[42]Thus relative entropy measures thermodynamic availability in bits. 
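The nonnegativity of the available work rests on Gibbs' inequality, Θ(x) = x − 1 − ln x ≥ 0. A small sketch (the V/V₀ ratio is an arbitrary illustrative value) confirms this and evaluates the monatomic-ideal-gas net surprisal in units of Nk:

```python
import math

def big_theta(x):
    """Theta(x) = x - 1 - ln(x), the nonnegative factor in the available-work formulas."""
    return x - 1 - math.log(x)

# Gibbs' inequality: Theta(x) >= 0 with equality only at x = 1, so the net
# surprisal Delta I, and hence W = T_o * Delta I, is nonnegative.
assert big_theta(1.0) == 0.0
for x in (0.25, 0.5, 2.0, 4.0):
    assert big_theta(x) > 0

# Delta I / (N k) for a monatomic ideal gas at V = 2 V_o and T = T_o:
delta_i_per_Nk = big_theta(2.0) + 1.5 * big_theta(1.0)
assert abs(delta_i_per_Nk - (1 - math.log(2))) < 1e-12
```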
Fordensity matricesPandQon aHilbert space, thequantum relative entropyfromQtoPis defined to be DKL(P∥Q)=Tr⁡(P(log⁡P−log⁡Q)).{\displaystyle D_{\text{KL}}(P\parallel Q)=\operatorname {Tr} (P(\log P-\log Q)).} Inquantum information sciencethe minimum ofDKL(P∥Q){\displaystyle D_{\text{KL}}(P\parallel Q)}over all separable statesQcan also be used as a measure ofentanglementin the stateP. Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describesdistance to equilibriumor (when multiplied by ambient temperature) the amount ofavailable work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words,how much the model has yet to learn. Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting astatistical modelviaAkaike information criterionare particularly well described in papers[43]and a book[44]by Burnham and Anderson. In a nutshell the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like themean squared deviation) . Estimates of such divergence for models that share the same additive term can in turn be used to select among models. 
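The quantum relative entropy Tr(P(log P − log Q)) can be evaluated by eigendecomposition. This sketch assumes NumPy is available; the two qubit density matrices are made up for illustration (both are Hermitian, trace one, and full rank, so the matrix logarithms exist):

```python
import numpy as np

def mat_log(rho):
    """Matrix logarithm of a positive-definite Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(w)) @ v.conj().T

def quantum_relative_entropy(p, q):
    """D_KL(P || Q) = Tr(P (log P - log Q)) for density matrices P and Q."""
    return np.trace(p @ (mat_log(p) - mat_log(q))).real

# Two full-rank qubit density matrices; Q is the maximally mixed state.
P = np.array([[0.7, 0.2], [0.2, 0.3]])
Q = np.array([[0.5, 0.0], [0.0, 0.5]])

d = quantum_relative_entropy(P, Q)
assert d >= 0                                       # Klein's inequality
assert abs(quantum_relative_entropy(P, P)) < 1e-12  # zero when P equals Q
```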
When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such asmaximum likelihoodandmaximum spacingestimators.[citation needed] Kullback & Leibler (1951)also considered the symmetrized function:[6] DKL(P∥Q)+DKL(Q∥P){\displaystyle D_{\text{KL}}(P\parallel Q)+D_{\text{KL}}(Q\parallel P)} which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see§ Etymologyfor the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used byHarold Jeffreysin 1948;[7]it is accordingly called theJeffreys divergence. This quantity has sometimes been used forfeature selectioninclassificationproblems, wherePandQare the conditionalpdfsof a feature under two different classes. In the Banking and Finance industries, this quantity is referred to asPopulation Stability Index(PSI), and is used to assess distributional shifts in model features through time. 
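The Jeffreys divergence, and with it the Population Stability Index, is a short computation. The binned score distributions below are hypothetical, standing in for a feature's distribution at model build time versus today:

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q):
    """Symmetrized divergence D_KL(P || Q) + D_KL(Q || P), the Jeffreys divergence;
    in credit-risk monitoring the same quantity is the Population Stability Index."""
    return kl(p, q) + kl(q, p)

# Hypothetical binned score distributions: at model build time vs. observed now.
expected = [0.25, 0.35, 0.40]
observed = [0.20, 0.30, 0.50]

psi = jeffreys(observed, expected)
assert psi >= 0
assert abs(jeffreys(expected, observed) - psi) < 1e-12   # symmetric by construction
```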
An alternative is given via theλ{\displaystyle \lambda }-divergence, Dλ(P∥Q)=λDKL(P∥λP+(1−λ)Q)+(1−λ)DKL(Q∥λP+(1−λ)Q),{\displaystyle D_{\lambda }(P\parallel Q)=\lambda D_{\text{KL}}(P\parallel \lambda P+(1-\lambda )Q)+(1-\lambda )D_{\text{KL}}(Q\parallel \lambda P+(1-\lambda )Q),} which can be interpreted as the expected information gain aboutXfrom discovering which probability distributionXis drawn from,PorQ, if they currently have probabilitiesλ{\displaystyle \lambda }and1−λ{\displaystyle 1-\lambda }respectively.[clarification needed][citation needed] The valueλ=0.5{\displaystyle \lambda =0.5}gives theJensen–Shannon divergence, defined by DJS=12DKL(P∥M)+12DKL(Q∥M){\displaystyle D_{\text{JS}}={\tfrac {1}{2}}D_{\text{KL}}(P\parallel M)+{\tfrac {1}{2}}D_{\text{KL}}(Q\parallel M)} whereMis the average of the two distributions, M=12(P+Q).{\displaystyle M={\tfrac {1}{2}}\left(P+Q\right).} We can also interpretDJS{\displaystyle D_{\text{JS}}}as the capacity of a noisy information channel with two inputs giving the output distributionsPandQ. The Jensen–Shannon divergence, like allf-divergences, islocallyproportional to theFisher information metric. It is similar to theHellinger metric(in the sense that it induces the same affine connection on astatistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.[45][46] There are many other important measures ofprobability distance. Some of these are particularly connected with relative entropy. 
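The Jensen–Shannon divergence defined above is symmetric and bounded, reaching one bit for distributions with disjoint support, consistent with its channel-capacity interpretation. A minimal sketch:

```python
import math

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: the lambda = 1/2 case of the lambda-divergence."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]   # mixture M = (P + Q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [1.0, 0.0]
q = [0.0, 1.0]

# Disjoint supports give the maximum of 1 bit; JS is symmetric by construction.
assert abs(js(p, q) - 1.0) < 1e-12
assert abs(js(p, q) - js(q, p)) < 1e-12
```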
Other notable measures of distance include the Hellinger distance, histogram intersection, Chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance.[49] Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_distance
Instatistics,maximum likelihood estimation(MLE) is a method ofestimatingtheparametersof an assumedprobability distribution, given some observed data. This is achieved bymaximizingalikelihood functionso that, under the assumedstatistical model, theobserved datais most probable. Thepointin theparameter spacethat maximizes the likelihood function is called the maximum likelihood estimate.[1]The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means ofstatistical inference.[2][3][4] If the likelihood function isdifferentiable, thederivative testfor finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, theordinary least squaresestimator for alinear regressionmodel maximizes the likelihood when the random errors are assumed to havenormaldistributions with the same variance.[5] From the perspective ofBayesian inference, MLE is generally equivalent tomaximum a posteriori (MAP) estimationwith aprior distributionthat isuniformin the region of interest. Infrequentist inference, MLE is a special case of anextremum estimator, with the objective function being the likelihood. We model a set of observations as a randomsamplefrom an unknownjoint probability distributionwhich is expressed in terms of a set ofparameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vectorθ=[θ1,θ2,…,θk]T{\displaystyle \;\theta =\left[\theta _{1},\,\theta _{2},\,\ldots ,\,\theta _{k}\right]^{\mathsf {T}}\;}so that this distribution falls within aparametric family{f(⋅;θ)∣θ∈Θ},{\displaystyle \;\{f(\cdot \,;\theta )\mid \theta \in \Theta \}\;,}whereΘ{\displaystyle \,\Theta \,}is called theparameter space, a finite-dimensional subset ofEuclidean space. 
Evaluating the joint density at the observed data sampley=(y1,y2,…,yn){\displaystyle \;\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})\;}gives a real-valued function,Ln(θ)=Ln(θ;y)=fn(y;θ),{\displaystyle {\mathcal {L}}_{n}(\theta )={\mathcal {L}}_{n}(\theta ;\mathbf {y} )=f_{n}(\mathbf {y} ;\theta )\;,}which is called thelikelihood function. Forindependent random variables,fn(y;θ){\displaystyle f_{n}(\mathbf {y} ;\theta )}will be the product of univariatedensity functions:fn(y;θ)=∏k=1nfkunivar(yk;θ).{\displaystyle f_{n}(\mathbf {y} ;\theta )=\prod _{k=1}^{n}\,f_{k}^{\mathsf {univar}}(y_{k};\theta )~.} The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space,[6]that is:θ^=argmaxθ∈ΘLn(θ;y).{\displaystyle {\hat {\theta }}={\underset {\theta \in \Theta }{\operatorname {arg\;max} }}\,{\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.} Intuitively, this selects the parameter values that make the observed data most probable. The specific valueθ^=θ^n(y)∈Θ{\displaystyle ~{\hat {\theta }}={\hat {\theta }}_{n}(\mathbf {y} )\in \Theta ~}that maximizes the likelihood functionLn{\displaystyle \,{\mathcal {L}}_{n}\,}is called the maximum likelihood estimate. Further, if the functionθ^n:Rn→Θ{\displaystyle \;{\hat {\theta }}_{n}:\mathbb {R} ^{n}\to \Theta \;}so defined ismeasurable, then it is called the maximum likelihoodestimator. It is generally a function defined over thesample space, i.e. taking a given sample as its argument. Asufficient but not necessarycondition for its existence is for the likelihood function to becontinuousover a parameter spaceΘ{\displaystyle \,\Theta \,}that iscompact.[7]For anopenΘ{\displaystyle \,\Theta \,}the likelihood function may increase without ever reaching a supremum value. 
In practice, it is often convenient to work with thenatural logarithmof the likelihood function, called thelog-likelihood:ℓ(θ;y)=ln⁡Ln(θ;y).{\displaystyle \ell (\theta \,;\mathbf {y} )=\ln {\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}Since the logarithm is amonotonic function, the maximum ofℓ(θ;y){\displaystyle \;\ell (\theta \,;\mathbf {y} )\;}occurs at the same value ofθ{\displaystyle \theta }as does the maximum ofLn.{\displaystyle \,{\mathcal {L}}_{n}~.}[8]Ifℓ(θ;y){\displaystyle \ell (\theta \,;\mathbf {y} )}isdifferentiableinΘ,{\displaystyle \,\Theta \,,}sufficient conditionsfor the occurrence of a maximum (or a minimum) are∂ℓ∂θ1=0,∂ℓ∂θ2=0,…,∂ℓ∂θk=0,{\displaystyle {\frac {\partial \ell }{\partial \theta _{1}}}=0,\quad {\frac {\partial \ell }{\partial \theta _{2}}}=0,\quad \ldots ,\quad {\frac {\partial \ell }{\partial \theta _{k}}}=0~,}known as the likelihood equations. For some models, these equations can be explicitly solved forθ^,{\displaystyle \,{\widehat {\theta \,}}\,,}but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found vianumerical optimization. 
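As a concrete instance where the likelihood equations solve in closed form, consider i.i.d. data modeled as exponential with unknown rate (the sample below is made up for illustration). Setting the derivative of the log-likelihood to zero gives the rate estimate n/Σyᵢ:

```python
import math

# A hypothetical i.i.d. sample, modeled as Exponential(rate).
y = [0.8, 1.3, 0.4, 2.1, 0.9, 1.6]

def log_likelihood(rate):
    # ell(rate; y) = n * ln(rate) - rate * sum(y) for the exponential density.
    return len(y) * math.log(rate) - rate * sum(y)

# The likelihood equation d(ell)/d(rate) = n/rate - sum(y) = 0 gives:
mle = len(y) / sum(y)

# The root of the likelihood equation is indeed a maximum: nearby rates do worse.
for rate in (0.5 * mle, 0.9 * mle, 1.1 * mle, 2.0 * mle):
    assert log_likelihood(rate) < log_likelihood(mle)
```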
Another problem is that in finite samples, there may exist multiplerootsfor the likelihood equations.[9]Whether the identified rootθ^{\displaystyle \,{\widehat {\theta \,}}\,}of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-calledHessian matrix H(θ^)=[∂2ℓ∂θ12|θ=θ^∂2ℓ∂θ1∂θ2|θ=θ^…∂2ℓ∂θ1∂θk|θ=θ^∂2ℓ∂θ2∂θ1|θ=θ^∂2ℓ∂θ22|θ=θ^…∂2ℓ∂θ2∂θk|θ=θ^⋮⋮⋱⋮∂2ℓ∂θk∂θ1|θ=θ^∂2ℓ∂θk∂θ2|θ=θ^…∂2ℓ∂θk2|θ=θ^],{\displaystyle \mathbf {H} \left({\widehat {\theta \,}}\right)={\begin{bmatrix}\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\vdots &\vdots &\ddots &\vdots \\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}\end{bmatrix}}~,} isnegative semi-definiteatθ^{\displaystyle {\widehat {\theta \,}}}, as this indicates localconcavity. 
Conveniently, most commonprobability distributions– in particular theexponential family– arelogarithmically concave.[10][11] While the domain of the likelihood function—theparameter space—is generally a finite-dimensional subset ofEuclidean space, additionalrestrictionssometimes need to be incorporated into the estimation process. The parameter space can be expressed asΘ={θ:θ∈Rk,h(θ)=0},{\displaystyle \Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,} whereh(θ)=[h1(θ),h2(θ),…,hr(θ)]{\displaystyle \;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;}is avector-valued functionmappingRk{\displaystyle \,\mathbb {R} ^{k}\,}intoRr.{\displaystyle \;\mathbb {R} ^{r}~.}Estimating the true parameterθ{\displaystyle \theta }belonging toΘ{\displaystyle \Theta }then, as a practical matter, means to find the maximum of the likelihood function subject to theconstrainth(θ)=0.{\displaystyle ~h(\theta )=0~.} Theoretically, the most natural approach to thisconstrained optimizationproblem is the method of substitution, that is "filling out" the restrictionsh1,h2,…,hr{\displaystyle \;h_{1},h_{2},\ldots ,h_{r}\;}to a seth1,h2,…,hr,hr+1,…,hk{\displaystyle \;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;}in such a way thath∗=[h1,h2,…,hk]{\displaystyle \;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;}is aone-to-one functionfromRk{\displaystyle \mathbb {R} ^{k}}to itself, and reparameterize the likelihood function by settingϕi=hi(θ1,θ2,…,θk).{\displaystyle \;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.}[12]Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13]For instance, in amultivariate normal distributionthecovariance matrixΣ{\displaystyle \,\Sigma \,}must bepositive-definite; this restriction can be imposed by replacingΣ=ΓTΓ,{\displaystyle \;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,}whereΓ{\displaystyle \Gamma }is a realupper 
triangular matrixandΓT{\displaystyle \Gamma ^{\mathsf {T}}}is itstranspose.[14] In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to therestricted likelihood equations∂ℓ∂θ−∂h(θ)T∂θλ=0{\displaystyle {\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0}andh(θ)=0,{\displaystyle h(\theta )=0\;,} whereλ=[λ1,λ2,…,λr]T{\displaystyle ~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~}is a column-vector ofLagrange multipliersand∂h(θ)T∂θ{\displaystyle \;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;}is thek × rJacobian matrixof partial derivatives.[12]Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15]This in turn allows for a statistical test of the "validity" of the constraint, known as theLagrange multiplier test. Nonparametric maximum likelihood estimation can be performed using theempirical likelihood. A maximum likelihood estimator is anextremum estimatorobtained by maximizing, as a function ofθ, theobjective functionℓ^(θ;x){\displaystyle {\widehat {\ell \,}}(\theta \,;x)}. If the data areindependent and identically distributed, then we haveℓ^(θ;x)=∑i=1nln⁡f(xi∣θ),{\displaystyle {\widehat {\ell \,}}(\theta \,;x)=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}this being the sample analogue of the expected log-likelihoodℓ(θ)=E⁡[ln⁡f(xi∣θ)]{\displaystyle \ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]}, where this expectation is taken with respect to the true density. 
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[16]However, like other estimation methods, maximum likelihood estimation possesses a number of attractivelimiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties: Under the conditions outlined below, the maximum likelihood estimator isconsistent. The consistency means that if the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}and we have a sufficiently large number of observationsn, then it is possible to find the value ofθ0with arbitrary precision. In mathematical terms this means that asngoes to infinity the estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges in probabilityto its true value:θ^mle→pθ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.} Under slightly stronger conditions, the estimator convergesalmost surely(orstrongly):θ^mle→a.s.θ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.} In practical applications, data is never generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}. Rather,f(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics thatall models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have. To establish consistency, the following conditions are sufficient.[17] θ≠θ0⇔f(⋅∣θ)≠f(⋅∣θ0).{\displaystyle \theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).}In other words, different parameter valuesθcorrespond to different distributions within the model. 
If this condition did not hold, there would be some valueθ1such thatθ0andθ1generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have beenobservationally equivalent. The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach the maximum value arbitrarily close at some other point (as demonstrated for example in the picture on the right). Compactness is only a sufficient condition and not a necessary condition. Compactness can be replaced by some other conditions, such as: P⁡[ln⁡f(x∣θ)∈C0(Θ)]=1.{\displaystyle \operatorname {\mathbb {P} } {\Bigl [}\;\ln f(x\mid \theta )\;\in \;C^{0}(\Theta )\;{\Bigr ]}=1.} The dominance condition can be employed in the case ofi.i.d.observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequenceℓ^(θ∣x){\displaystyle {\widehat {\ell \,}}(\theta \mid x)}isstochastically equicontinuous. If one wants to demonstrate that the ML estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges toθ0almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:supθ∈Θ‖ℓ^(θ∣x)−ℓ(θ)‖→a.s.0.{\displaystyle \sup _{\theta \in \Theta }\left\|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right\|\ \xrightarrow {\text{a.s.}} \ 0.} Additionally, if (as assumed above) the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}, then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. Specifically,[18]n(θ^mle−θ0)→dN(0,I−1){\displaystyle {\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)}whereIis theFisher information matrix. 
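Consistency can be illustrated with a small seeded simulation. This is a hedged sketch: the Bernoulli model, the true parameter, and the sample sizes are illustrative choices, not part of the article.

```python
import random

# A sketch of consistency (illustrative, seeded for reproducibility): the MLE of
# a Bernoulli parameter is the sample mean, and it tightens around the true
# value as the sample size n grows.
random.seed(0)
p_true = 0.3

def bernoulli_mle(n):
    draws = [1 if random.random() < p_true else 0 for _ in range(n)]
    return sum(draws) / n

p_hat_small, p_hat_large = bernoulli_mle(100), bernoulli_mle(100_000)
# With n = 100_000 the estimate is typically within a few thousandths of p_true.
assert abs(p_hat_large - p_true) < 0.02
```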
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, ifθ^{\displaystyle {\widehat {\theta \,}}}is the MLE forθ{\displaystyle \theta }, and ifg(θ){\displaystyle g(\theta )}is any transformation ofθ{\displaystyle \theta }, then the MLE forα=g(θ){\displaystyle \alpha =g(\theta )}is by definition[19] α^=g(θ^).{\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,} It maximizes the so-calledprofile likelihood: L¯(α)=supθ:α=g(θ)L(θ).{\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,} The MLE is also equivariant with respect to certain transformations of the data. Ify=g(x){\displaystyle y=g(x)}whereg{\displaystyle g}is one to one and does not depend on the parameters to be estimated, then the density functions satisfy fY(y)=fX(g−1(y))|(g−1(y))′|{\displaystyle f_{Y}(y)=f_{X}(g^{-1}(y))\,|(g^{-1}(y))^{\prime }|} and hence the likelihood functions forX{\displaystyle X}andY{\displaystyle Y}differ only by a factor that does not depend on the model parameters. For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case ifX∼N(0,1){\displaystyle X\sim {\mathcal {N}}(0,1)}, thenY=g(X)=eX{\displaystyle Y=g(X)=e^{X}}follows alog-normal distribution. The density of Y follows withfX{\displaystyle f_{X}}standardNormalandg−1(y)=log⁡(y){\displaystyle g^{-1}(y)=\log(y)},|(g−1(y))′|=1y{\displaystyle |(g^{-1}(y))^{\prime }|={\frac {1}{y}}}fory>0{\displaystyle y>0}. 
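The equivariance property can be checked directly on a toy Bernoulli sample. A sketch with illustrative data; the odds transformation g(p) = p/(1 − p) is chosen only as an example of a one-to-one map.

```python
# A sketch of functional equivariance (illustrative data): if p_hat is the MLE
# of a Bernoulli success probability p, then the MLE of any transformation
# g(p) -- here the odds p / (1 - p) -- is simply g(p_hat).
data = [1, 0, 1, 1, 0, 1, 1, 0]          # 5 successes in 8 trials
p_hat = sum(data) / len(data)            # MLE of p is the sample mean
odds_hat = p_hat / (1 - p_hat)           # MLE of the odds, by equivariance

assert p_hat == 5 / 8
assert abs(odds_hat - (5 / 8) / (3 / 8)) < 1e-12
```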
As assumed above, if the data were generated byf(⋅;θ0),{\displaystyle ~f(\cdot \,;\theta _{0})~,}then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. It is√n-consistent and asymptotically efficient, meaning that it reaches theCramér–Rao bound. Specifically,[18] n(θ^mle−θ0)→dN(0,I−1),{\displaystyle {\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,}whereI{\displaystyle ~{\mathcal {I}}~}is theFisher information matrix:Ijk=E[−∂2ln⁡fθ0(Xt)∂θj∂θk].{\displaystyle {\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.} In particular, it means that thebiasof the maximum likelihood estimator is equal to zero up to the order⁠1/√n⁠. However, when we consider the higher-order terms in theexpansionof the distribution of this estimator, it turns out thatθmlehas bias of order1⁄n. 
This bias is equal to (componentwise)[20] bh≡E⁡[(θ^mle−θ0)h]=1n∑i,j,k=1mIhiIjk(12Kijk+Jj,ik){\displaystyle b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)} whereIjk{\displaystyle {\mathcal {I}}^{jk}}(with superscripts) denotes the (j,k)-th component of theinverseFisher information matrixI−1{\displaystyle {\mathcal {I}}^{-1}}, and 12Kijk+Jj,ik=E[12∂3ln⁡fθ0(Xt)∂θi∂θj∂θk+∂ln⁡fθ0(Xt)∂θj∂2ln⁡fθ0(Xt)∂θi∂θk].{\displaystyle {\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.} Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, andcorrectfor that bias by subtracting it:θ^mle∗=θ^mle−b^.{\displaystyle {\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.}This estimator is unbiased up to the terms of order⁠1/n⁠, and is called thebias-corrected maximum likelihood estimator. This bias-corrected estimator issecond-order efficient(at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order⁠1/n2⁠. It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator isnotthird-order efficient.[21] A maximum likelihood estimator coincides with themost probableBayesian estimatorgiven auniformprior distributionon theparameters. 
Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem: P⁡(θ∣x1,x2,…,xn)=f(x1,x2,…,xn∣θ)P⁡(θ)P⁡(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}} where P⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )} is the prior distribution for the parameter θ and where P⁡(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})} is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing f(x1,x2,…,xn∣θ)P⁡(θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )} with respect to θ. If we further assume that the prior P⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )} is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function f(x1,x2,…,xn∣θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution P⁡(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}. In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation. Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution.[22] Thus, the Bayes decision rule is stated as "decide w1 if P⁡(w1∣x)>P⁡(w2∣x){\displaystyle \operatorname {\mathbb {P} } (w_{1}\mid x)>\operatorname {\mathbb {P} } (w_{2}\mid x)}; otherwise decide w2", where w1,w2{\displaystyle \;w_{1}\,,w_{2}\;} are predictions of different classes.
From the perspective of minimizing error, the rule can also be stated as w=argminw∫−∞∞P⁡(error∣x)P⁡(x)d⁡x{\displaystyle w={\underset {w}{\operatorname {arg\;min} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~} where P⁡(error∣x)=P⁡(w1∣x){\displaystyle \operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~} if we decide w2{\displaystyle \;w_{2}\;} and P⁡(error∣x)=P⁡(w2∣x){\displaystyle \;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;} if we decide w1.{\displaystyle \;w_{1}\;.} By applying Bayes' theorem P⁡(wi∣x)=P⁡(x∣wi)P⁡(wi)P⁡(x){\displaystyle \operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}}, and if we further assume the zero-or-one loss function, which assigns the same loss to all errors, the Bayes decision rule can be reformulated as: hBayes=argmaxw[P⁡(x∣w)P⁡(w)],{\displaystyle h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,} where hBayes{\displaystyle h_{\text{Bayes}}} is the prediction and P⁡(w){\displaystyle \;\operatorname {\mathbb {P} } (w)\;} is the prior probability.
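The claim that a uniform prior makes the Bayesian (MAP) choice coincide with the maximum likelihood choice can be checked on a toy discrete parameter space. A sketch: the three candidate values and the counts below are illustrative assumptions.

```python
# A sketch (illustrative counts and candidate values): with a uniform prior,
# maximizing P(x | w) P(w) is the same as maximizing the likelihood alone, so
# the MAP decision coincides with the maximum likelihood decision.
thetas = [1 / 3, 1 / 2, 2 / 3]
prior = {t: 1 / 3 for t in thetas}       # uniform prior over the candidates
heads, tails = 7, 3

def likelihood(p):
    # binomial coefficient omitted: it is constant in p and cancels in the argmax
    return p ** heads * (1 - p) ** tails

map_estimate = max(thetas, key=lambda t: likelihood(t) * prior[t])
mle = max(thetas, key=likelihood)
assert map_estimate == mle == 2 / 3
```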
Findingθ^{\displaystyle {\hat {\theta }}}that maximizes the likelihood is asymptotically equivalent to finding theθ^{\displaystyle {\hat {\theta }}}that defines a probability distribution (Qθ^{\displaystyle Q_{\hat {\theta }}}) that has a minimal distance, in terms ofKullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated byPθ0{\displaystyle P_{\theta _{0}}}).[23]In an ideal world, P and Q are the same (and the only thing unknown isθ{\displaystyle \theta }that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends onθ^{\displaystyle {\hat {\theta }}}) to the real distributionPθ0{\displaystyle P_{\theta _{0}}}.[24] For simplicity of notation, let's assume that P=Q. Let there beni.i.ddata samplesy=(y1,y2,…,yn){\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})}from some probabilityy∼Pθ0{\displaystyle y\sim P_{\theta _{0}}}, that we try to estimate by findingθ^{\displaystyle {\hat {\theta }}}that will maximize the likelihood usingPθ{\displaystyle P_{\theta }}, then:θ^=argmaxθLPθ(y)=argmaxθPθ(y)=argmaxθP(y∣θ)=argmaxθ∏i=1nP(yi∣θ)=argmaxθ∑i=1nlog⁡P(yi∣θ)=argmaxθ(∑i=1nlog⁡P(yi∣θ)−∑i=1nlog⁡P(yi∣θ0))=argmaxθ∑i=1n(log⁡P(yi∣θ)−log⁡P(yi∣θ0))=argmaxθ∑i=1nlog⁡P(yi∣θ)P(yi∣θ0)=argminθ∑i=1nlog⁡P(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nlog⁡P(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nhθ(yi)⟶n→∞argminθE[hθ(y)]=argminθ∫Pθ0(y)hθ(y)dy=argminθ∫Pθ0(y)log⁡P(y∣θ0)P(y∣θ)dy=argminθDKL(Pθ0∥Pθ){\displaystyle {\begin{aligned}{\hat {\theta }}&={\underset {\theta }{\operatorname {arg\,max} }}\,L_{P_{\theta }}(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P_{\theta }(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P(\mathbf {y} \mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\prod _{i=1}^{n}P(y_{i}\mid \theta )={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log 
P(y_{i}\mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\left(\sum _{i=1}^{n}\log P(y_{i}\mid \theta )-\sum _{i=1}^{n}\log P(y_{i}\mid \theta _{0})\right)={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\left(\log P(y_{i}\mid \theta )-\log P(y_{i}\mid \theta _{0})\right)\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta )}{P(y_{i}\mid \theta _{0})}}={\underset {\theta }{\operatorname {arg\,min} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}\\&={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}h_{\theta }(y_{i})\quad {\underset {n\to \infty }{\longrightarrow }}\quad {\underset {\theta }{\operatorname {arg\,min} }}\,E[h_{\theta }(y)]\\&={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)h_{\theta }(y)dy={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)\log {\frac {P(y\mid \theta _{0})}{P(y\mid \theta )}}dy\\&={\underset {\theta }{\operatorname {arg\,min} }}\,D_{\text{KL}}(P_{\theta _{0}}\parallel P_{\theta })\end{aligned}}} Wherehθ(x)=log⁡P(x∣θ0)P(x∣θ){\displaystyle h_{\theta }(x)=\log {\frac {P(x\mid \theta _{0})}{P(x\mid \theta )}}}. Usinghhelps see how we are using thelaw of large numbersto move from the average ofh(x) to theexpectancyof it using thelaw of the unconscious statistician. The first several transitions have to do with laws oflogarithmand that findingθ^{\displaystyle {\hat {\theta }}}that maximizes some function will also be the one that maximizes some monotonic transformation of that function (i.e.: adding/multiplying by a constant). 
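The endpoint of the derivation, that the KL divergence D(P_θ0 ∥ P_θ) is minimized at θ = θ0, can be checked directly for a Bernoulli model. A sketch; the grid of candidate parameters is an illustrative assumption.

```python
import math

# A sketch of the limit above (illustrative grid): for a Bernoulli model the
# KL divergence D(P_{theta0} || P_theta) is non-negative and vanishes only at
# theta = theta0, so its minimizer over the grid is the true parameter.
def bernoulli_kl(p0, p):
    return p0 * math.log(p0 / p) + (1 - p0) * math.log((1 - p0) / (1 - p))

p0 = 0.3
grid = [0.1, 0.2, 0.3, 0.4, 0.5]
best = min(grid, key=lambda p: bernoulli_kl(p0, p))

assert best == p0
assert bernoulli_kl(p0, p0) == 0.0
assert all(bernoulli_kl(p0, p) > 0 for p in grid if p != p0)
```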
Sincecross entropyis justShannon's entropyplus KL divergence, and since the entropy ofPθ0{\displaystyle P_{\theta _{0}}}is constant, then the MLE is also asymptotically minimizing cross entropy.[25] Consider a case wherentickets numbered from 1 tonare placed in a box and one is selected at random (seeuniform distribution); thus, the sample size is 1. Ifnis unknown, then the maximum likelihood estimatorn^{\displaystyle {\widehat {n}}}ofnis the numbermon the drawn ticket. (The likelihood is 0 forn<m,1⁄nforn≥m, and this is greatest whenn=m. Note that the maximum likelihood estimate ofnoccurs at the lower extreme of possible values {m,m+ 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) Theexpected valueof the numbermon the drawn ticket, and therefore the expected value ofn^{\displaystyle {\widehat {n}}}, is (n+ 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator fornwill systematically underestimatenby (n− 1)/2. Suppose one wishes to determine just how biased anunfair coinis. Call the probability of tossing a 'head'p. The goal then becomes to determinep. Suppose the coin is tossed 80 times: i.e. the sample might be something likex1= H,x2= T, ...,x80= T, and the count of the number ofheads"H" is observed. The probability of tossingtailsis 1 −p(so herepisθabove). Suppose the outcome is 49 heads and 31tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probabilityp=1⁄3, one which gives heads with probabilityp=1⁄2and another which gives heads with probabilityp=2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. 
By using theprobability mass functionof thebinomial distributionwith sample size equal to 80, number successes equal to 49 but for different values ofp(the "probability of success"), the likelihood function (defined below) takes one of three values: P⁡[H=49∣p=13]=(8049)(13)49(1−13)31≈0.000,P⁡[H=49∣p=12]=(8049)(12)49(1−12)31≈0.012,P⁡[H=49∣p=23]=(8049)(23)49(1−23)31≈0.054.{\displaystyle {\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}} The likelihood is maximized whenp=2⁄3, and so this is themaximum likelihood estimateforp. Now suppose that there was only one coin but itspcould have been any value0 ≤p≤ 1 .The likelihood function to be maximised isL(p)=fD(H=49∣p)=(8049)p49(1−p)31,{\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,} and the maximisation is over all possible values0 ≤p≤ 1 . One way to maximize this function is bydifferentiatingwith respect topand setting to zero: 0=∂∂p((8049)p49(1−p)31),0=49p48(1−p)31−31p49(1−p)30=p48(1−p)30[49(1−p)−31p]=p48(1−p)30[49−80p].{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}} This is a product of three terms. The first term is 0 whenp= 0. The second is 0 whenp= 1. The third is zero whenp=49⁄80. 
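The worked example can be checked numerically. A sketch: it evaluates the binomial likelihood of 49 heads in 80 tosses at the three candidate coins and at a few other values of p, confirming the orderings described above.

```python
from math import comb

# A numerical check of the worked example (sketch): likelihoods of the three
# candidate coins for 49 heads in 80 tosses, and the unrestricted maximizer.
def likelihood(p, heads=49, tails=31):
    return comb(heads + tails, heads) * p ** heads * (1 - p) ** tails

l_third, l_half, l_two_thirds = likelihood(1 / 3), likelihood(1 / 2), likelihood(2 / 3)
assert l_two_thirds > l_half > l_third   # the p = 2/3 coin is the most likely

p_hat = 49 / 80                          # root of the derivative, as derived above
for p in (0.5, 0.55, 0.6, 0.65, 0.7):
    assert likelihood(p_hat) >= likelihood(p)
```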
The solution that maximizes the likelihood is clearlyp=49⁄80(sincep= 0 andp= 1 result in a likelihood of 0). Thus themaximum likelihood estimatorforpis49⁄80. This result is easily generalized by substituting a letter such assin the place of 49 to represent the observed number of 'successes' of ourBernoulli trials, and a letter such asnin the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yieldss⁄nwhich is the maximum likelihood estimator for any sequence ofnBernoulli trials resulting ins'successes'. For thenormal distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}which hasprobability density function f(x∣μ,σ2)=12πσ2exp⁡(−(x−μ)22σ2),{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),} the correspondingprobability density functionfor a sample ofnindependent identically distributednormal random variables (the likelihood) is f(x1,…,xn∣μ,σ2)=∏i=1nf(xi∣μ,σ2)=(12πσ2)n/2exp⁡(−∑i=1n(xi−μ)22σ2).{\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).} This family of distributions has two parameters:θ= (μ,σ); so we maximize the likelihood,L(μ,σ2)=f(x1,…,xn∣μ,σ2){\displaystyle {\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})}, over both parameters simultaneously, or if possible, individually. Since thelogarithmfunction itself is acontinuousstrictly increasingfunction over therangeof the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). 
The log-likelihood can be written as follows: log⁡(L(μ,σ2))=−n2log⁡(2πσ2)−12σ2∑i=1n(xi−μ)2{\displaystyle \log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}} (Note: the log-likelihood is closely related toinformation entropyandFisher information.) We now compute the derivatives of this log-likelihood as follows. 0=∂∂μlog⁡(L(μ,σ2))=0−−2n(x¯−μ)2σ2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}}wherex¯{\displaystyle {\bar {x}}}is thesample mean. This is solved by μ^=x¯=∑i=1nxin.{\displaystyle {\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.} This is indeed the maximum of the function, since it is the only turning point inμand the second derivative is strictly less than zero. Itsexpected valueis equal to the parameterμof the given distribution, E⁡[μ^]=μ,{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,} which means that the maximum likelihood estimatorμ^{\displaystyle {\widehat {\mu }}}is unbiased. 
Similarly we differentiate the log-likelihood with respect toσand equate to zero: 0=∂∂σlog⁡(L(μ,σ2))=−nσ+1σ3∑i=1n(xi−μ)2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}} which is solved by σ^2=1n∑i=1n(xi−μ)2.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.} Inserting the estimateμ=μ^{\displaystyle \mu ={\widehat {\mu }}}we obtain σ^2=1n∑i=1n(xi−x¯)2=1n∑i=1nxi2−1n2∑i=1n∑j=1nxixj.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.} To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error)δi≡μ−xi{\displaystyle \delta _{i}\equiv \mu -x_{i}}. Expressing the estimate in these variables yields σ^2=1n∑i=1n(μ−δi)2−1n2∑i=1n∑j=1n(μ−δi)(μ−δj).{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).} Simplifying the expression above, utilizing the facts thatE⁡[δi]=0{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0}andE⁡[δi2]=σ2{\displaystyle \operatorname {E} {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}}, allows us to obtain E⁡[σ^2]=n−1nσ2.{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.} This means that the estimatorσ^2{\displaystyle {\widehat {\sigma }}^{2}}is biased forσ2{\displaystyle \sigma ^{2}}. It can also be shown thatσ^{\displaystyle {\widehat {\sigma }}}is biased forσ{\displaystyle \sigma }, but that bothσ^2{\displaystyle {\widehat {\sigma }}^{2}}andσ^{\displaystyle {\widehat {\sigma }}}are consistent. 
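The factor (n − 1)/n relating the MLE of the variance to the unbiased sample variance can be verified on fixed data. A sketch; the data values are illustrative.

```python
# A sketch with illustrative data: the MLE of the normal variance divides by n,
# so it equals (n - 1)/n times the unbiased (divide by n - 1) estimator --
# matching the expectation E[sigma_hat^2] = (n - 1)/n * sigma^2 derived above.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n                      # mu_hat, the MLE of the mean
ss = sum((x - mean) ** 2 for x in data)   # sum of squared deviations
var_mle = ss / n                          # sigma_hat^2, the MLE of the variance
var_unbiased = ss / (n - 1)               # the usual sample variance

assert var_mle == 4.0
assert abs(var_mle - (n - 1) / n * var_unbiased) < 1e-12
```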
Formally we say that themaximum likelihood estimatorforθ=(μ,σ2){\displaystyle \theta =(\mu ,\sigma ^{2})}is θ^=(μ^,σ^2).{\displaystyle {\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).} In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously. The normal log-likelihood at its maximum takes a particularly simple form: log⁡(L(μ^,σ^))=−n2(log⁡(2πσ^2)+1){\displaystyle \log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}} This maximum log-likelihood can be shown to be the same for more generalleast squares, even fornon-linear least squares. This is often used in determining likelihood-based approximateconfidence intervalsandconfidence regions, which are generally more accurate than those using the asymptotic normality discussed above. It may be the case that variables are correlated, or more generally, not independent. Two random variablesy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}are independent only if their joint probability density function is the product of the individual probability density functions, i.e. f(y1,y2)=f(y1)f(y2){\displaystyle f(y_{1},y_{2})=f(y_{1})f(y_{2})\,} Suppose one constructs an order-nGaussian vector out of random variables(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}, where each variable has means given by(μ1,…,μn){\displaystyle (\mu _{1},\ldots ,\mu _{n})}. Furthermore, let thecovariance matrixbe denoted byΣ{\displaystyle {\mathit {\Sigma }}}. 
The joint probability density function of thesenrandom variables then follows amultivariate normal distributiongiven by: f(y1,…,yn)=1(2π)n/2det(Σ)exp⁡(−12[y1−μ1,…,yn−μn]Σ−1[y1−μ1,…,yn−μn]T){\displaystyle f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)} In thebivariatecase, the joint probability density function is given by: f(y1,y2)=12πσ1σ21−ρ2exp⁡[−12(1−ρ2)((y1−μ1)2σ12−2ρ(y1−μ1)(y2−μ2)σ1σ2+(y2−μ2)2σ22)]{\displaystyle f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]} In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density. X1,X2,…,Xm{\displaystyle X_{1},\ X_{2},\ldots ,\ X_{m}}are counts in cells / boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to ben{\displaystyle n}:x1+x2+⋯+xm=n{\displaystyle x_{1}+x_{2}+\cdots +x_{m}=n}. The probability of each box ispi{\displaystyle p_{i}}, with a constraint:p1+p2+⋯+pm=1{\displaystyle p_{1}+p_{2}+\cdots +p_{m}=1}. 
This is a case in which theXi{\displaystyle X_{i}}sare not independent, the joint probability of a vectorx1,x2,…,xm{\displaystyle x_{1},\ x_{2},\ldots ,x_{m}}is called the multinomial and has the form: f(x1,x2,…,xm∣p1,p2,…,pm)=n!∏xi!∏pixi=(nx1,x2,…,xm)p1x1p2x2⋯pmxm{\displaystyle f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}} Each box taken separately against all the other boxes is a binomial and this is an extension thereof. The log-likelihood of this is: ℓ(p1,p2,…,pm)=log⁡n!−∑i=1mlog⁡xi!+∑i=1mxilog⁡pi{\displaystyle \ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}} The constraint has to be taken into account and use the Lagrange multipliers: L(p1,p2,…,pm,λ)=ℓ(p1,p2,…,pm)+λ(1−∑i=1mpi){\displaystyle L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)} By posing all the derivatives to be 0, the most natural estimate is derived p^i=xin{\displaystyle {\hat {p}}_{i}={\frac {x_{i}}{n}}} Maximizing log likelihood, with and without constraints, can be an unsolvable problem in closed form, then we have to use iterative procedures. Except for special cases, the likelihood equations∂ℓ(θ;y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0} cannot be solved explicitly for an estimatorθ^=θ^(y){\displaystyle {\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )}. Instead, they need to be solvediteratively: starting from an initial guess ofθ{\displaystyle \theta }(sayθ^1{\displaystyle {\widehat {\theta }}_{1}}), one seeks to obtain a convergent sequence{θ^r}{\displaystyle \left\{{\widehat {\theta }}_{r}\right\}}. 
Many methods for this kind of optimization problem are available,[26][27] but the most commonly used ones are algorithms based on an updating formula of the form θ^r+1=θ^r+ηrdr(θ^){\displaystyle {\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)} where the vector dr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)} indicates the descent direction of the rth "step," and the scalar ηr{\displaystyle \eta _{r}} captures the "step length,"[28][29] also known as the learning rate.[30] (Note: here it is a maximization problem, so the sign before the gradient is flipped.) Gradient ascent uses ηr∈R+{\displaystyle \eta _{r}\in \mathbb {R} ^{+}} that is small enough for convergence and dr(θ^)=∇ℓ(θ^r;y){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)} This method requires calculating the gradient at the rth iteration, but not the inverse of the second-order derivative, i.e., the Hessian matrix; it is therefore computationally faster per step than the Newton–Raphson method. The Newton–Raphson method uses ηr=1{\displaystyle \eta _{r}=1} and dr(θ^)=−Hr−1(θ^)sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)} where sr(θ^){\displaystyle \mathbf {s} _{r}({\widehat {\theta }})} is the score and Hr−1(θ^){\displaystyle \mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)} is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration.[31][32] But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed.
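The Newton–Raphson update can be sketched on the earlier coin example, whose log-likelihood ℓ(p) = 49 ln p + 31 ln(1 − p) has a closed-form score and Hessian (both scalars in this one-parameter problem). The starting value is an illustrative choice.

```python
# A sketch of the Newton-Raphson update theta_{r+1} = theta_r - H^{-1} * score,
# applied to the coin example: l(p) = 49 ln p + 31 ln(1 - p).
def score(p):                 # first derivative of the log-likelihood
    return 49 / p - 31 / (1 - p)

def hessian(p):               # second derivative (negative: l is concave)
    return -49 / p ** 2 - 31 / (1 - p) ** 2

p = 0.5                       # initial guess
for _ in range(20):
    p -= score(p) / hessian(p)

assert abs(p - 49 / 80) < 1e-10   # converges to the closed-form MLE
```

For this concave one-dimensional log-likelihood, Newton–Raphson reaches the maximizer almost immediately; in higher dimensions each step requires solving a linear system with the Hessian, which motivates the quasi-Newton alternatives discussed next.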
The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that dr(θ^)=−[1n∑t=1n∂ℓ(θ;y)∂θ(∂ℓ(θ;y)∂θ)T]−1sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)} Other quasi-Newton methods use more elaborate secant updates to approximate the Hessian matrix. The Davidon–Fletcher–Powell (DFP) formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative: Hk+1=(I−γkykskT)Hk(I−γkskykT)+γkykykT,{\displaystyle \mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},} where yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),} γk=1ykTsk,{\displaystyle \gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},} sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.} The BFGS method also gives a solution that is symmetric and positive-definite: Bk+1=Bk+ykykTykTsk−BkskskTBkTskTBksk,{\displaystyle B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,} where yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),} sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.} The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum.
However, BFGS can have acceptable performance even for non-smooth optimization instances. Another popular method is to replace the Hessian with the Fisher information matrix, I(θ)=E⁡[Hr(θ^)]{\displaystyle {\mathcal {I}}(\theta )=\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]}, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models. Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum,[33] but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.[34] Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth.[35][36] It was Ronald Fisher, however, who between 1912 and 1922 single-handedly created the modern version of the method.[37][38] Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem.[39] The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically χ2-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher.[40] Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[41] Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[42][43][44][45][46][47][48][49]
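The validity check described above (verifying that the Hessian at the candidate solution is negative definite) can be sketched for the normal model, where the Hessian of the log-likelihood has a closed form; the numbers and model here are illustrative assumptions, not from the source.

```python
import numpy as np

# Hypothetical sketch: verify a candidate MLE by checking that the Hessian
# of the log-likelihood there is negative definite.
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, size=500)
n = len(y)

mu_hat = y.mean()
sig2_hat = y.var()        # MLE of the variance (divides by n)

# Analytic Hessian of sum_i log N(y_i | mu, sigma^2), evaluated at the MLE:
#   d2l/dmu2            = -n / sigma^2
#   d2l/dmu d(sigma^2)  =  0   (the cross term vanishes at the MLE)
#   d2l/d(sigma^2)^2    = -n / (2 sigma^4)   at the MLE
H = np.array([
    [-n / sig2_hat, 0.0],
    [0.0, -n / (2.0 * sig2_hat**2)],
])
eigvals = np.linalg.eigvalsh(H)
is_negative_definite = bool(np.all(eigvals < 0))
```

All eigenvalues being strictly negative confirms the stationary point is a local maximum rather than a saddle point; the ratio of extreme eigenvalues would additionally indicate conditioning.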
https://en.wikipedia.org/wiki/Maximum-likelihood_estimation
In information theory, perplexity is a measure of uncertainty in the value of a sample from a discrete probability distribution. The larger the perplexity, the less likely it is that an observer can guess the value which will be drawn from the distribution. Perplexity was originally introduced in 1977 in the context of speech recognition by Frederick Jelinek, Robert Leroy Mercer, Lalit R. Bahl, and James K. Baker.[1] The perplexity PP of a discrete probability distribution p is a concept widely used in information theory, machine learning, and statistical modeling. It is defined as PP(p)=2H(p)=2−∑xp(x)log2⁡p(x){\displaystyle {\mathit {PP}}(p)=2^{H(p)}=2^{-\sum _{x}p(x)\log _{2}p(x)}} where H(p) is the entropy (in bits) of the distribution, and x ranges over the events. The base of the logarithm need not be 2: the perplexity is independent of the base, provided that the entropy and the exponentiation use the same base. In some contexts, this measure is also referred to as the (order-1 true) diversity. Perplexity of a random variable X may be defined as the perplexity of the distribution over its possible values x. It can be thought of as a measure of uncertainty or "surprise" related to the outcomes. For a probability distribution p where exactly k outcomes each have a probability of 1/k and all other outcomes have a probability of zero, the perplexity of this distribution is simply k. This is because the distribution models a fair k-sided die, with each of the k outcomes being equally likely. In this context, the perplexity k indicates that there is as much uncertainty as there would be when rolling a fair k-sided die. Even if a random variable has more than k possible outcomes, the perplexity will still be k if the distribution is uniform over k outcomes and zero for the rest. Thus, a random variable with a perplexity of k can be described as being "k-ways perplexed," meaning it has the same level of uncertainty as a fair k-sided die. Perplexity is sometimes used as a measure of the difficulty of a prediction problem. It is, however, generally not a straightforward representation of the relevant probability.
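The definition PP(p) = 2^H(p) and the fair-die property can be sketched in a few lines (the helper name and example distributions are illustrative, not from the source):

```python
import numpy as np

# Hypothetical sketch: perplexity of a discrete distribution as 2**H(p),
# with the entropy H in bits.  A fair k-sided die has perplexity exactly k.
def perplexity(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                          # 0 * log 0 is taken as 0
    entropy_bits = -np.sum(nz * np.log2(nz))
    return 2.0 ** entropy_bits

fair_die = perplexity([1/6] * 6)           # a fair 6-sided die
padded = perplexity([0.25] * 4 + [0.0])    # still 4: zero outcomes don't count
```

The second call illustrates the remark above: adding outcomes with probability zero leaves the perplexity unchanged.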
For example, if you have two choices, one with probability 0.9, your chances of a correct guess using the optimal strategy are 90 percent. Yet, the perplexity is 2−0.9log2⁡0.9−0.1log2⁡0.1≈1.38{\displaystyle 2^{-0.9\log _{2}0.9-0.1\log _{2}0.1}\approx 1.38}. The inverse of the perplexity, 1/1.38 = 0.72, does not correspond to the 0.9 probability. The perplexity is the exponentiation of the entropy, a more commonly encountered quantity. Entropy measures the expected or "average" number of bits required to encode the outcome of the random variable using an optimal variable-length code. It can also be regarded as the expected information gain from learning the outcome of the random variable, providing insight into the uncertainty and complexity of the underlying probability distribution. A model of an unknown probability distribution p may be proposed based on a training sample that was drawn from p. Given a proposed probability model q, one may evaluate q by asking how well it predicts a separate test sample x1, x2, ..., xN also drawn from p. The perplexity of the model q is defined as b−1N∑i=1Nlogb⁡q(xi){\displaystyle b^{-{\frac {1}{N}}\sum _{i=1}^{N}\log _{b}q(x_{i})}} where b is customarily 2. Better models q of the unknown distribution p will tend to assign higher probabilities q(xi) to the test events. Thus, they have lower perplexity because they are less surprised by the test sample. This is equivalent to saying that better models have higher likelihoods for the test data, which leads to a lower perplexity value. The exponent above may be regarded as the average number of bits needed to represent a test event xi if one uses an optimal code based on q. Low-perplexity models do a better job of compressing the test sample, requiring few bits per test element on average because q(xi) tends to be high. The exponent −1N∑i=1Nlogb⁡q(xi){\displaystyle -{\tfrac {1}{N}}\sum _{i=1}^{N}\log _{b}q(x_{i})} may also be interpreted as a cross-entropy: H(p~,q)=−∑xp~(x)logb⁡q(x){\displaystyle H({\tilde {p}},q)=-\sum _{x}{\tilde {p}}(x)\log _{b}q(x)} where p~{\displaystyle {\tilde {p}}} denotes the empirical distribution of the test sample (i.e., p~(x)=n/N{\displaystyle {\tilde {p}}(x)=n/N} if x appeared n times in the test sample of size N).
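Both calculations above (the two-outcome example and the perplexity of a model on a test sample) can be sketched directly; the helper names and sample numbers are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: the skewed two-outcome example, plus the perplexity
# of a model q evaluated on a held-out test sample.
def dist_perplexity(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return 2.0 ** (-np.sum(nz * np.log2(nz)))

pp_skewed = dist_perplexity([0.9, 0.1])   # ~1.38, whose inverse is not 0.9

def model_perplexity(q_of_x):
    # q_of_x: probabilities the model q assigned to each test event x_i;
    # PP = 2 ** ( -(1/N) * sum_i log2 q(x_i) )
    q = np.asarray(q_of_x, dtype=float)
    return 2.0 ** (-np.mean(np.log2(q)))

# A model assigning 1/4 to every test event is "4-ways perplexed":
pp_uniform_model = model_perplexity([0.25, 0.25, 0.25])
```

The mean of the per-event log-probabilities is exactly the cross-entropy exponent described above.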
By the definition of KL divergence, it is also equal to H(p~)+DKL(p~‖q){\displaystyle H({\tilde {p}})+D_{KL}({\tilde {p}}\|q)}, which is ≥H(p~){\displaystyle \geq H({\tilde {p}})}. Consequently, the perplexity is minimized when q=p~{\displaystyle q={\tilde {p}}}. In natural language processing (NLP), a corpus is a structured collection of texts or documents, and a language model is a probability distribution over entire texts or documents. Consequently, in NLP, the more commonly used measure is perplexity per token (word or, more frequently, sub-word), defined as: (∏i=1nq(si))−1/N{\displaystyle \left(\prod _{i=1}^{n}q(s_{i})\right)^{-1/N}} where s1,...,sn{\displaystyle s_{1},...,s_{n}} are the n{\displaystyle n} documents in the corpus and N{\displaystyle N} is the number of tokens in the corpus. This normalizes the perplexity by the length of the text, allowing for more meaningful comparisons between different texts or models. Suppose the average text xi in the corpus has a probability of 2−190{\displaystyle 2^{-190}} according to the language model. This would give a model perplexity of 2^190 per sentence. However, in NLP, it is more common to normalize by the length of a text. Thus, if the test sample has a length of 1,000 tokens, and could be coded using 7.95 bits per token, one could report a model perplexity of 2^7.95 = 247 per token. In other words, the model is as confused on test data as if it had to choose uniformly and independently among 247 possibilities for each token. There are two standard evaluation metrics for language models: perplexity or word error rate (WER).
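The per-token normalization can be sketched as follows; the corpus size and document probabilities are invented for the example, not taken from the source.

```python
import numpy as np

# Hypothetical sketch of perplexity per token: the corpus log-probability
# is normalized by the total token count N, not by the document count n.
doc_probs = np.array([2.0 ** -190, 2.0 ** -190])   # q(s_i) for each document
num_tokens = 2 * 100                               # assume 100 tokens per document

log2_corpus_prob = np.sum(np.log2(doc_probs))      # log2 of prod_i q(s_i)
ppl_per_token = 2.0 ** (-log2_corpus_prob / num_tokens)
# 2 ** (380 / 200) = 2 ** 1.9 possibilities per token
```

Working in log space, as here, avoids the underflow that the raw product of document probabilities would cause.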
The simpler of these measures, WER, is simply the percentage of erroneously recognized words E (deletions, insertions, substitutions) relative to the total number of words N in a speech recognition task, i.e. WER=(EN)×100%{\displaystyle \mathrm {WER} =\left({\frac {E}{N}}\right)\times 100\%} The second metric, perplexity (per token), is an information-theoretic measure that evaluates the similarity of a proposed model m to the original distribution p. It can be computed as the inverse of the (geometric) average probability of the test set T PPL(T)=1m(T)N=2−1Nlog2⁡(m(T)){\displaystyle PPL(T)={\sqrt[{N}]{\frac {1}{m(T)}}}=2^{-{\frac {1}{N}}\log _{2}{\big (}m(T){\big )}}} where N is the number of tokens in test set T. This equation can be seen as the exponentiated cross-entropy, where the cross-entropy H(p; m) is approximated as H(p;m)=−1Nlog2⁡(m(T)){\displaystyle H(p;m)=-{\frac {1}{N}}\log _{2}{\big (}m(T){\big )}} Since 2007, significant advancements in language modeling have emerged, particularly with the advent of deep learning techniques. Perplexity per token, a measure that quantifies the predictive power of a language model, has remained central to evaluating models such as the dominant transformer models like Google's BERT, OpenAI's GPT-4 and other large language models (LLMs). This measure was employed to compare different models on the same dataset and guide the optimization of hyperparameters, although it has been found sensitive to factors such as linguistic features and sentence length.[2] Despite its pivotal role in language model development, perplexity has shown limitations, particularly as an inadequate predictor of speech recognition performance, overfitting and generalization,[3][4] raising questions about the benefits of blindly optimizing perplexity alone.
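The two formulas above translate directly into code; the function names and the numbers plugged in are illustrative assumptions.

```python
import math

# Hypothetical sketch of the two standard metrics: word error rate and
# perplexity per token as exponentiated cross-entropy.
def wer(errors, total_words):
    # deletions + insertions + substitutions, over total reference words
    return errors / total_words * 100.0

def ppl_from_test_set_prob(m_of_T, num_tokens):
    # PPL(T) = (1 / m(T)) ** (1/N) = 2 ** ( -(1/N) * log2 m(T) )
    return 2.0 ** (-math.log2(m_of_T) / num_tokens)

wer_example = wer(errors=5, total_words=100)          # 5 percent
ppl_example = ppl_from_test_set_prob(2.0 ** -50, 10)  # 2 ** (50/10) = 32
```

Note that `wer` here counts errors against the reference length, so insert-heavy hypotheses can push it above 100 percent.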
The lowest perplexity that had been published on the Brown Corpus (1 million words of American English of varying topics and genres) as of 1992 is indeed about 247 per word/token, corresponding to a cross-entropy of log2 247 = 7.95 bits per word or 1.75 bits per letter[5] using a trigram model. While this figure represented the state of the art (SOTA) at the time, advancements in techniques such as deep learning have led to significant improvements in perplexity on other benchmarks, such as the One Billion Word Benchmark.[6] In the context of the Brown Corpus, simply guessing that the next word is "the" will achieve an accuracy of 7 percent, contrasting with the 1/247 = 0.4 percent that might be expected from a naive use of perplexity. This difference underscores the importance of the statistical model used and the nuanced nature of perplexity as a measure of predictiveness.[7] The guess is based on unigram statistics, not on the trigram statistics that yielded the perplexity of 247, and utilizing trigram statistics would further refine the prediction.
https://en.wikipedia.org/wiki/Perplexity
In statistics, multivariate adaptive regression splines (MARS) is a form of regression analysis introduced by Jerome H. Friedman in 1991.[1] It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models nonlinearities and interactions between variables. The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".[2][3] This section introduces MARS using a few examples. We start with a set of data: a matrix of input variables x, and a vector of the observed responses y, with a response for each row in x. For example, the data could be: Here there is only one independent variable, so the x matrix is just a single column. Given these measurements, we would like to build a model which predicts the expected y for a given x. A linear model for the above data is The hat on the y^{\displaystyle {\widehat {y}}} indicates that y^{\displaystyle {\widehat {y}}} is estimated from the data. The figure on the right shows a plot of this function: a line giving the predicted y^{\displaystyle {\widehat {y}}} versus x, with the original values of y shown as red dots. The data at the extremes of x indicates that the relationship between y and x may be non-linear (look at the red dots relative to the regression line at low and high values of x). We thus turn to MARS to automatically build a model taking into account non-linearities. MARS software constructs a model from the given x and y as follows. The figure on the right shows a plot of this function: the predicted y^{\displaystyle {\widehat {y}}} versus x, with the original values of y once again shown as red dots. The predicted response is now a better fit to the original y values. MARS has automatically produced a kink in the predicted y to take into account non-linearity. The kink is produced by hinge functions.
The hinge functions are the expressions starting with max{\displaystyle \max } (where max(a,b){\displaystyle \max(a,b)} is a{\displaystyle a} if a>b{\displaystyle a>b}, else b{\displaystyle b}). Hinge functions are described in more detail below. In this simple example, we can easily see from the plot that y has a non-linear relationship with x (and might perhaps guess that y varies with the square of x). However, in general there will be multiple independent variables, and the relationship between y and these variables will be unclear and not easily visible by plotting. We can use MARS to discover that non-linear relationship. An example MARS expression with multiple variables is This expression models air pollution (the ozone level) as a function of the temperature and a few other variables. Note that the last term in the formula (on the last line) incorporates an interaction between wind{\displaystyle \mathrm {wind} } and vis{\displaystyle \mathrm {vis} }. The figure on the right plots the predicted ozone{\displaystyle \mathrm {ozone} } as wind{\displaystyle \mathrm {wind} } and vis{\displaystyle \mathrm {vis} } vary, with the other variables fixed at their median values. The figure shows that wind does not affect the ozone level unless visibility is low. We see that MARS can build quite flexible regression surfaces by combining hinge functions. To obtain the above expression, the MARS model building procedure automatically selects which variables to use (some variables are important, others not), the positions of the kinks in the hinge functions, and how the hinge functions are combined. MARS builds models of the form f^(x)=∑i=1kciBi(x){\displaystyle {\hat {f}}(x)=\sum _{i=1}^{k}c_{i}B_{i}(x)} The model is a weighted sum of basis functions Bi(x){\displaystyle B_{i}(x)}. Each ci{\displaystyle c_{i}} is a constant coefficient. For example, each line in the formula for ozone above is one basis function multiplied by its coefficient. Each basis function Bi(x){\displaystyle B_{i}(x)} takes one of the following three forms: 1) a constant 1.
There is just one such term, the intercept. In the ozone formula above, the intercept term is 5.2. 2) a hinge function. A hinge function has the form max(0,x−constant){\displaystyle \max(0,x-{\text{constant}})} or max(0,constant−x){\displaystyle \max(0,{\text{constant}}-x)}. MARS automatically selects variables and values of those variables for knots of the hinge functions. Examples of such basis functions can be seen in the middle three lines of the ozone formula. 3) a product of two or more hinge functions. These basis functions can model interaction between two or more variables. An example is the last line of the ozone formula. A key part of MARS models are hinge functions taking the form max(0,x−c){\displaystyle \max(0,x-c)} or max(0,c−x){\displaystyle \max(0,c-x)} where c{\displaystyle c} is a constant, called the knot. The figure on the right shows a mirrored pair of hinge functions with a knot at 3.1. A hinge function is zero for part of its range, so can be used to partition the data into disjoint regions, each of which can be treated independently. Thus for example a mirrored pair of hinge functions in the expression creates the piecewise linear graph shown for the simple MARS model in the previous section. One might assume that only piecewise linear functions can be formed from hinge functions, but hinge functions can be multiplied together to form non-linear functions. Hinge functions are also called ramp, hockey stick, or rectifier functions. Instead of the max{\displaystyle \max } notation used in this article, hinge functions are often represented by [±(xi−c)]+{\displaystyle [\pm (x_{i}-c)]_{+}} where [⋅]+{\displaystyle [\cdot ]_{+}} means take the positive part. MARS builds a model in two phases: the forward and the backward pass. This two-stage approach is the same as that used by recursive partitioning trees. MARS starts with a model which consists of just the intercept term (which is the mean of the response values). MARS then repeatedly adds basis functions in pairs to the model.
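The mirrored hinge pair with a knot at 3.1 can be sketched directly (the helper names and sample points are assumptions for illustration):

```python
import numpy as np

# Hypothetical sketch: a mirrored pair of hinge functions with knot c = 3.1.
def hinge_pos(x, c):
    return np.maximum(0.0, x - c)   # max(0, x - c)

def hinge_neg(x, c):
    return np.maximum(0.0, c - x)   # max(0, c - x)

x = np.array([0.0, 2.0, 3.1, 5.0])
up = hinge_pos(x, 3.1)      # zero up to the knot, then rises linearly
down = hinge_neg(x, 3.1)    # falls linearly, then zero past the knot
# Multiplying hinges with different knots gives a non-linear basis function:
curved = hinge_pos(x, 0.5) * hinge_pos(x, 2.0)
```

Each hinge is zero on one side of its knot, which is what lets MARS treat the two regions of the data independently.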
At each step it finds the pair of basis functions that gives the maximum reduction in sum-of-squares residual error (it is a greedy algorithm). The two basis functions in the pair are identical except that a different side of a mirrored hinge function is used for each function. Each new basis function consists of a term already in the model (which could perhaps be the intercept term) multiplied by a new hinge function. A hinge function is defined by a variable and a knot, so to add a new basis function, MARS must search over all combinations of the following: 1) existing terms (called parent terms in this context) 2) all variables (to select one for the new basis function) 3) all values of each variable (for the knot of the new hinge function). To calculate the coefficient of each term, MARS applies a linear regression over the terms. This process of adding terms continues until the change in residual error is too small to continue or until the maximum number of terms is reached. The maximum number of terms is specified by the user before model building starts. The search at each step is usually done in a brute-force fashion, but a key aspect of MARS is that because of the nature of hinge functions, the search can be done quickly using a fast least-squares update technique. Brute-force search can be sped up by using a heuristic that reduces the number of parent terms considered at each step ("Fast MARS"[4]). The forward pass usually overfits the model. To build a model with better generalization ability, the backward pass prunes the model, deleting the least effective term at each step until it finds the best submodel. Model subsets are compared using the generalized cross-validation (GCV) criterion described below. The backward pass has an advantage over the forward pass: at any step it can choose any term to delete, whereas the forward pass at each step can only see the next pair of terms.
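One forward-pass step (the greedy knot search over a single variable) can be sketched as follows; the synthetic data and variable names are assumptions, and a real implementation would use fast least-squares updates rather than refitting from scratch at every candidate knot.

```python
import numpy as np

# Hypothetical sketch of one forward-pass step: try every data value as a
# knot for a mirrored hinge pair and keep the pair with the lowest RSS.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.maximum(0.0, x - 4.0) + rng.normal(0.0, 0.1, x.size)  # kink at 4

best_rss, knot = np.inf, None
for c in x[1:-1]:                                  # candidate knots at data values
    B = np.column_stack([np.ones_like(x),          # intercept (parent term)
                         np.maximum(0.0, x - c),   # max(0, x - c)
                         np.maximum(0.0, c - x)])  # max(0, c - x)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # refit coefficients
    resid = y - B @ coef
    rss = resid @ resid
    if rss < best_rss:
        best_rss, knot = rss, c
# The selected knot should land near the true kink at x = 4.
```

Because the data were generated with a kink at 4, the greedy search recovers a knot close to that value.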
The forward pass adds terms in pairs, but the backward pass typically discards one side of the pair and so terms are often not seen in pairs in the final model. A paired hinge can be seen in the equation for y^{\displaystyle {\widehat {y}}} in the first MARS example above; there are no complete pairs retained in the ozone example. The backward pass compares the performance of different models using generalized cross-validation (GCV), a minor variant on the Akaike information criterion that approximates the leave-one-out cross-validation score in the special case where errors are Gaussian, or where the squared error loss function is used. GCV was introduced by Craven and Wahba and extended by Friedman for MARS; lower values of GCV indicate better models. The formula for the GCV is GCV = RSS / (N · (1 − EffectiveNumberOfParameters / N)^2) where RSS is the residual sum-of-squares measured on the training data and N is the number of observations (the number of rows in the x matrix). The effective number of parameters is defined as EffectiveNumberOfParameters = NumberOfMarsTerms + penalty · (NumberOfMarsTerms − 1) / 2 where penalty is typically 2 (giving results equivalent to the Akaike information criterion) but can be increased by the user if they so desire. Note that (NumberOfMarsTerms − 1) / 2 is the number of hinge-function knots, so the formula penalizes the addition of knots. Thus the GCV formula adjusts (i.e. increases) the training RSS to penalize more complex models. We penalize flexibility because models that are too flexible will model the specific realization of noise in the data instead of just the systematic structure of the data. One constraint has already been mentioned: the user can specify the maximum number of terms in the forward pass. A further constraint can be placed on the forward pass by specifying a maximum allowable degree of interaction. Typically only one or two degrees of interaction are allowed, but higher degrees can be used when the data warrants it. The maximum degree of interaction in the first MARS example above is one (i.e. no interactions or an additive model); in the ozone example it is two.
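The GCV criterion above can be sketched as a small function; the RSS values and term counts below are invented for illustration.

```python
# Hypothetical sketch of the GCV criterion used in the backward pass:
# GCV = RSS / (N * (1 - C/N)**2), with C the effective number of parameters.
def gcv(rss, n_obs, n_terms, penalty=2.0):
    n_knots = (n_terms - 1) / 2.0            # knots for a model built in pairs
    c = n_terms + penalty * n_knots          # effective number of parameters
    return rss / (n_obs * (1.0 - c / n_obs) ** 2)

# A bigger model must cut RSS enough to pay for its extra complexity:
simple = gcv(rss=80.0, n_obs=100, n_terms=3)
larger = gcv(rss=78.0, n_obs=100, n_terms=11)   # small RSS gain, big penalty
```

Here the larger model's small RSS improvement does not justify its extra knots, so its GCV is worse and the backward pass would prefer the simpler submodel.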
Other constraints on the forward pass are possible. For example, the user can specify that interactions are allowed only for certain input variables. Such constraints could make sense because of knowledge of the process that generated the data. No regression modeling technique is best for all situations. The guidelines below are intended to give an idea of the pros and cons of MARS, but there will be exceptions to the guidelines. It is useful to compare MARS to recursive partitioning and this is done below. (Recursive partitioning is also commonly called regression trees, decision trees, or CART; see the recursive partitioning article for details). Several free and commercial software packages are available for fitting MARS-type models.
https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_spline#Hinge_functions
In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings. Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran, MATLAB, Analytica, Octave, R, Cilk Plus, Julia, Perl Data Language (PDL), and Raku. In these languages, an operation that operates on entire arrays can be called a vectorized operation,[1] regardless of whether it is executed on a vector processor, which implements vector instructions. Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language one-liners that require several pages of object-oriented code. The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations. Kenneth E. Iverson described the rationale behind array programming (actually referring to APL) as follows:[2] most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician. The thesis is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation.
it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter. Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations. [...] Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted since a clear statement of an algorithm can usually be used as a basis from which one may easily derive a more efficient algorithm. The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation which implicitly breaks down data to its constituent parts (or scalar quantities), array orientation looks to group data and apply a uniform handling. Function rank is an important concept to array programming languages in general, by analogy to tensor rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers). The cross product operation is an example of a vector rank function because it operates on vectors, not scalars. Matrix multiplication is an example of a 2-rank function, because it operates on 2-dimensional objects (matrices). Collapse operators reduce the dimensionality of an input data array by one or more dimensions.
For example, summing over elements collapses the input array by 1 dimension. Array programming is very well suited to implicit parallelization, a topic of much current research. Further, Intel and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from MMX and continuing through SSSE3 and 3DNow!, which include rudimentary SIMD array capabilities. This has continued into the 2020s with instruction sets such as AVX-512, making modern CPUs sophisticated vector processors. Array processing is distinct from parallel processing in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones (MIMD) to be solved piecemeal by numerous processors. Processors with multiple cores and GPUs with thousands of general computing cores are common as of 2023. The canonical examples of array programming languages are Fortran, APL, and J. Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, MATLAB, GNU Octave, Scilab, FreeMat, Perl Data Language (PDL), R, Raku, S-Lang, SAC, Nial, ZPL, Futhark, and TI-BASIC. In scalar languages such as C and Pascal, operations apply only to single values, so a+b expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious. In array-based languages, for example in Fortran, the nested for-loop above can be written in array-format in one line, or alternatively, to emphasize the array nature of the objects. While scalar languages like C do not have native array programming elements as part of the language proper, this does not mean programs written in these languages never take advantage of the underlying techniques of vectorization (i.e., utilizing a CPU's vector-based instructions if it has them or by using multiple CPU cores).
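The scalar-versus-array contrast, and the collapse operation mentioned above, can be sketched in NumPy (a Python example standing in for the Fortran/C comparison in the text):

```python
import numpy as np

# Hypothetical sketch: a scalar-style loop versus a single array expression.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# Scalar style: explicit indexing and looping, as in C or Pascal.
looped = np.empty_like(a)
for i in range(len(a)):
    looped[i] = a[i] + b[i]

# Array style: one expression applies the operation to every element.
vectorized = a + b

# A collapse operator such as sum removes one dimension (rank 2 -> rank 1):
m = np.arange(6.0).reshape(2, 3)
col_sums = m.sum(axis=0)    # shape (3,)
```

The two additions produce identical results; the array form is both shorter and eligible for the SIMD/vector execution discussed above.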
Some C compilers like GCC at some optimization levels detect and vectorize sections of code that its heuristics determine would benefit from it. Another approach is given by the OpenMP API, which allows one to parallelize applicable sections of code by taking advantage of multiple CPU cores. In array languages, operations are generalized to apply to both scalars and arrays. Thus, a+b expresses the sum of two scalars if a and b are scalars, or the sum of two arrays if they are arrays. An array language simplifies programming but possibly at a cost known as the abstraction penalty.[3][4][5] Because the additions are performed in isolation from the rest of the coding, they may not produce the most efficient code. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated optimizing compiler would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or sub-routines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize overhead. The previous C code would become the following in the Ada language,[6] which supports array-programming syntax. APL uses single-character Unicode symbols with no syntactic sugar. This operation works on arrays of any rank (including rank 0), and on a scalar and an array. Dyalog APL extends the original language with augmented assignments: Analytica provides the same economy of expression as Ada. Dartmouth BASIC had MAT statements for matrix and array manipulation in its third edition (1966). Stata's matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element-by-element multiplication, subscripting, and one of Mata's many inverse matrix functions.
The implementation in MATLAB allows the same economy allowed by using the Fortran language. A variant of the MATLAB language is the GNU Octave language, which extends the original language with augmented assignments: Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, matrix inversion, and the numerical solution of systems of linear equations, even using the Moore–Penrose pseudoinverse.[7][8] The Nial example of the inner product of two arrays can be implemented using the native matrix multiplication operator, if a is a row vector of size [1 n] and b is a corresponding column vector of size [n 1]. By contrast, the entrywise product is implemented as: The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator (:), which reshapes a given matrix into a column vector, and the transpose operator ': The rasdaman query language is a database-oriented array-programming language. For example, two arrays could be added with the following query: The R language supports the array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar (which is, in fact, a one-element vector) and a vector: Raku supports the array paradigm via its metaoperators.[9] The following example demonstrates the addition of arrays @a and @b using the hyper-operator in conjunction with the plus operator. The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the (determinant of the) coefficient (matrix) A is not null then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A: A−1 (in both MATLAB and GNU Octave languages: A^-1). The following mathematical statements hold when A is a full rank square matrix: where == is the equivalence relational operator.
The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors). If the system is overdetermined – so that A has more rows than columns – the pseudoinverse A+ (in MATLAB and GNU Octave languages: pinv(A)) can replace the inverse A−1, as follows: However, these solutions are neither the most concise ones (e.g. there still remains the need to notationally differentiate overdetermined systems) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent a * x = b, for which the solution x = a^-1 * b would require two operations instead of the more efficient x = b / a. The problem is that generally matrix multiplications are not commutative as the extension of the scalar solution to the matrix case would require: The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness: This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.[10] Returning to the previous quotation of Iverson, the rationale behind it should now be evident: it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
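The left-division idea above has a direct NumPy analogue (a Python illustration standing in for the MATLAB syntax; the matrices are invented for the example): solving the system directly rather than forming the inverse.

```python
import numpy as np

# Hypothetical sketch: the analogue of MATLAB's A \ b in NumPy is
# np.linalg.solve, which avoids forming the inverse explicitly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_via_inverse = np.linalg.inv(A) @ b   # works, but two operations, like a^-1 * b
x_via_solve = np.linalg.solve(A, b)    # preferred, plays the role of A \ b

# Overdetermined system: least squares plays the role of pinv(A) * b.
A2 = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b2 = np.array([1.0, 2.0, 2.0])
x_ls, *_ = np.linalg.lstsq(A2, b2, rcond=None)
x_pinv = np.linalg.pinv(A2) @ b2       # same solution via the pseudoinverse
```

As the text notes for MATLAB, the solver route is both more concise in intent and typically more efficient and numerically stable than explicitly inverting.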
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for exploration.

The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In C++ several linear algebra libraries exploit the language's ability to overload operators. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as the NumPy extension library to Python, Armadillo, and Blitz++ libraries do.[11][12]
https://en.wikipedia.org/wiki/Array_programming
Listed here are notable end-user computer applications intended for use with numerical or data analysis:
https://en.wikipedia.org/wiki/List_of_numerical-analysis_software
Theano is a Python library and optimizing compiler for manipulating and evaluating mathematical expressions, especially matrix-valued ones.[2] In Theano, computations are expressed using a NumPy-esque syntax and compiled to run efficiently on either CPU or GPU architectures.

Theano is an open source project[3] primarily developed by the Montreal Institute for Learning Algorithms (MILA) at the Université de Montréal.[4] The name of the software references the ancient philosopher Theano, long associated with the development of the golden mean.

On 28 September 2017, Pascal Lamblin posted a message from Yoshua Bengio, head of MILA: major development would cease after the 1.0 release due to competing offerings by strong industrial players.[5] Theano 1.0.0 was then released on 15 November 2017.[6]

On 17 May 2018, Chris Fonnesbeck wrote on behalf of the PyMC development team[7] that the PyMC developers would officially assume control of Theano maintenance once the MILA development team stepped down. On 29 January 2021, they started using the name Aesara for their fork of Theano.[8]

On 29 November 2022, the PyMC development team announced that the PyMC developers would fork the Aesara project under the name PyTensor.[9]

The following code is the original Theano example. It defines a computational graph with two scalars a and b of type double and an operation between them (addition), and then creates a Python function f that does the actual computation.[10]
https://en.wikipedia.org/wiki/Theano_(software)
Matplotlib (a portmanteau of MATLAB, plot, and library[3]) is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK. There is also a procedural "pylab" interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged.[4] SciPy makes use of Matplotlib.

Matplotlib was originally written by John D. Hunter. Since then it has had an active development community[5] and is distributed under a BSD-style license. Michael Droettboom was nominated as Matplotlib's lead developer shortly before John Hunter's death in August 2012[6] and was further joined by Thomas Caswell.[7][8] Matplotlib is a NumFOCUS fiscally sponsored project.[9]

Matplotlib is widely used in scientific research as a tool for data visualization. Researchers across disciplines such as physics, astronomy, engineering, and biology use Matplotlib to create publication-quality graphs and plots for their analyses and papers. The library has been used in well-known scientific projects; for example, the Event Horizon Telescope collaboration used Matplotlib to produce visualizations during the effort to create the first image of a black hole.[10] Matplotlib also underpins the plotting functionality of many scientific Python libraries (for instance, pandas uses Matplotlib as its default backend for plotting). Its importance to the scientific community has been acknowledged by institutions such as NASA, which in 2024 awarded a grant to support Matplotlib's continued development as part of an initiative to fund widely used open-source scientific software.[11]

In education and data science, Matplotlib is frequently used to teach programming and data visualization.
It integrates with Jupyter Notebook, allowing students and instructors to generate inline plots and interactively explore data within a notebook environment.[12] Many educational institutions incorporate Matplotlib into their curricula for teaching STEM concepts,[13] and it is widely featured in tutorials, workshops, and open online courses as a primary plotting library. This broad adoption across both academia and industry has helped establish Matplotlib as a standard component of scientific and educational visualization workflows.

Pyplot is a Matplotlib module that provides a MATLAB-like interface.[14] Matplotlib is designed to be as usable as MATLAB, with the ability to use Python, and the advantage of being free and open-source.

Matplotlib supports various types of two-dimensional and three-dimensional plots. The support for two-dimensional plots is robust, including line plots, histograms, scatter plots, polar plots, box plots, pie charts, bar graphs, and heat maps. The support for three-dimensional plots was added later and, while good, is not as robust as for two-dimensional plots; three-dimensional line plots, scatter plots, and surface plots are available. The appropriate plot type can be determined by considering a few factors:

- Comparing a relationship between different variables: line plot, heat map, contour plot, scatter plot
- Looking at the distribution of a dataset: box plot, histogram
- Comparing different categories: box plot, pie chart, bar graph

Matplotlib's animation[15] capabilities are intended for visualizing how certain data change. However, one can use the functionality in any way required. These animations are defined as a function of frame number (or time). In other words, one defines a function that takes a frame number as input and defines/updates the Matplotlib figure based on it.
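A minimal pyplot sketch may help; the data, labels, and output filename below are illustrative, and the Agg backend is selected so the example renders without a display.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: renders to files, no GUI needed
import matplotlib.pyplot as plt

# one line plot with labelled axes, saved to a PNG file
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9], label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("squares.png")
```

The same figure could equally be built through the stateful plt.plot(...) interface that mimics MATLAB; the object-oriented fig/ax style shown here is the one the project recommends.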
The time at the beginning of a given frame, measured from the start of the animation, can be calculated from the frame number as

time = (frame-number − 1) / FPS

Several toolkits are available which extend Matplotlib functionality. Some are separate downloads, others ship with the Matplotlib source code but have external dependencies.[16]
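The frame-to-time mapping above can be sketched with matplotlib.animation.FuncAnimation; the frame rate, data, and 1-based frame numbering below are assumptions for illustration.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import numpy as np

fps = 30  # assumed frame rate

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
(line,) = ax.plot(x, np.sin(x))

def update(frame):
    # time elapsed at the start of this frame, per the formula above
    # (frames assumed to be numbered from 1)
    t = (frame - 1) / fps
    line.set_ydata(np.sin(x + t))  # shift the wave as time advances
    return (line,)

anim = FuncAnimation(fig, update, frames=60, interval=1000 / fps)
```

Each call to update receives only a frame number; converting it to a time value as shown keeps the animation's speed independent of how many frames are rendered.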
https://en.wikipedia.org/wiki/Matplotlib
Fortran (/ˈfɔːrtræn/; formerly FORTRAN) is a third-generation, compiled, imperative programming language that is especially suited to numeric computation and scientific computing. Fortran was originally developed by IBM, with a reference manual released in 1956;[3] however, the first compilers only began to produce accurate code two years later.[4] Fortran computer programs have been written to support scientific and engineering applications, such as numerical weather prediction, finite element analysis, computational fluid dynamics, plasma physics, geophysics, computational physics, crystallography and computational chemistry. It is a popular language for high-performance computing[5] and is used for programs that benchmark and rank the world's fastest supercomputers.[6][7]

Fortran has evolved through numerous versions and dialects. In 1966, the American National Standards Institute (ANSI) developed a standard for Fortran to limit proliferation of compilers using slightly different syntax.[8] Successive versions have added support for a character data type (Fortran 77), structured programming, array programming, modular programming, generic programming (Fortran 90), parallel computing (Fortran 95), object-oriented programming (Fortran 2003), and concurrent programming (Fortran 2008).
Since April 2024, Fortran has ranked among the top ten languages in the TIOBE index, a measure of the popularity of programming languages.[9]

The first manual for FORTRAN describes it as a Formula Translating System, and printed the name in small caps, Fortran.[10]: p.2 [11] Other sources suggest the name stands for Formula Translator[12] or Formula Translation.[13] Early IBM computers did not support lowercase letters, and the names of versions of the language through FORTRAN 77 were usually spelled in all-uppercase.[14] FORTRAN 77 was the last version in which the Fortran character set included only uppercase letters.[15] The official language standards for Fortran have referred to the language as "Fortran" with initial caps since Fortran 90.[citation needed]

In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer.[11]: 69 Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Harold Stern, Lois Haibt, and David Sayre.[16] Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952.[17]

A draft specification for The IBM Mathematical Formula Translating System was completed by November 1954.[11]: 71 The first manual for FORTRAN appeared in October 1956,[10][11]: 72 with the first FORTRAN compiler delivered in April 1957.[11]: 75 Fortran produced efficient enough code for assembly language programmers to accept a high-level programming language replacement.[18]

John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy.
I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."[19]

The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.[20]

By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed.

FORTRAN was provided for the IBM 1401 computer by an innovative 63-phase compiler that ran entirely in its core memory of only 8000 (six-bit) characters. The compiler could be run from tape, or from a 2200-card deck; it used no further tape or disk storage. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines.[21] This article was reprinted, edited, in both editions of Anatomy of a Compiler[22] and in the IBM manual "Fortran Specifications and Operating Procedures, IBM 1401".[23] The executable form was not entirely machine language; rather, floating-point arithmetic, subscripting, input/output, and function references were interpreted, preceding UCSD Pascal P-code by two decades. GOTRAN, a simplified, interpreted version of FORTRAN I (with only 12 types of statements, not 32) for "load and go" operation, was available (at least for the early IBM 1620 computer).[24] Modern Fortran, and almost all later versions, are fully compiled, as done for other high-performance languages.
The development of Fortran paralleled the early evolution of compiler technology, and many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for Fortran programs.

The initial release of FORTRAN for the IBM 704[10] contained 32 types of statements, including:

The arithmetic IF statement was reminiscent of (but not readily implementable by) a three-way comparison instruction (CAS—Compare Accumulator with Storage) available on the 704. The statement provided the only way to compare numbers—by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by "logical" facilities introduced in FORTRAN IV.

The FREQUENCY statement was used originally (and optionally) to give branch probabilities for the three branch cases of the arithmetic IF statement. It could also be used to suggest how many iterations a DO loop might run. The first FORTRAN compiler used this weighting to perform, at compile time, a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory—a very sophisticated optimization for its time. The Monte Carlo technique is documented in Backus et al.'s paper on this original implementation, The FORTRAN Automatic Coding System:

The fundamental unit of program is the basic block; a basic block is a stretch of program which has one entry point and one exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link.
This table is obtained by running the program once in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided.[16]

The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code on its console. That code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem.[10]: p.19–20 [25] Later, an error-handling subroutine to handle user errors such as division by zero, developed by NASA,[26] was incorporated, informing users of which line of code contained the error.

Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards, one line to a card. The resulting deck of cards would be fed into a card reader to be compiled. Punched card codes included no lowercase letters or many special characters, and special versions of the IBM 026 keypunch were offered that would correctly print the repurposed special characters used in FORTRAN.

Reflecting punched card input practice, Fortran programs were originally written in a fixed-column format, with the first 72 columns read into twelve 36-bit words. A letter "C" in column 1 caused the entire card to be treated as a comment and ignored by the compiler. Otherwise, the columns of the card were divided into four fields:

Columns 73 to 80 could therefore be used for identification information, such as punching a sequence number or text, which could be used to re-order cards if a stack of cards was dropped; though in practice this was reserved for stable, production programs. An IBM 519 could be used to copy a program deck and add sequence numbers.
Some early compilers, e.g., the IBM 650's, had additional restrictions due to limitations on their card readers.[28] Keypunches could be programmed to tab to column 7 and skip out after column 72. Later compilers relaxed most fixed-format restrictions, and the requirement was eliminated in the Fortran 90 standard.

Within the statement field, whitespace characters (blanks) were ignored outside a text literal. This allowed omitting spaces between tokens for brevity or including spaces within identifiers for clarity. For example, AVG OF X was a valid identifier, equivalent to AVGOFX, and 101010DO101I=1,101 was a valid statement, equivalent to 10101 DO 101 I = 1, 101, because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period.

Hollerith strings, originally allowed only in FORMAT and DATA statements, were prefixed by a character count and the letter H (e.g., 26HTHIS IS ALPHANUMERIC DATA.), allowing blanks to be retained within the character string. Miscounts were a problem.

IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values, with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced:[29]

Over the next few years, FORTRAN II added support for the DOUBLE PRECISION and COMPLEX data types.

Early FORTRAN compilers supported no recursion in subroutines. Early computer architectures supported no concept of a stack, and when they did directly support subroutine calls, the return location was often stored in one fixed location adjacent to the subroutine code (e.g.
the IBM 1130) or a specific machine register (IBM 360 et seq.), which only allows recursion if a stack is maintained by software and the return address is stored on the stack before the call is made and restored after the call returns. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and the Burroughs mainframes, designed with recursion built-in, did so by default. It became a standard in Fortran 90 via the new keyword RECURSIVE.[30]

This program, for Heron's formula, reads data on a tape reel containing three 5-digit integers A, B, and C as input. There are no "type" declarations available: variables whose name starts with I, J, K, L, M, or N are "fixed-point" (i.e. integers), otherwise floating-point. Since integers are to be processed in this example, the names of the variables start with the letter "I". The name of a variable must start with a letter and can continue with both letters and digits, up to a limit of six characters in FORTRAN II. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number occupying ten spaces along the line of output and showing 2 digits after the decimal point, the .2 in F10.2 of the FORMAT statement with label 601.

IBM also developed a FORTRAN III in 1958 that allowed for inline assembly code among other features; however, this version was never released as a product. Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine, as well as Boolean expression support.[11]: 76 Early versions of FORTRAN provided by other vendors suffered from the same disadvantage.

IBM began development of FORTRAN IV in 1961 as a result of customer demands.
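Since the original fixed-form FORTRAN II listing is not reproduced in this text, the computation the Heron program performs can be sketched in modern terms; the Python below mirrors the described behavior (triangle-inequality check, Heron's formula, F10.2-style output) but is an illustration, not the original program.

```python
import math

def heron_area(a, b, c):
    """Area of a triangle with sides a, b, c, via Heron's formula."""
    # triangle-inequality check, mirroring the program's "STOP 1" error exit
    if a + b <= c or a + c <= b or b + c <= a:
        raise ValueError("sides cannot form a triangle (STOP 1)")
    s = (a + b + c) / 2.0  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# F10.2-style field: ten characters wide, two digits after the decimal point
print(f"{heron_area(3, 4, 5):10.2f}")  # '      6.00'
```

The format string 10.2f corresponds directly to the F10.2 edit descriptor mentioned above: a ten-column field with two decimal places.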
FORTRAN IV removed the machine-dependent features of FORTRAN II (such as READ INPUT TAPE), while adding new features such as a LOGICAL data type, logical Boolean expressions, and the logical IF statement as an alternative to the arithmetic IF statement. Type declarations were added, along with an IMPLICIT statement to override earlier conventions that variables are INTEGER if their name begins with I, J, K, L, M, or N, and REAL otherwise.[31]: pp.70, 71 [32]: p.6-9

FORTRAN IV was eventually released in 1962, first for the IBM 7030 ("Stretch") computer, followed by versions for the IBM 7090, IBM 7094, and later for the IBM 1401 in 1966.[33] By 1965, FORTRAN IV was supposed to be compliant with the standard being developed by the American Standards Association X3.4.3 FORTRAN Working Group.[34]

Between 1966 and 1968, IBM offered several FORTRAN IV compilers for its System/360, each named by letters that indicated the minimum amount of memory the compiler needed to run.[35] The letters (F, G, H) matched the codes used with System/360 model numbers to indicate memory size, each letter increment being a factor of two larger:[36]: p. 5

Digital Equipment Corporation maintained DECSYSTEM-10 Fortran IV (F40) for the PDP-10 from 1967 to 1975.[32] Compilers were also available for the UNIVAC 1100 series and the Control Data 6000 series and 7000 series systems.[37]

At about this time FORTRAN IV had started to become an important educational tool, and implementations such as the University of Waterloo's WATFOR and WATFIV were created to simplify the complex compile and link processes of earlier compilers.

In the FORTRAN IV programming environment of the era, except for that used on Control Data Corporation (CDC) systems, only one instruction was placed per line. The CDC version allowed for multiple instructions per line if separated by a $ (dollar) character. The FORTRAN sheet was divided into four fields, as described above.
Two compilers of the time, IBM "G" and UNIVAC, allowed comments to be written on the same line as instructions, separated by a special character, the "master space": V (perforations 7 and 8) for UNIVAC and perforations 12/11/0/7/8/9 (hexadecimal FF) for IBM. These comments were not to be inserted in the middle of continuation cards.[32][37]

Perhaps the most significant development in the early history of FORTRAN was the decision by the American Standards Association (now American National Standards Institute (ANSI)) to form a committee sponsored by the Business Equipment Manufacturers Association (BEMA) to develop an American Standard Fortran. The resulting two standards, approved in March 1966, defined two languages: FORTRAN (based on FORTRAN IV, which had served as a de facto standard), and Basic FORTRAN (based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard, officially denoted X3.9-1966, became known as FORTRAN 66 (although many continued to term it FORTRAN IV, the language on which the standard was largely based). FORTRAN 66 effectively became the first industry-standard version of FORTRAN. FORTRAN 66 included:

The above Fortran II version of the Heron program needs several modifications to compile as a Fortran 66 program. Modifications include using the more machine-independent versions of the READ and WRITE statements, and removal of the unneeded FLOATF type conversion functions. Though not required, the arithmetic IF statements can be rewritten to use logical IF statements and expressions in a more structured fashion.

After the release of the FORTRAN 66 standard, compiler vendors introduced several extensions to Standard Fortran, prompting ANSI committee X3J3 in 1969 to begin work on revising the 1966 standard, under sponsorship of CBEMA, the Computer Business Equipment Manufacturers Association (formerly BEMA).
Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, called FORTRAN 77 and officially denoted X3.9-1978, added a number of significant features to address many of the shortcomings of FORTRAN 66:

In this revision of the standard, a number of features were removed or altered in a manner that might invalidate formerly standard-conforming programs. (Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.) While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the prior standard but rarely used, a small number of specific capabilities were deliberately removed, such as:

A Fortran 77 version of the Heron program requires no modifications to the Fortran 66 version. However, this example demonstrates additional cleanup of the I/O statements, including using list-directed I/O, and replacing the Hollerith edit descriptors in the FORMAT statements with quoted strings. It also uses structured IF and END IF statements, rather than GOTO/CONTINUE.

The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect.

An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978.[38] This specification, developed by the U.S. Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
The IEEE 1003.9 POSIX standard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls.[39] Over 100 calls were defined in the document, allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner.

The much-delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as ISO/IEC standard 1539:1991 in 1991 and as an ANSI standard in 1992. In addition to changing the official spelling from FORTRAN to Fortran, this major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard:

Unlike the prior revision, Fortran 90 removed no features.[40] Any standard-conforming FORTRAN 77 program was also standard-conforming under Fortran 90, and either standard should have been usable to define its behavior.

A small set of features were identified as "obsolescent" and were expected to be removed in a future standard. All of the functionality of these early-version features can be performed by newer Fortran features. Some are kept to simplify porting of old programs, but many were deleted in Fortran 95.

Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification. A number of intrinsic functions were extended (for example, a dim argument was added to the maxloc intrinsic).
Several features noted in Fortran 90 as "obsolescent" were removed from Fortran 95:

An important supplement to Fortran 95 was the ISO technical report TR-15581: Enhanced Data Type Facilities, informally known as the Allocatable TR. This specification defined enhanced use of ALLOCATABLE arrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses include ALLOCATABLE arrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLE arrays are preferable to POINTER-based arrays because ALLOCATABLE arrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility of memory leakage. In addition, elements of allocatable arrays are contiguous, and aliasing is not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.[41])

Another important supplement to Fortran 95 was the ISO technical report TR-15580: Floating-point exception handling, informally known as the IEEE TR. This specification defined support for IEEE floating-point arithmetic and floating-point exception handling.

In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1:1997), the Fortran 95 language also included two optional modules, which, together, compose the multi-part International Standard (ISO/IEC 1539). According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard".
The language defined by the twenty-first-century standards, in particular because of its incorporation of object-oriented programming support and subsequently Coarray Fortran, is often referred to as 'Modern Fortran', and the term is increasingly used in the literature.[42]

Fortran 2003, officially published as ISO/IEC 1539-1:2004, was a major revision introducing many new features.[43] A comprehensive summary of the new features of Fortran 2003 is available at the official website of the Fortran Working Group (ISO/IEC JTC1/SC22/WG5).[44] From that article, the major enhancements for this revision include:

An important supplement to Fortran 2003 was the ISO technical report TR-19767: Enhanced module facilities in Fortran. This report provided submodules, which make Fortran modules more similar to Modula-2 modules. They are similar to Ada private child sub-units. This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades.

ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010.[45][46] As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing some new capabilities. The new capabilities include:

The Final Draft International Standard (FDIS) is available as document N1830.[47]

A supplement to Fortran 2008 is the International Organization for Standardization (ISO) Technical Specification (TS) 29113 on Further Interoperability of Fortran with C,[48][49] which was submitted to ISO in May 2012 for approval. The specification adds support for accessing the array descriptor from C and allows ignoring the type and rank of arguments.
The Fortran 2018 revision of the language was earlier referred to as Fortran 2015.[50] It was a significant revision and was released on November 28, 2018.[51] Fortran 2018 incorporates two previously published Technical Specifications:

Additional changes and new features include support for ISO/IEC/IEEE 60559:2011 (the version of the IEEE floating-point standard before the latest minor revision, IEEE 754-2019), hexadecimal input/output, IMPLICIT NONE enhancements, and other changes.[54][55][56][57]

Fortran 2018 deleted the arithmetic IF statement. It also deleted non-block DO constructs – loops which do not end with an END DO or CONTINUE statement. These had been an obsolescent part of the language since Fortran 90. New obsolescences are: COMMON and EQUIVALENCE statements and the BLOCK DATA program unit, labelled DO loops, specific names for intrinsic functions, and the FORALL statement and construct.

Fortran 2023 (ISO/IEC 1539-1:2023) was published in November 2023 and can be purchased from the ISO.[58] Fortran 2023 is a minor extension of Fortran 2018 that focuses on correcting errors and omissions in Fortran 2018. It also adds some small features, including an enumerated type capability.

A full description of the Fortran language features brought by Fortran 95 is covered in the related article, Fortran 95 language features. The language versions defined by later standards are often referred to collectively as 'Modern Fortran' and are described in the literature.

Although a 1968 journal article by the authors of BASIC already described FORTRAN as "old-fashioned",[59] programs have been written in Fortran for many decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities.[60] Jay Pasachoff wrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change to Pascal, Modula-2, or whatever."[61] In 1993, Cecil E.
Leith called FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope".[62]

It is the primary language for some of the most intensive supercomputing tasks, such as in astronomy, climate modeling, computational chemistry, computational economics, computational fluid dynamics, computational physics, data analysis,[63] hydrological modeling, numerical linear algebra and numerical libraries (LAPACK, IMSL and NAG), optimization, satellite simulation, structural engineering, and weather prediction.[64] Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of the SPEC benchmarks (e.g., CFP2006, CFP2017), are written in Fortran. Math algorithms are well documented in Numerical Recipes.

Apart from this, more modern codes in computational science generally use large program libraries, such as METIS for graph partitioning, PETSc or Trilinos for linear algebra capabilities, deal.II or FEniCS for mesh and finite element support, and other generic libraries. Since the early 2000s, many of the widely used support libraries have also been implemented in C and, more recently, in C++. On the other hand, high-level languages such as the Wolfram Language, MATLAB, Python, and R have become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs are also written in such higher-level scripting languages. For this reason, facilities for interoperation with C were added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages.

Portability was a problem in the early days because there was no agreed-upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability.
The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier,[65][66] it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions.[67][68] Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by the PORT library.[66] The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of the IEEE 754 standard for binary floating-point arithmetic has essentially removed this problem. Access to the computing environment (e.g., the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard. Large collections of library software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by the incorporation of C interoperability into the 2003 standard. It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to a preprocessor. Until the Fortran 66 standard was developed, each compiler supported its own variant of Fortran.
Some were more divergent from the mainstream than others. The first Fortran compiler set a high standard of efficiency for compiled code. This goal made it difficult to create a compiler, so it was usually done by the computer manufacturers to support hardware sales. This left an important niche: compilers that were fast and provided good diagnostics for the programmer (often a student). Examples include Watfor, Watfiv, PUFFT, and on a smaller scale, FORGO, Wits Fortran, and Kingston Fortran 2. Fortran 5 was marketed by Data General Corp from the early 1970s to the early 1980s, for the Nova, Eclipse, and MV line of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66. FORTRAN V was distributed by Control Data Corporation in 1968 for the CDC 6600 series. The language was based upon FORTRAN IV.[69] Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac Fortran V was Athena FORTRAN. Specific variants produced by the vendors of high-performance scientific computers (e.g., Burroughs, Control Data Corporation (CDC), Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine code instructions to keep multiple internal arithmetic units busy simultaneously. Another example is CFD, a special variant of FORTRAN designed specifically for the ILLIAC IV supercomputer, running at NASA's Ames Research Center. IBM Research Labs also developed an extended FORTRAN-based language called VECTRAN for processing vectors and matrices. Object-Oriented Fortran was an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel.
It was available for Solaris, IRIX, NeXTSTEP, iPSC, and nCUBE, but is no longer supported. Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards. The major remaining extension is OpenMP, which is a cross-platform extension for shared-memory programming. One new extension, Coarray Fortran, is intended to support parallel programming. FOR TRANSIT was the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie in the late 1950s.[70] The following comment appears in the IBM Reference Manual (FOR TRANSIT Automatic Coding System C28-4038, Copyright 1957, 1959 by IBM): The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704. Up to ten subroutines could be used in one program. FOR TRANSIT statements were limited to columns 7 through 56 only. Punched cards were used for input and output on the IBM 650. Three passes were required: first to translate the source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards). Two versions existed for the 650s with a 2000-word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for the IBM 533 card reader/punch control panel.
Prior to FORTRAN 77, many preprocessors were commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler.[71] These preprocessors would typically support structured programming, variable names longer than six characters, additional data types, conditional compilation, and even macro capabilities. Popular preprocessors included EFL, FLECS, iftran, MORTRAN, SFtran, S-Fortran, Ratfor, and Ratfiv. EFL, Ratfor and Ratfiv, for example, implemented C-like languages, outputting preprocessed code in standard FORTRAN 66. The PFORT preprocessor was often used to verify that code conformed to a portable subset of the language. Despite advances in the Fortran language, preprocessors continue to be used for conditional compilation and macro substitution. One of the earliest versions of FORTRAN, introduced in the '60s, was popularly used in colleges and universities. Developed, supported, and distributed by the University of Waterloo, WATFOR was based largely on FORTRAN IV. A student using WATFOR could submit their batch FORTRAN job and, if there were no syntax errors, the program would move straight to execution. This simplification allowed students to concentrate on their program's syntax and semantics, or execution logic flow, rather than dealing with submission Job Control Language (JCL), the successive compile/link-edit/execute steps, or other complexities of the mainframe/minicomputer environment. A downside to this simplified environment was that WATFOR was not a good choice for programmers needing the expanded abilities of their host processor(s); e.g., WATFOR typically had very limited access to I/O devices. WATFOR was succeeded by WATFIV and its later versions. LRLTRAN was developed at the Lawrence Radiation Laboratory to provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming.
The distribution included the Livermore Time Sharing System (LTSS) operating system. The Fortran 95 standard includes an optional Part 3 which defines an optional conditional compilation capability. This capability is often referred to as "CoCo". Many Fortran compilers have integrated subsets of the C preprocessor into their systems. SIMSCRIPT is an application-specific Fortran preprocessor for modeling and simulating large discrete systems. The F programming language was designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as the EQUIVALENCE statement. F retains the array features added in Fortran 90, and removes control statements that were made obsolete by structured programming constructs added to both FORTRAN 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing".[72] Essential Lahey Fortran 90 (ELF90) was a similar subset. Lahey and Fujitsu teamed up to create Fortran for the Microsoft .NET Framework.[73] Silverfrost FTN95 is also capable of creating .NET code.[74] The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence of DO loops and IF/THEN statements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively. During the same FORTRAN standards committee meeting at which the name "FORTRAN 77" was chosen, a satirical technical proposal was incorporated into the official distribution bearing the title "Letter O Considered Harmful".
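The Fortran listing itself does not survive in this extract. As a rough analogue, with hypothetical data standing in for the interactive input, the same whole-array style (no explicit loops or conditionals) can be sketched in Python with NumPy:

```python
import numpy as np

# Hypothetical data standing in for the values the Fortran program reads interactively.
points = np.array([10.0, 20.0, 30.0, 40.0])

# Whole-array operations, mirroring Fortran 90 array syntax such as
# SUM(points) / SIZE(points): no explicit DO loops or IF/THEN tests.
average = points.sum() / points.size
above_average = points[points > average]  # array section, like Fortran's PACK with a mask

print(average)         # 25.0
print(above_average)   # [30. 40.]
```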
This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero, by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notorious GO TO statement as before. (Troublesome FORMAT statements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway".[75][unreliable source?][76] When X3J3 debated whether the minimum trip count for a DO loop should be zero or one in Fortran 77, Loren Meissner suggested a minimum trip count of two, reasoning (tongue-in-cheek) that if it were less than two, then there would be no reason for a loop. When assumed-length arrays were being added, there was a dispute as to the appropriate character to separate upper and lower bounds. In a comment examining these arguments, Walt Brainerd penned an article entitled "Astronomy vs. Gastroenterology" because some proponents had suggested using the star or asterisk ("*"), while others favored the colon (":").[citation needed] Variable names beginning with the letters I–N have a default type of integer, while variables starting with any other letters default to real, although programmers could override the defaults with an explicit declaration.[77] This led to the joke: "In FORTRAN, GOD is REAL (unless declared INTEGER)."
https://en.wikipedia.org/wiki/Fortran
In computing, row-major order and column-major order are methods for storing multidimensional arrays in linear storage such as random access memory. The difference between the orders lies in which elements of an array are contiguous in memory. In row-major order, the consecutive elements of a row reside next to each other, whereas the same holds true for consecutive elements of a column in column-major order. While the terms allude to the rows and columns of a two-dimensional array, i.e. a matrix, the orders can be generalized to arrays of any dimension by noting that the terms row-major and column-major are equivalent to lexicographic and colexicographic orders, respectively. It is also worth noting that matrices, being commonly represented as collections of row or column vectors, are under this approach effectively stored as consecutive vectors or consecutive vector components. Such ways of storing data are referred to as AoS and SoA, respectively. Data layout is critical for correctly passing arrays between programs written in different programming languages. It is also important for performance when traversing an array because modern CPUs process sequential data more efficiently than nonsequential data. This is primarily due to CPU caching, which exploits spatial locality of reference.[1] In addition, contiguous access makes it possible to use SIMD instructions that operate on vectors of data. In some media such as magnetic-tape data storage, accessing sequentially is orders of magnitude faster than nonsequential access.[citation needed] The terms row-major and column-major stem from the terminology related to ordering objects. A general way to order objects with many attributes is to first group and order them by one attribute, and then, within each such group, group and order them by another attribute, etc. If more than one attribute participates in ordering, the first would be called major and the last minor.
If two attributes participate in ordering, it is sufficient to name only the major attribute. In the case of arrays, the attributes are the indices along each dimension. For matrices in mathematical notation, the first index indicates the row, and the second indicates the column, e.g., given a matrix A, the entry $a_{1,2}$ is in its first row and second column. This convention is carried over to the syntax in programming languages,[2] although often with indexes starting at 0 instead of 1.[3] Even though the row is indicated by the first index and the column by the second index, no grouping order between the dimensions is implied by this. The choice of how to group and order the indices, either by row-major or column-major methods, is thus a matter of convention. The same terminology can be applied to even higher-dimensional arrays. Row-major grouping starts from the leftmost index and column-major from the rightmost index, leading to lexicographic and colexicographic (or colex) orders, respectively. For example, a 2 × 3 array could be stored in two possible ways: row by row ($a_{1,1}, a_{1,2}, a_{1,3}, a_{2,1}, a_{2,2}, a_{2,3}$, row-major) or column by column ($a_{1,1}, a_{2,1}, a_{1,2}, a_{2,2}, a_{1,3}, a_{2,3}$, column-major). Programming languages handle this in different ways. In C, multidimensional arrays are stored in row-major order, and the array indexes are written row-first (lexicographical access order). On the other hand, in Fortran, arrays are stored in column-major order, while the array indexes are still written row-first (colexicographical access order). Note how the use of A[i][j] with multi-step indexing as in C, as opposed to a neutral notation like A(i,j) as in Fortran, almost inevitably implies row-major order for syntactic reasons, so to speak, because it can be rewritten as (A[i])[j], and the A[i] row part can even be assigned to an intermediate variable that is then indexed in a separate expression. (No other implications should be assumed, e.g., Fortran is not column-major simply because of its notation, and even the above implication could intentionally be circumvented in a new language.)
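The original C and Fortran snippets are not reproduced in this extract; the two linearizations can instead be illustrated with NumPy, whose `order` argument selects row-major ("C") or column-major ("F", for Fortran) layout:

```python
import numpy as np

# The 2 x 3 matrix used as the running example.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Row-major (C) order: rows are contiguous in memory.
row_major = A.flatten(order="C")
print(row_major)  # [1 2 3 4 5 6]

# Column-major (Fortran) order: columns are contiguous in memory.
col_major = A.flatten(order="F")
print(col_major)  # [1 4 2 5 3 6]
```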
To use column-major order in a row-major environment, or vice versa, for whatever reason, one workaround is to assign non-conventional roles to the indexes (using the first index for the column and the second index for the row), and another is to bypass language syntax by explicitly computing positions in a one-dimensional array. Of course, deviating from convention probably incurs a cost that increases with the degree of necessary interaction with conventional language features and other code, not only in the form of increased vulnerability to mistakes (forgetting to also invert matrix multiplication order, reverting to convention during code maintenance, etc.), but also in the form of having to actively rearrange elements, all of which have to be weighed against any original purpose such as increasing performance. Running the loop row-wise is preferred in row-major languages like C, and vice versa for column-major languages. Programming languages or their standard libraries that support multi-dimensional arrays typically have a native row-major or column-major storage order for these arrays. Row-major order is used in C/C++/Objective-C (for C-style arrays), PL/I,[4] Pascal,[5] Speakeasy,[citation needed] and SAS.[6] Column-major order is used in Fortran,[7][8] IDL,[7] MATLAB,[8] GNU Octave, Julia,[9] S, S-PLUS,[10] R,[11] Scilab,[12] Yorick, and Rasdaman.[13] A typical alternative for dense array storage is to use Iliffe vectors, which typically store pointers to elements in the same row contiguously (like row-major order), but not the rows themselves. They are used in (ordered by age): Java,[14] C#/CLI/.NET, Scala,[15] and Swift.
Even less dense is to use lists of lists, e.g., in Python,[16] and in the Wolfram Language of Wolfram Mathematica.[17] An alternative approach uses tables of tables, e.g., in Lua.[18] Support for multi-dimensional arrays may also be provided by external libraries, which may even support arbitrary orderings, where each dimension has a stride value, and row-major or column-major are just two possible resulting interpretations. Row-major order is the default in NumPy[19] (for Python). Column-major order is the default in Eigen[20] and Armadillo (both for C++). A special case would be OpenGL (and OpenGL ES) for graphics processing. Since "recent mathematical treatments of linear algebra and related fields invariably treat vectors as columns," designer Mark Segal decided to substitute this for the convention in predecessor IRIS GL, which was to write vectors as rows; for compatibility, transformation matrices would still be stored in vector-major (= row-major) rather than coordinate-major (= column-major) order, and he then used the trick "[to] say that matrices in OpenGL are stored in column-major order".[21] This was really only relevant for presentation, because matrix multiplication was stack-based and could still be interpreted as post-multiplication, but, worse, reality leaked through the C-based API because individual elements would be accessed as M[vector][coordinate] or, effectively, M[column][row], which unfortunately muddled the convention that the designer sought to adopt, and this was even preserved in the OpenGL Shading Language that was later added (although this also makes it possible to access coordinates by name instead, e.g., M[vector].y). As a result, many developers will now simply declare that having the column as the first index is the definition of column-major, even though this is clearly not the case with a real column-major language like Fortran. Torch (for Lua) changed from column-major[22] to row-major[23] default order.
As exchanging the indices of an array is the essence of array transposition, an array stored as row-major but read as column-major (or vice versa) will appear transposed. As actually performing this rearrangement in memory is typically an expensive operation, some systems provide options to specify individual matrices as being stored transposed. The programmer must then decide whether or not to rearrange the elements in memory, based on the actual usage (including the number of times that the array is reused in a computation). For example, the Basic Linear Algebra Subprograms functions are passed flags indicating which arrays are transposed.[24] The concept generalizes to arrays with more than two dimensions. For a d-dimensional $N_1 \times N_2 \times \cdots \times N_d$ array with dimensions $N_k$ ($k = 1 \ldots d$), a given element of this array is specified by a tuple $(n_1, n_2, \ldots, n_d)$ of $d$ (zero-based) indices $n_k \in [0, N_k - 1]$. In row-major order, the last dimension is contiguous, so that the memory offset of this element is given by:

$n_d + N_d \cdot (n_{d-1} + N_{d-1} \cdot (n_{d-2} + N_{d-2} \cdot (\cdots + N_2 n_1) \cdots)) = \sum_{k=1}^{d} \left( \prod_{\ell=k+1}^{d} N_\ell \right) n_k$

In column-major order, the first dimension is contiguous, so that the memory offset of this element is given by:

$n_1 + N_1 \cdot (n_2 + N_2 \cdot (n_3 + N_3 \cdot (\cdots + N_{d-1} n_d) \cdots)) = \sum_{k=1}^{d} \left( \prod_{\ell=1}^{k-1} N_\ell \right) n_k$

where the empty product is the multiplicative identity element, i.e., $\prod_{\ell=1}^{0} N_\ell = \prod_{\ell=d+1}^{d} N_\ell = 1$. For a given order, the stride in dimension k is given by the multiplication value in parentheses before index $n_k$ in the right-hand side summations above.
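The two offset formulas can be sketched as short functions (the function names are ours) and cross-checked against NumPy's `ravel_multi_index`:

```python
import numpy as np

def row_major_offset(idx, shape):
    """Offset when the LAST dimension is contiguous (Horner evaluation of the nested form)."""
    off = 0
    for n_k, N_k in zip(idx, shape):
        off = off * N_k + n_k
    return off

def col_major_offset(idx, shape):
    """Offset when the FIRST dimension is contiguous (same recursion, dimensions reversed)."""
    off = 0
    for n_k, N_k in zip(reversed(idx), reversed(shape)):
        off = off * N_k + n_k
    return off

shape, idx = (2, 3, 4), (1, 0, 2)
# Cross-check against NumPy's own index linearization.
assert row_major_offset(idx, shape) == np.ravel_multi_index(idx, shape, order="C")  # 14
assert col_major_offset(idx, shape) == np.ravel_multi_index(idx, shape, order="F")  # 13
```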
More generally, there are d! possible orders for a given array, one for each permutation of dimensions (with row-major and column-major just two special cases), although the lists of stride values are not necessarily permutations of each other, e.g., in the 2-by-3 example above, the strides are (3, 1) for row-major and (1, 2) for column-major.
https://en.wikipedia.org/wiki/Row-_and_column-major_order
f2c is a program to convert Fortran 77 to C code, developed at Bell Laboratories. The standalone f2c program was based on the core of the first complete Fortran 77 compiler to be implemented, the "f77" program by Feldman and Weinberger. Because the f77 compiler was itself written in C and relied on a C compiler back end to complete its final compilation step, it and its derivatives like f2c were much more portable than compilers generating machine code directly. The f2c program was released as free software and subsequently became one of the most common means to compile Fortran code on many systems where native Fortran compilers were unavailable or expensive. Several large Fortran libraries, such as LAPACK, were made available as C libraries via conversion with f2c. The f2c program also influenced the development of the GNU g77 compiler, which uses a modified version of the f2c runtime libraries.
https://en.wikipedia.org/wiki/F2c
Correspondence analysis (CA) is a multivariate statistical technique proposed[1] by Herman Otto Hartley (Hirschfeld)[2] and later developed by Jean-Paul Benzécri.[3] It is conceptually similar to principal component analysis, but applies to categorical rather than continuous data. In a similar manner to principal component analysis, it provides a means of displaying or summarising a set of data in two-dimensional graphical form. Its aim is to display in a biplot any structure hidden in the multivariate setting of the data table. As such it is a technique from the field of multivariate ordination. Since the variant of CA described here can be applied either with a focus on the rows or on the columns, it should in fact be called simple (symmetric) correspondence analysis.[4] It is traditionally applied to the contingency table of a pair of nominal variables where each cell contains either a count or a zero value. If more than two categorical variables are to be summarized, a variant called multiple correspondence analysis should be chosen instead. CA may also be applied to binary data, given that the presence/absence coding represents simplified count data, i.e. a 1 describes a positive count and 0 stands for a count of zero. Depending on the scores used, CA preserves the chi-square distance[5][6] between either the rows or the columns of the table. Because CA is a descriptive technique, it can be applied to tables regardless of whether a chi-squared test is significant.[7][8] Although the $\chi^2$ statistic used in inferential statistics and the chi-square distance are computationally related, they should not be confused, since the latter works as a multivariate statistical distance measure in CA while the $\chi^2$ statistic is in fact a scalar, not a metric.[9] Like principal components analysis, correspondence analysis creates orthogonal components (or axes) and, for each item in a table, i.e. for each row, a set of scores (sometimes called factor scores, see factor analysis).
Correspondence analysis is performed on the data table, conceived as a matrix C of size m × n, where m is the number of rows and n is the number of columns. In the following mathematical description of the method, capital letters in italics refer to a matrix while lower-case letters in italics refer to vectors. Understanding the following computations requires knowledge of matrix algebra. Before proceeding to the central computational step of the algorithm, the values in matrix C have to be transformed.[10] First compute a set of weights for the columns and the rows (sometimes called masses),[7][11] where row and column weights are given by the vectors

$w_m = \frac{1}{n_C} C \mathbf{1}, \qquad w_n = \frac{1}{n_C} C^{T} \mathbf{1},$

respectively. Here $n_C = \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij}$ is the sum of all cell values in matrix C, or in short the sum of C, and $\mathbf{1}$ is a column vector of ones with the appropriate dimension. Put in simple words, $w_m$ is just a vector whose elements are the row sums of C divided by the sum of C, and $w_n$ is a vector whose elements are the column sums of C divided by the sum of C. The weights are transformed into diagonal matrices

$W_m = \operatorname{diag}(1/\sqrt{w_m}) \qquad \text{and} \qquad W_n = \operatorname{diag}(1/\sqrt{w_n}),$

where the diagonal elements of $W_n$ are $1/\sqrt{w_n}$ and those of $W_m$ are $1/\sqrt{w_m}$, respectively, i.e. the vector elements are the inverses of the square roots of the masses. The off-diagonal elements are all 0. Next, compute matrix P by dividing C by its sum:

$P = \frac{1}{n_C} C$

In simple words, matrix P is just the data matrix (contingency table or binary table) transformed into proportions, i.e. each cell value is just the cell's proportion of the sum of the whole table.
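These preparatory steps can be sketched in Python with NumPy, using a small hypothetical 2 × 3 contingency table:

```python
import numpy as np

# Hypothetical 2 x 3 contingency table C.
C = np.array([[10.0, 5.0, 2.0],
              [ 4.0, 8.0, 6.0]])

n_C = C.sum()               # sum of all cell values
P = C / n_C                 # table of cell proportions
w_m = C.sum(axis=1) / n_C   # row masses: row sums over the grand total
w_n = C.sum(axis=0) / n_C   # column masses: column sums over the grand total

# Diagonal weighting matrices with 1/sqrt(mass) on the diagonal.
W_m = np.diag(1.0 / np.sqrt(w_m))
W_n = np.diag(1.0 / np.sqrt(w_n))

print(w_m, w_n, P.sum())    # the proportions in P sum to 1
```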
Finally, compute matrix S, sometimes called the matrix of standardized residuals,[10] by matrix multiplication as

$S = W_m \left( P - \operatorname{outer}(w_m, w_n) \right) W_n$

Note, the vectors $w_m$ and $w_n$ are combined in an outer product, resulting in a matrix of the same dimensions as P. In words the formula reads: matrix $\operatorname{outer}(w_m, w_n)$ is subtracted from matrix P and the resulting matrix is scaled (weighted) by the diagonal matrices $W_m$ and $W_n$. Multiplying the resulting matrix by the diagonal matrices is equivalent to multiplying the i-th row (or column) of it by the i-th element of the diagonal of $W_m$ or $W_n$, respectively.[12] The vectors $w_m$ and $w_n$ are the row and column masses, or the marginal probabilities for the rows and columns, respectively. Subtracting matrix $\operatorname{outer}(w_m, w_n)$ from matrix P is the matrix-algebra version of double centering the data. Multiplying this difference by the diagonal weighting matrices results in a matrix containing weighted deviations from the origin of a vector space. This origin is defined by matrix $\operatorname{outer}(w_m, w_n)$. In fact matrix $\operatorname{outer}(w_m, w_n)$ is identical to the matrix of expected frequencies in the chi-squared test. Therefore S is computationally related to the independence model used in that test. But since CA is not an inferential method, the term independence model is inappropriate here.
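With a small hypothetical 2 × 3 table, the standardized residuals can be computed as follows; the weighted row and column sums of S vanish, which reflects the double centering:

```python
import numpy as np

# Hypothetical 2 x 3 contingency table C.
C = np.array([[10.0, 5.0, 2.0],
              [ 4.0, 8.0, 6.0]])
P = C / C.sum()                            # cell proportions
w_m, w_n = P.sum(axis=1), P.sum(axis=0)    # row and column masses

# S = W_m (P - outer(w_m, w_n)) W_n
S = np.diag(1/np.sqrt(w_m)) @ (P - np.outer(w_m, w_n)) @ np.diag(1/np.sqrt(w_n))

# Double centering: weighted row and column sums of S are (numerically) zero.
print(np.sqrt(w_m) @ S)   # ~ [0 0 0]
print(S @ np.sqrt(w_n))   # ~ [0 0]
```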
The table S is then decomposed[10] by a singular value decomposition as

$S = U \Sigma V^{*}$

where U and V are the left and right singular vectors of S and $\Sigma$ is a square diagonal matrix with the singular values $\sigma_i$ of S on the diagonal. $\Sigma$ is of dimension $p \leq \min(m, n) - 1$, hence U is of dimension m × p and V of n × p. As orthonormal vectors, the columns of U and V fulfill

$U^{*} U = V^{*} V = I_p$

In other words, the multivariate information that is contained in C as well as in S is now distributed across two (coordinate) matrices U and V and a diagonal (scaling) matrix $\Sigma$. The vector space defined by them has p dimensions, that is, the smaller of the two values, number of rows and number of columns, minus 1. While a principal component analysis may be said to decompose the (co)variance, and hence its measure of success is the amount of (co)variance covered by the first few PCA axes (measured in eigenvalues), a CA works with a weighted (co)variance, which is called inertia.[13] The sum of the squared singular values is the total inertia $\mathrm{I}$ of the data table, computed as

$\mathrm{I} = \sum_{i=1}^{p} \sigma_i^2$

The total inertia $\mathrm{I}$ of the data table can also be computed directly from S as

$\mathrm{I} = \sum_{i=1}^{m} \sum_{j=1}^{n} S_{ij}^2$

The amount of inertia covered by the i-th set of singular vectors is $\iota_i = \sigma_i^2$, the principal inertia. The higher the portion of inertia covered by the first few singular vectors, i.e. the larger the sum of the principal inertiae in comparison to the total inertia, the more successful a CA is.[13] Therefore, all principal inertia values are expressed as portions $\epsilon_i = \iota_i / \mathrm{I}$ of the total inertia and are presented in the form of a scree plot.
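The decomposition and the inertia computations can be sketched for a small hypothetical 2 × 3 table:

```python
import numpy as np

# Hypothetical 2 x 3 contingency table C and its standardized residuals S.
C = np.array([[10.0, 5.0, 2.0],
              [ 4.0, 8.0, 6.0]])
P = C / C.sum()
w_m, w_n = P.sum(axis=1), P.sum(axis=0)
S = np.diag(1/np.sqrt(w_m)) @ (P - np.outer(w_m, w_n)) @ np.diag(1/np.sqrt(w_n))

# Thin SVD: S = U diag(sigma) V^T
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

total_inertia = (sigma**2).sum()       # equals the sum of squared entries of S
portions = sigma**2 / total_inertia    # principal inertia portions for the scree plot

# A 2 x 3 table has at most min(2, 3) - 1 = 1 non-trivial dimension,
# so the second singular value is numerically zero.
print(sigma)
```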
In fact a scree plot is just a bar plot of all principal inertia portions $\epsilon_i$. To transform the singular vectors to coordinates which preserve the chi-square distances between rows or columns, an additional weighting step is necessary. The resulting coordinates are called principal coordinates[10] in CA textbooks. If principal coordinates are used for rows, their visualization is called a row isometric[14] scaling in econometrics and scaling 1[15] in ecology. Since the weighting includes the singular values $\Sigma$ of the matrix of standardized residuals S, these coordinates are sometimes referred to as singular-value-scaled singular vectors, or, a little misleadingly, as eigenvalue-scaled eigenvectors. In fact the non-trivial eigenvectors of $S S^{*}$ are the left singular vectors U of S and those of $S^{*} S$ are the right singular vectors V of S, while the eigenvalues of either of these matrices are the squares of the singular values $\Sigma$. But since all modern algorithms for CA are based on a singular value decomposition, this terminology should be avoided. In the French tradition of CA, the coordinates are sometimes called (factor) scores. Factor scores or principal coordinates for the rows of matrix C are computed by

$F_m = W_m U \Sigma$

i.e. the left singular vectors are scaled by the inverses of the square roots of the row masses and by the singular values. Because principal coordinates are computed using singular values, they contain the information about the spread between the rows (or columns) in the original table. Computing the Euclidean distances between the entities in principal coordinates results in values that equal their chi-square distances, which is the reason why CA is said to "preserve chi-square distances".
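For a small hypothetical 2 × 3 table, the row principal coordinates can be computed and the distance-preservation property checked numerically: the Euclidean distance between the rows of $F_m$ equals the chi-square distance between the two row profiles.

```python
import numpy as np

# Hypothetical 2 x 3 contingency table C, standardized residuals S, and its SVD.
C = np.array([[10.0, 5.0, 2.0],
              [ 4.0, 8.0, 6.0]])
P = C / C.sum()
w_m, w_n = P.sum(axis=1), P.sum(axis=0)
S = np.diag(1/np.sqrt(w_m)) @ (P - np.outer(w_m, w_n)) @ np.diag(1/np.sqrt(w_n))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Row principal coordinates: F_m = W_m U Sigma.
F_m = np.diag(1/np.sqrt(w_m)) @ U @ np.diag(sigma)

# Chi-square distance between the two row profiles ...
profiles = P / w_m[:, None]
chi_dist = np.sqrt((((profiles[0] - profiles[1])**2) / w_n).sum())
# ... equals the Euclidean distance between the rows of F_m.
eucl_dist = np.linalg.norm(F_m[0] - F_m[1])
print(chi_dist, eucl_dist)
```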
Compute principal coordinates for the columns by

$F_n = W_n V \Sigma$

To represent the result of CA in a proper biplot, those categories which are not plotted in principal coordinates, i.e. in chi-square-distance-preserving coordinates, should be plotted in so-called standard coordinates.[10] They are called standard coordinates because each vector of standard coordinates has been standardized to exhibit mean 0 and variance 1.[16] When computing standard coordinates, the singular values are omitted. This is a direct result of applying the biplot rule, by which one of the two sets of singular vector matrices must be scaled by singular values raised to the power of zero, i.e. multiplied by one, i.e. computed by omitting the singular values, if the other set of singular vectors has been scaled by the singular values. This ensures the existence of an inner product between the two sets of coordinates, i.e. it leads to meaningful interpretations of their spatial relations in a biplot. In practical terms, one can think of the standard coordinates as the vertices of the vector space in which the set of principal coordinates (i.e. the respective points) "exists".[17] The standard coordinates for the rows are

$G_m = W_m U$

and those for the columns are

$G_n = W_n V$

Note that a scaling 1[15] biplot in ecology implies the rows to be in principal and the columns to be in standard coordinates, while scaling 2 implies the rows to be in standard and the columns to be in principal coordinates. I.e. scaling 1 implies a biplot of $F_m$ together with $G_n$, while scaling 2 implies a biplot of $F_n$ together with $G_m$. The visualization of a CA result always starts with displaying the scree plot of the principal inertia values to evaluate the success of summarizing spread by the first few singular vectors. The actual ordination is presented in a graph which could, at first look, be confused with a complicated scatter plot.
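Standard coordinates can be sketched for a small hypothetical 2 × 3 table; each non-trivial axis then has weighted mean 0 and weighted variance 1, and a scaling 1 biplot would plot the row principal coordinates together with the column standard coordinates:

```python
import numpy as np

# Hypothetical 2 x 3 contingency table C, standardized residuals S, and its SVD.
C = np.array([[10.0, 5.0, 2.0],
              [ 4.0, 8.0, 6.0]])
P = C / C.sum()
w_m, w_n = P.sum(axis=1), P.sum(axis=0)
S = np.diag(1/np.sqrt(w_m)) @ (P - np.outer(w_m, w_n)) @ np.diag(1/np.sqrt(w_n))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates (singular values included) ...
F_m = np.diag(1/np.sqrt(w_m)) @ U @ np.diag(sigma)   # rows
# ... and standard coordinates (singular values omitted).
G_m = np.diag(1/np.sqrt(w_m)) @ U                    # rows
G_n = np.diag(1/np.sqrt(w_n)) @ Vt.T                 # columns

# First (non-trivial) axis of G_m: weighted mean 0 and weighted variance 1.
print(w_m @ G_m[:, 0], w_m @ G_m[:, 0]**2)
```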
In fact it consists of two scatter plots printed one upon the other, one set of points for the rows and one for the columns. But being a biplot, a clear interpretation rule relates the two coordinate matrices used. Usually the first two dimensions of the CA solution are plotted because they encompass the maximum of information about the data table that can be displayed in 2D, although other combinations of dimensions may be investigated by a biplot. A biplot is in fact a low-dimensional mapping of a part of the information contained in the original table. As a rule of thumb, that set (rows or columns) which should be analysed with respect to its composition as measured by the other set is displayed in principal coordinates, while the other set is displayed in standard coordinates. E.g. a table displaying voting districts in rows and political parties in columns with the cells containing the counted votes may be displayed with the districts (rows) in principal coordinates when the focus is on ordering districts according to similar voting. Traditionally, originating from the French tradition in CA,[18] early CA biplots mapped both entities in the same coordinate version, usually principal coordinates, but this kind of display is misleading insofar as: "Although this is called a biplot, it does not have any useful inner product relationship between the row and column scores", as Brian Ripley, maintainer of the R package MASS, points out correctly.[19] Today that kind of display should be avoided, since laymen usually are not aware of the lacking relation between the two point sets. A scaling 1[15] biplot (rows in principal coordinates, columns in standard coordinates) is interpreted as follows:[20] Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The latter (CCA) is used when there is information about possible causes for the similarities between the investigated entities.
The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis. In the social sciences, correspondence analysis, and particularly its extension multiple correspondence analysis, was made known outside France through French sociologist Pierre Bourdieu's application of it.[21]
https://en.wikipedia.org/wiki/Correspondence_analysis
In statistics, multiple correspondence analysis (MCA) is a data analysis technique for nominal categorical data, used to detect and represent underlying structures in a data set. It does this by representing data as points in a low-dimensional Euclidean space. The procedure thus appears to be the counterpart of principal component analysis for categorical data.[citation needed] MCA can be viewed as an extension of simple correspondence analysis (CA) in that it is applicable to a large set of categorical variables. MCA is performed by applying the CA algorithm to either an indicator matrix (also called complete disjunctive table – CDT) or a Burt table formed from these variables.[citation needed] An indicator matrix is an individuals × variables matrix, where the rows represent individuals and the columns are dummy variables representing categories of the variables.[1] Analyzing the indicator matrix allows the direct representation of individuals as points in geometric space. The Burt table is the symmetric matrix of all two-way cross-tabulations between the categorical variables, and has an analogy to the covariance matrix of continuous variables. Analyzing the Burt table is a more natural generalization of simple correspondence analysis, and individuals or the means of groups of individuals can be added as supplementary points to the graphical display. In the indicator matrix approach, associations between variables are uncovered by calculating the chi-square distance between different categories of the variables and between the individuals (or respondents). These associations are then represented graphically as "maps", which eases the interpretation of the structures in the data. Oppositions between rows and columns are then maximized, in order to uncover the underlying dimensions best able to describe the central oppositions in the data.
As in factor analysis or principal component analysis, the first axis is the most important dimension, the second axis the second most important, and so on, in terms of the amount of variance accounted for. The number of axes to be retained for analysis is determined by calculating modified eigenvalues. Since MCA is adapted to draw statistical conclusions from categorical variables (such as multiple-choice questions), the first thing one needs to do is to transform quantitative data (such as age, size, weight, time of day, etc.) into categories (using for instance statistical quantiles). When the dataset is completely represented as categorical variables, one is able to build the corresponding so-called complete disjunctive table. We denote this table $X$. If $I$ persons answered a survey with $J$ multiple-choice questions with 4 answers each, $X$ will have $I$ rows and $4J$ columns. More theoretically,[2] assume $X$ is the complete disjunctive table of $I$ observations of $K$ categorical variables. Assume also that the $k$-th variable has $J_k$ different levels (categories) and set $J = \sum_{k=1}^{K} J_k$. The table $X$ is then an $I \times J$ matrix with all coefficients being $0$ or $1$. Set the sum of all entries of $X$ to be $N$ and introduce $Z = X/N$. In an MCA, there are also two special vectors: first $r$, which contains the sums along the rows of $Z$, and $c$, which contains the sums along the columns of $Z$. Write $D_r = \mathrm{diag}(r)$ and $D_c = \mathrm{diag}(c)$ for the diagonal matrices containing $r$ and $c$ respectively as diagonal.
With these notations, computing an MCA consists essentially in the singular value decomposition of the matrix

$M = D_r^{-1/2}\,(Z - r c^{T})\,D_c^{-1/2}.$

The decomposition of $M$ gives $P$, $\Delta$ and $Q$ such that $M = P \Delta Q^{T}$, with $P$, $Q$ two unitary matrices and $\Delta$ the generalized diagonal matrix of the singular values (with the same shape as $Z$). The positive coefficients of $\Delta^2$ are the eigenvalues of $M^{T}M$ (equivalently, of $MM^{T}$). The interest of MCA comes from the way observations (rows) and variables (columns) in $Z$ can be decomposed. This decomposition is called a factor decomposition. The coordinates of the observations in the factor space are given by

$F = D_r^{-1/2}\, P\, \Delta.$

The $i$-th row of $F$ represents the $i$-th observation in the factor space. Similarly, the coordinates of the variables (in the same factor space as the observations!) are given by

$G = D_c^{-1/2}\, Q\, \Delta.$

In recent years, several students of Jean-Paul Benzécri have refined MCA and incorporated it into a more general framework of data analysis known as geometric data analysis. This involves the development of direct connections between simple correspondence analysis, principal component analysis and MCA with a form of cluster analysis known as Euclidean classification.[3] Two extensions have great practical use. In the social sciences, MCA is arguably best known for its application by Pierre Bourdieu,[4] notably in his books La Distinction, Homo Academicus and The State Nobility.
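Assuming the usual standardized-residuals form $M = D_r^{-1/2}(Z - rc^{T})D_c^{-1/2}$, as in simple CA, the computation can be sketched in NumPy. The tiny indicator matrix here is hypothetical (6 individuals, two categorical variables with 2 and 3 levels):

```python
import numpy as np

# Hypothetical complete disjunctive table: one 1 per variable per row.
X = np.array([[1, 0,  1, 0, 0],
              [0, 1,  0, 1, 0],
              [1, 0,  0, 0, 1],
              [0, 1,  1, 0, 0],
              [1, 0,  0, 1, 0],
              [0, 1,  0, 0, 1]], dtype=float)

N = X.sum()
Z = X / N
r = Z.sum(axis=1)                  # row sums of Z
c = Z.sum(axis=0)                  # column sums of Z

# M = D_r^{-1/2} (Z - r c^T) D_c^{-1/2}, written elementwise
M = (Z - np.outer(r, c)) / np.sqrt(np.outer(r, c))
P, delta, Qt = np.linalg.svd(M, full_matrices=False)

# Factor coordinates of the observations (rows) and categories (columns)
F = P * delta / np.sqrt(r[:, None])    # F = D_r^{-1/2} P Delta
G = Qt.T * delta / np.sqrt(c[:, None]) # G = D_c^{-1/2} Q Delta
```

Each row of `F` places one individual in the factor space; each row of `G` places one category in the same space.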
Bourdieu argued that there was an internal link between his vision of the social as spatial and relational, captured by the notion of field, and the geometric properties of MCA.[5] Sociologists following Bourdieu's work most often opt for the analysis of the indicator matrix, rather than the Burt table, largely because of the central importance accorded to the analysis of the 'cloud of individuals'.[6] MCA can also be viewed as a PCA applied to the complete disjunctive table. To do this, the CDT must be transformed as follows. Let $y_{ik}$ denote the general term of the CDT: $y_{ik}$ is equal to 1 if individual $i$ possesses the category $k$ and 0 if not. Let $p_k$ denote the proportion of individuals possessing the category $k$. The transformed CDT (TCDT) has as general term

$x_{ik} = \frac{y_{ik}}{p_k} - 1.$

The unstandardized PCA applied to the TCDT, the column $k$ having the weight $p_k$, leads to the results of MCA. This equivalence is fully explained in a book by Jérôme Pagès.[7] It plays an important theoretical role because it opens the way to the simultaneous treatment of quantitative and qualitative variables. Two methods simultaneously analyze these two types of variables: factor analysis of mixed data and, when the active variables are partitioned in several groups, multiple factor analysis. This equivalence does not mean that MCA is a particular case of PCA, just as it is not a particular case of CA. It only means that these methods are closely linked to one another, as they belong to the same family: the factorial methods.[citation needed] There are numerous data analysis software packages that include MCA, such as STATA and SPSS. The R package FactoMineR also features MCA.
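Assuming the TCDT term $x_{ik} = y_{ik}/p_k - 1$ described above, the transformation is a one-liner, and it has a property worth checking: every transformed column has mean zero, so the subsequent unstandardized, column-weighted PCA operates on centered data. A small hypothetical example:

```python
import numpy as np

# Hypothetical CDT: 5 individuals, one variable with 3 categories.
Y = np.eye(3)[[0, 1, 2, 0, 1]].astype(float)

p = Y.mean(axis=0)      # p_k: proportion of individuals with category k
T = Y / p - 1           # transformed CDT (TCDT), term y_ik / p_k - 1

# Each TCDT column averages to zero across individuals, since
# mean(y_ik) / p_k - 1 = p_k / p_k - 1 = 0.
assert np.allclose(T.mean(axis=0), 0)
```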
This software is related to a book describing the basic methods for performing MCA.[8] There is also a Python package for[1] which works with NumPy array matrices; the package has not yet been implemented for Spark dataframes.
https://en.wikipedia.org/wiki/Multiple_correspondence_analysis
In statistics, factor analysis of mixed data or factorial analysis of mixed data (FAMD, in the French original: AFDM or Analyse Factorielle de Données Mixtes) is the factorial method devoted to data tables in which a group of individuals is described both by quantitative and qualitative variables. It belongs to the exploratory methods developed by the French school called Analyse des données (data analysis) founded by Jean-Paul Benzécri. The term mixed refers to the use of both quantitative and qualitative variables. Roughly, we can say that FAMD works as a principal components analysis (PCA) for quantitative variables and as a multiple correspondence analysis (MCA) for qualitative variables. When data include both types of variables but the active variables are homogeneous, PCA or MCA can be used. Indeed, it is easy to include supplementary quantitative variables in MCA via the correlation coefficients between the variables and factors on individuals (a factor on individuals is the vector gathering the coordinates of individuals on a factorial axis); the representation obtained is a correlation circle (as in PCA). Similarly, it is easy to include supplementary categorical variables in PCA.[1] For this, each category is represented by the center of gravity of the individuals who have it (as in MCA). When the active variables are mixed, the usual practice is to perform discretization of the quantitative variables (e.g. in surveys, age is usually transformed into age classes). The data thus obtained can be processed by MCA. This practice reaches its limits. The data include $K$ quantitative variables $k = 1, \dots, K$ and $Q$ qualitative variables $q = 1, \dots, Q$. Let $z$ denote a quantitative variable.
We note: In the PCA of $K$, we look for the function on $I$ (a function on $I$ assigns a value to each individual; this is the case for initial variables and principal components) that is the most correlated to all $K$ variables in the following sense:

$\max_z \sum_{k=1}^{K} r^2(z, k).$

In the MCA of $Q$, we look for the function on $I$ most related to all $Q$ variables in the following sense:

$\max_z \sum_{q=1}^{Q} \eta^2(z, q).$

In FAMD $\{K, Q\}$, we look for the function on $I$ most related to all $K + Q$ variables in the following sense:

$\max_z \left( \sum_{k=1}^{K} r^2(z, k) + \sum_{q=1}^{Q} \eta^2(z, q) \right).$

In this criterion, both types of variables play the same role. The contribution of each variable to this criterion is bounded by 1. The representation of individuals is made directly from the factors on $I$. The representation of quantitative variables is constructed as in PCA (correlation circle). The representation of the categories of qualitative variables is as in MCA: a category is at the centroid of the individuals who possess it. Note that we take the exact centroid and not, as is customary in MCA, the centroid up to a coefficient dependent on the axis (in MCA this coefficient is equal to the inverse of the square root of the eigenvalue; it would be inadequate in FAMD). The representation of variables is called the relationship square. The coordinate of qualitative variable $j$ along axis $s$ is equal to the squared correlation ratio between the variable $j$ and the factor of rank $s$ (denoted $\eta^2(j, s)$). The coordinate of quantitative variable $k$ along axis $s$ is equal to the squared correlation coefficient between the variable $k$ and the factor of rank $s$ (denoted $r^2(k, s)$).
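A rough numerical sketch of this idea: standardize the quantitative columns, reweight the indicator columns of the qualitative variables, and run a single SVD on the combined table, so that both kinds of variables enter one common factor analysis. The exact scalings vary between presentations and implementations (e.g. FactoMineR), and the toy data here is made up, so this is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mixed table: 6 individuals, 2 quantitative variables and one
# qualitative variable with 3 categories (coded as an indicator matrix).
quant = rng.normal(size=(6, 2))
labels = np.array([0, 1, 2, 0, 1, 2])
indic = np.eye(3)[labels]

# Quantitative part: center and reduce, as in standardized PCA.
Zq = (quant - quant.mean(axis=0)) / quant.std(axis=0)

# Qualitative part: divide each indicator column by sqrt(p_k), then center;
# this reproduces an MCA-style weighting of the categories.
p = indic.mean(axis=0)                 # proportion possessing each category
Zc = indic / np.sqrt(p)
Zc = Zc - Zc.mean(axis=0)

# One global SVD of the combined table treats both variable types on the
# same footing; U * s gives coordinates of the individuals on the factors.
Z = np.hstack([Zq, Zc])
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U * s
```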
The relationship indicators between the initial variables are combined in a so-called relationship matrix that contains, at the intersection of row $l$ and column $c$: A very small data set (Table 1) illustrates the operation and outputs of FAMD. Six individuals are described by three quantitative variables and three qualitative variables. The data were analyzed using the function FAMD of the R package FactoMineR. In the relationship matrix, the coefficients are equal to $R^2$ (two quantitative variables), $\phi^2$ (two qualitative variables) or $\eta^2$ (one variable of each type). The matrix shows an entanglement of the relationships between the two types of variables. The representation of individuals (Figure 1) clearly shows three groups of individuals. The first axis opposes individuals 1 and 2 to all others. The second axis opposes individuals 3 and 4 to individuals 5 and 6. The representation of variables (relationship square, Figure 2) shows that the first axis ($F1$) is closely linked to variables $k_2$, $k_3$ and $Q_3$. The correlation circle (Figure 3) specifies the sign of the correlation between $F1$, $k_2$ and $k_3$; the representation of the categories (Figure 4) clarifies the nature of the relationship between $F1$ and $Q_3$. Finally, individuals 1 and 2, singled out by the first axis, are characterized by high values of $k_2$ and $k_3$ and by the category $c$ of $Q_3$ as well. This example illustrates how FAMD simultaneously analyzes quantitative and qualitative variables. Thus it shows, in this example, a first dimension based on the two types of variables.
The FAMD's original work is due to Brigitte Escofier[2] and Gilbert Saporta.[3] This work was resumed in 2002 by Jérôme Pagès.[4] A more complete presentation of FAMD in English is included in a book by Jérôme Pagès.[5] The method is implemented in the R package FactoMineR and in the Python library prince.
https://en.wikipedia.org/wiki/Factor_analysis_of_mixed_data
In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors $X = (X_1, \dots, X_n)$ and $Y = (Y_1, \dots, Y_m)$ of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of $X$ and $Y$ that have a maximum correlation with each other.[1] T. R. Knapp notes that "virtually all of the commonly encountered parametric tests of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables."[2] The method was first introduced by Harold Hotelling in 1936,[3] although in the context of angles between flats the mathematical concept was published by Camille Jordan in 1875.[4] CCA is now a cornerstone of multivariate statistics and multi-view learning, and a great number of interpretations and extensions have been proposed, such as probabilistic CCA, sparse CCA, multi-view CCA, deep CCA,[5] and DeepGeoCCA.[6] Unfortunately, perhaps because of its popularity, the literature can be inconsistent with notation; we attempt to highlight such inconsistencies in this article to help the reader make best use of the existing literature and techniques available. Like its sister method PCA, CCA can be viewed in population form (corresponding to random vectors and their covariance matrices) or in sample form (corresponding to datasets and their sample covariance matrices). These two forms are almost exact analogues of each other, which is why their distinction is often overlooked, but they can behave very differently in high-dimensional settings.[7] We next give explicit mathematical definitions for the population problem and highlight the different objects in the so-called canonical decomposition; understanding the differences between these objects is crucial for interpretation of the technique.
Given two column vectors $X = (x_1, \dots, x_n)^T$ and $Y = (y_1, \dots, y_m)^T$ of random variables with finite second moments, one may define the cross-covariance $\Sigma_{XY} = \operatorname{cov}(X, Y)$ to be the $n \times m$ matrix whose $(i, j)$ entry is the covariance $\operatorname{cov}(x_i, y_j)$. In practice, we would estimate the covariance matrix based on sampled data from $X$ and $Y$ (i.e. from a pair of data matrices). Canonical-correlation analysis seeks a sequence of vectors $a_k$ ($a_k \in \mathbb{R}^n$) and $b_k$ ($b_k \in \mathbb{R}^m$) such that the random variables $a_k^T X$ and $b_k^T Y$ maximize the correlation $\rho = \operatorname{corr}(a_k^T X, b_k^T Y)$. The (scalar) random variables $U = a_1^T X$ and $V = b_1^T Y$ are the first pair of canonical variables. Then one seeks vectors maximizing the same correlation subject to the constraint that they are to be uncorrelated with the first pair of canonical variables; this gives the second pair of canonical variables. This procedure may be continued up to $\min\{m, n\}$ times. The sets of vectors $a_k, b_k$ are called canonical directions or weight vectors or simply weights. The 'dual' sets of vectors $\Sigma_{XX} a_k, \Sigma_{YY} b_k$ are called canonical loading vectors or simply loadings; these are often more straightforward to interpret than the weights.[8] Let $\Sigma_{XY}$ be the cross-covariance matrix for any pair of (vector-shaped) random variables $X$ and $Y$.
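The population quantities above have direct sample analogues. A minimal NumPy sketch (with synthetic data, whose latent structure is made up for illustration): whiten the sample covariances, take the SVD of the whitened cross-covariance, and check that the first pair of canonical variables attains the first singular value as its correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: Y shares two latent directions with X, plus noise.
n_samples = 500
X = rng.normal(size=(n_samples, 3))
Y = np.hstack([X[:, :2] @ rng.normal(size=(2, 2)),
               rng.normal(size=(n_samples, 1))])
Y += 0.5 * rng.normal(size=Y.shape)

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
Sxx = Xc.T @ Xc / n_samples
Syy = Yc.T @ Yc / n_samples
Sxy = Xc.T @ Yc / n_samples

def inv_sqrt(S):
    # symmetric inverse square root Sigma^{-1/2} via eigendecomposition
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Whitened cross-covariance; its singular values are the canonical
# correlations, its singular vectors the (whitened) canonical directions.
K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
Uk, rho, Vkt = np.linalg.svd(K)

a = inv_sqrt(Sxx) @ Uk          # canonical weight vectors for X
b = inv_sqrt(Syy) @ Vkt.T       # canonical weight vectors for Y

# The first pair of canonical variables attains correlation rho[0].
u = Xc @ a[:, 0]
v = Yc @ b[:, 0]
assert np.isclose(np.corrcoef(u, v)[0, 1], rho[0])
```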
The target function to maximize is

$\rho = \frac{a^T \Sigma_{XY} b}{\sqrt{a^T \Sigma_{XX} a}\,\sqrt{b^T \Sigma_{YY} b}}.$

The first step is to define a change of basis and define

$c = \Sigma_{XX}^{1/2} a, \qquad d = \Sigma_{YY}^{1/2} b,$

where $\Sigma_{XX}^{1/2}$ and $\Sigma_{YY}^{1/2}$ can be obtained from the eigen-decomposition (or by diagonalization) of $\Sigma_{XX}$ and $\Sigma_{YY}$. Thus

$\rho = \frac{c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} d}{\|c\|\,\|d\|}.$

By the Cauchy–Schwarz inequality,

$\left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2}\right) d \le \left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2}\, c\right)^{1/2} \left(d^T d\right)^{1/2}.$

There is equality if the vectors $d$ and $\Sigma_{YY}^{-1/2}\Sigma_{YX}\Sigma_{XX}^{-1/2} c$ are collinear. In addition, the maximum of correlation is attained if $c$ is the eigenvector with the maximum eigenvalue for the matrix $\Sigma_{XX}^{-1/2}\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\Sigma_{XX}^{-1/2}$ (see Rayleigh quotient). The subsequent pairs are found by using eigenvalues of decreasing magnitudes. Orthogonality is guaranteed by the symmetry of the correlation matrices. Another way of viewing this computation is that $c$ and $d$ are the left and right singular vectors of the correlation matrix of $X$ and $Y$ corresponding to the highest singular value. The solution is therefore: $c$ is an eigenvector of $\Sigma_{XX}^{-1/2}\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\Sigma_{XX}^{-1/2}$, and $d$ is proportional to $\Sigma_{YY}^{-1/2}\Sigma_{YX}\Sigma_{XX}^{-1/2} c$. Reciprocally, there is also: $d$ is an eigenvector of $\Sigma_{YY}^{-1/2}\Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY}\Sigma_{YY}^{-1/2}$, and $c$ is proportional to $\Sigma_{XX}^{-1/2}\Sigma_{XY}\Sigma_{YY}^{-1/2} d$. Reversing the change of coordinates, we have that $a$ is an eigenvector of $\Sigma_{XX}^{-1}\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}$ and $b$ is proportional to $\Sigma_{YY}^{-1}\Sigma_{YX} a$. The canonical variables are defined by

$U = c^T \Sigma_{XX}^{-1/2} X = a^T X, \qquad V = d^T \Sigma_{YY}^{-1/2} Y = b^T Y.$

CCA can be computed using singular value decomposition on a correlation matrix.[9] It is available as a function in[10] CCA computation using singular value decomposition on a correlation matrix is related to the cosine of the angles between flats. The cosine function is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite precision computer arithmetic. To fix this issue, alternative algorithms[12] are available in Each row can be tested for significance with the following method. Since the correlations are sorted, saying that row $i$ is zero implies all further correlations are also zero.
If we have $p$ independent observations in a sample and $\widehat{\rho}_i$ is the estimated correlation for $i = 1, \dots, \min\{m, n\}$, then for the $i$-th row the test statistic is

$\chi^2 = -\left(p - 1 - \tfrac{1}{2}(m + n + 1)\right)\ln \prod_{j=i}^{\min\{m,n\}} \left(1 - \widehat{\rho}_j^{\,2}\right),$

which is asymptotically distributed as a chi-squared with $(m - i + 1)(n - i + 1)$ degrees of freedom for large $p$.[13] Since all the correlations from $\min\{m, n\}$ to $p$ are logically zero (and estimated that way also), the product for the terms after this point is irrelevant. Note that in the small sample size limit with $p < n + m$, we are guaranteed that the top $m + n - p$ correlations will be identically 1, and hence the test is meaningless.[14] A typical use for canonical correlation in the experimental context is to take two sets of variables and see what is common among the two sets.[15] For example, in psychological testing, one could take two well-established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests. One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and a set of inputs. Constraint restrictions can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions.
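The test is easy to sketch in code, assuming Bartlett's classical approximation for the statistic, $\chi^2 = -(p - 1 - \tfrac{1}{2}(m+n+1))\ln\prod_{j\ge i}(1 - \widehat{\rho}_j^{\,2})$; note that the exact correction factor varies between texts:

```python
import numpy as np

def bartlett_statistics(rho, p, n, m):
    """Chi-square statistics for testing that canonical correlations
    i, i+1, ..., min(m, n) are all zero.

    rho  : estimated canonical correlations in decreasing order
    p    : number of independent observations
    n, m : dimensions of X and Y
    Returns a list of (statistic, degrees of freedom), one per row.
    """
    rho = np.asarray(rho, dtype=float)
    out = []
    for i in range(len(rho)):                 # 0-based index
        stat = -(p - 1 - (n + m + 1) / 2.0) * np.log(
            np.prod(1 - rho[i:] ** 2))
        df = (m - i) * (n - i)                # = (m-i+1)(n-i+1) in 1-based i
        out.append((stat, df))
    return out

# Hypothetical estimated correlations from p = 100 observations, n = 3, m = 4
tests = bartlett_statistics([0.9, 0.5, 0.1], p=100, n=3, m=4)
```

Each statistic would then be compared against the chi-squared distribution with the corresponding degrees of freedom.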
This type of model is known as a maximum correlation model.[16] Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray-like bars, with each half representing the two sets of variables.[17] Let $X = x_1$ with zero expected value, i.e., $\operatorname{E}(X) = 0$. We notice that in both cases $U = V$, which illustrates that canonical-correlation analysis treats correlated and anticorrelated variables similarly. Assuming that $X = (x_1, \dots, x_n)^T$ and $Y = (y_1, \dots, y_m)^T$ have zero expected values, i.e., $\operatorname{E}(X) = \operatorname{E}(Y) = 0$, their covariance matrices $\Sigma_{XX} = \operatorname{Cov}(X, X) = \operatorname{E}[XX^T]$ and $\Sigma_{YY} = \operatorname{Cov}(Y, Y) = \operatorname{E}[YY^T]$ can be viewed as Gram matrices in an inner product for the entries of $X$ and $Y$, correspondingly. In this interpretation, the random variables, entries $x_i$ of $X$ and $y_j$ of $Y$, are treated as elements of a vector space with an inner product given by the covariance $\operatorname{cov}(x_i, y_j)$; see Covariance#Relationship to inner products. The definition of the canonical variables $U$ and $V$ is then equivalent to the definition of principal vectors for the pair of subspaces spanned by the entries of $X$ and $Y$ with respect to this inner product.
The canonical correlations $\operatorname{corr}(U, V)$ are equal to the cosines of the principal angles. CCA can also be viewed as a special whitening transformation where the random vectors $X$ and $Y$ are simultaneously transformed in such a way that the cross-correlation between the whitened vectors $X^{CCA}$ and $Y^{CCA}$ is diagonal.[18] The canonical correlations are then interpreted as regression coefficients linking $X^{CCA}$ and $Y^{CCA}$, and may also be negative. The regression view of CCA also provides a way to construct a latent variable probabilistic generative model for CCA, with uncorrelated hidden variables representing shared and non-shared variability.[19]
https://en.wikipedia.org/wiki/Canonical_correlation
A CUR matrix approximation is a set of three matrices that, when multiplied together, closely approximate a given matrix.[1][2][3] A CUR approximation can be used in the same way as the low-rank approximation of the singular value decomposition (SVD). CUR approximations are less accurate than the SVD, but they offer two key advantages, both stemming from the fact that the rows and columns come from the original matrix (rather than from the left and right singular vectors): Formally, a CUR matrix approximation of a matrix $A$ is three matrices $C$, $U$, and $R$ such that $C$ is made from columns of $A$, $R$ is made from rows of $A$, and the product $CUR$ closely approximates $A$. Usually the CUR is selected to be a rank-$k$ approximation, which means that $C$ contains $k$ columns of $A$, $R$ contains $k$ rows of $A$, and $U$ is a $k$-by-$k$ matrix. There are many possible CUR matrix approximations, and many CUR matrix approximations for a given rank. The CUR matrix approximation is often[citation needed] used in place of the low-rank approximation of the SVD in principal component analysis. The CUR is less accurate, but the columns of the matrix $C$ are taken from $A$ and the rows of $R$ are taken from $A$. In PCA, each column of $A$ contains a data sample; thus, the matrix $C$ is made of a subset of data samples. This is much easier to interpret than the SVD's left singular vectors, which represent the data in a rotated space. Similarly, the matrix $R$ is made of a subset of variables measured for each data sample. This is easier to comprehend than the SVD's right singular vectors, which are another rotation of the data in space. Hamm[4] and Aldroubi et al.[5] describe the following theorem, which outlines a CUR decomposition of a matrix $L$ with rank $r$: Theorem: Consider row and column indices $I, J \subseteq [n]$ with $|I|, |J| \ge r$. Denote submatrices $C = L_{:,J}$, $U = L_{I,J}$ and $R = L_{I,:}$.
If $\operatorname{rank}(U) = \operatorname{rank}(L)$, then $L = C U^{+} R$, where $(\cdot)^{+}$ denotes the Moore–Penrose pseudoinverse. In other words, if $L$ has low rank, we can take a sub-matrix $U = L_{I,J}$ of the same rank, together with some rows $R$ and columns $C$ of $L$, and use them to reconstruct $L$. Tensor-CURT decomposition[6] is a generalization of matrix-CUR decomposition. Formally, a CURT tensor approximation of a tensor $A$ is three matrices and a (core-)tensor $C$, $R$, $T$ and $U$ such that $C$ is made from columns of $A$, $R$ is made from rows of $A$, $T$ is made from tubes of $A$, and the product $U(C, R, T)$ (whose $i,j,l$-th entry is $\sum_{i',j',l'} U_{i',j',l'} C_{i,i'} R_{j,j'} T_{l,l'}$) closely approximates $A$. Usually the CURT is selected to be a rank-$k$ approximation, which means that $C$ contains $k$ columns of $A$, $R$ contains $k$ rows of $A$, $T$ contains tubes of $A$ and $U$ is a $k$-by-$k$-by-$k$ (core-)tensor. The CUR matrix approximation is not unique and there are multiple algorithms for computing one. One is ALGORITHMCUR.[1] The "Linear Time CUR" algorithm[7] simply picks $J$ by sampling columns randomly (with replacement) with probability proportional to the squared column norms, $\|L_{:,j}\|_2^2$, and similarly samples $I$ proportional to the squared row norms, $\|L_{i}\|_2^2$. The authors show that taking $|J| \approx k/\varepsilon^4$ and $|I| \approx k/\varepsilon^2$, where $0 \le \varepsilon$, the algorithm achieves the Frobenius error bound $\|A - CUR\|_F \le \|A - A_k\|_F + \varepsilon \|A\|_F$, where $A_k$ is the optimal rank-$k$ approximation.
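The rank condition in the theorem can be checked numerically: for an exactly low-rank matrix, sampling enough rows and columns with probability proportional to squared norms (as in the Linear Time CUR algorithm, here in a simplified form without the rescaling step of the original) makes the reconstruction $C\,L_{I,J}^{+}\,R$ essentially exact:

```python
import numpy as np

rng = np.random.default_rng(3)

# An exactly rank-8 matrix (a stand-in for any low-rank data matrix).
A = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 60))

# Norm-based sampling probabilities for columns and rows.
pc = (A ** 2).sum(axis=0); pc = pc / pc.sum()
pr = (A ** 2).sum(axis=1); pr = pr / pr.sum()
J = rng.choice(A.shape[1], size=30, replace=True, p=pc)
I = rng.choice(A.shape[0], size=30, replace=True, p=pr)

C = A[:, J]                 # sampled columns
R = A[I, :]                 # sampled rows
W = A[np.ix_(I, J)]         # intersection submatrix L_{I,J}
U = np.linalg.pinv(W)       # U = L_{I,J}^+ as in the theorem

# rank(W) = rank(A) holds here (generically), so CUR reconstructs A.
rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

With noisy or approximately low-rank data the reconstruction is no longer exact and the Frobenius error bound quoted above applies instead.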
https://en.wikipedia.org/wiki/CUR_matrix_approximation
Detrended correspondence analysis (DCA) is a multivariate statistical technique widely used by ecologists to find the main factors or gradients in large, species-rich but usually sparse data matrices that typify ecological community data. DCA is frequently used to suppress artifacts inherent in most other multivariate analyses when applied to gradient data.[1] DCA was created in 1979 by Mark Hill of the United Kingdom's Institute for Terrestrial Ecology (now merged into the Centre for Ecology and Hydrology) and implemented in a FORTRAN code package called DECORANA (Detrended Correspondence Analysis), a correspondence analysis method. DCA is sometimes erroneously referred to as DECORANA; however, DCA is the underlying algorithm, while DECORANA is a tool implementing it. According to Hill and Gauch,[1] DCA suppresses two artifacts inherent in most other multivariate analyses when applied to gradient data. An example is a time series of plant species colonising a new habitat; early successional species are replaced by mid-successional species, then by late successional ones (see example below). When such data are analysed by a standard ordination such as a correspondence analysis: Outside ecology, the same artifacts occur when gradient data are analysed (e.g. soil properties along a transect running between 2 different geologies, or behavioural data over the lifespan of an individual) because the curved projection is an accurate representation of the shape of the data in multivariate space. Ter Braak and Prentice (1987, p. 121) cite a simulation study analysing two-dimensional species packing models that resulted in a better performance of DCA compared to CA. DCA is an iterative algorithm that has shown itself to be a highly reliable and useful tool for data exploration and summary in community ecology (Shaw 2003). It starts by running a standard ordination (CA or reciprocal averaging) on the data, to produce the initial horseshoe curve in which the 1st ordination axis distorts into the 2nd axis.
It then divides the first axis into segments (default = 26), and rescales each segment to have a mean value of zero on the second axis – this effectively squashes the curve flat. It also rescales the axis so that the ends are no longer compressed relative to the middle, so that 1 DCA unit approximates the same rate of turnover all the way through the data: the rule of thumb is that 4 DCA units mean that there has been a total turnover in the community. Ter Braak and Prentice (1987, p. 122) warn against the non-linear rescaling of the axes due to robustness issues and recommend using detrending-by-polynomials only. No significance tests are available with DCA, although there is a constrained (canonical) version called DCCA in which the axes are forced by multiple linear regression to correlate optimally with a linear combination of other (usually environmental) variables; this allows testing of a null model by Monte Carlo permutation analysis. The example shows an ideal data set: the species data are in rows, samples in columns. For each sample along the gradient, a new species is introduced but another species is no longer present. The result is a sparse matrix. Ones indicate the presence of a species in a sample. Except at the edges, each sample contains five species. The plot of the first two axes of the correspondence analysis result clearly shows the disadvantages of this procedure: the edge effect, i.e. the points are clustered at the edges of the first axis, and the arch effect. An open-source implementation of DCA, based on the original FORTRAN code, is available[2] in the vegan R package.[3]
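The segment-wise detrending step described above can be sketched in a few lines of NumPy. The function name, inputs and defaults here are illustrative, not the DECORANA implementation; the sketch only shows the core idea of centring second-axis scores within segments of the first axis:

```python
import numpy as np

def detrend_by_segments(axis1, axis2, n_segments=26):
    """Detrending step of DCA (a sketch): split the first ordination
    axis into segments and centre the second-axis scores within each
    segment, which flattens the arch artifact."""
    edges = np.linspace(axis1.min(), axis1.max(), n_segments + 1)
    detrended = axis2.astype(float).copy()
    # Assign each sample to a segment along axis 1 (clip so the
    # maximum value falls into the last segment).
    seg = np.clip(np.digitize(axis1, edges) - 1, 0, n_segments - 1)
    for s in range(n_segments):
        mask = seg == s
        if mask.any():
            detrended[mask] -= detrended[mask].mean()
    return detrended
```

Applied to an arched CA solution (e.g. second-axis scores that are a quadratic function of the first axis), this leaves each segment with zero mean on the second axis, removing the arch while leaving the first-axis ordering untouched. The subsequent non-linear rescaling of segment widths is a separate step not shown here.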
https://en.wikipedia.org/wiki/Detrended_correspondence_analysis
Directional component analysis (DCA)[1][2][3] is a statistical method used in climate science for identifying representative patterns of variability in space–time data sets such as historical climate observations,[1] weather prediction ensembles[2] or climate ensembles.[3] The first DCA pattern is a pattern of weather or climate variability that is both likely to occur (measured using likelihood) and has a large impact (for a specified linear impact function, and given certain mathematical conditions: see below). The first DCA pattern contrasts with the first PCA pattern, which is likely to occur but may not have a large impact, and with a pattern derived from the gradient of the impact function, which has a large impact but may not be likely to occur. DCA differs from other pattern identification methods used in climate research, such as EOFs,[4] rotated EOFs[5] and extended EOFs,[6] in that it takes into account an external vector, the gradient of the impact. DCA provides a way to reduce large ensembles from weather forecasts[2] or climate models[3] to just two patterns. The first pattern is the ensemble mean, and the second pattern is the DCA pattern, which represents variability around the ensemble mean in a way that takes impact into account. DCA contrasts with other methods that have been proposed for the reduction of ensembles[7][8] in that it takes impact into account in addition to the structure of the ensemble. DCA is calculated from two inputs:[1][2][3] the space–time data set itself and a linear impact function defined by a vector of spatial weights. Consider a space–time data set X, containing individual spatial pattern vectors x, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix C. We define a linear impact function of a spatial pattern as r^T x, where r is a vector of spatial weights.
The first DCA pattern is given in terms of the covariance matrix C and the weights r by the proportional expression x ∝ Cr.[1][2][3] The pattern can then be normalized to any length as required.[1] If the weather or climate data is elliptically distributed (e.g., is distributed as a multivariate normal distribution or a multivariate t-distribution) then the first DCA pattern (DCA1) is defined as the spatial pattern with the following mathematical properties: For instance, in a rainfall anomaly dataset, using an impact metric defined as the total rainfall anomaly, the first DCA pattern is the spatial pattern that has the highest probability density for a given total rainfall anomaly. If the given total rainfall anomaly is chosen to have a large value, then this pattern combines being extreme in terms of the metric (i.e., representing large amounts of total rainfall) with being likely in terms of the pattern, and so is well suited as a representative extreme pattern. The main differences between principal component analysis (PCA) and DCA are[1] As a result, for unit vector spatial patterns: The degenerate cases occur when the PCA and DCA patterns are equal. Also, given the first PCA pattern, the DCA pattern can be scaled so that: Source:[1] Figure 1 gives an example, which can be understood as follows: From this diagram, the DCA pattern can be seen to possess the following properties: In this case the total rainfall anomaly of the PCA pattern is quite small, because of anticorrelations between the rainfall anomalies at the two locations. As a result, the first PCA pattern is not a good representative example of a pattern with large total rainfall anomaly, while the first DCA pattern is. In n dimensions the ellipse becomes an ellipsoid, the diagonal line becomes an (n − 1)-dimensional plane, and the PCA and DCA patterns are vectors in n dimensions.
DCA has been applied to the CRU data set of historical rainfall variability[9] in order to understand the most likely patterns of rainfall extremes in the US and China.[1] DCA has been applied to ECMWF medium-range weather forecast ensembles in order to identify the most likely patterns of extreme temperatures in the ensemble forecast.[2] DCA has been applied to ensemble climate model projections in order to identify the most likely patterns of extreme future rainfall.[3] Source:[1] Consider a space–time data set X, containing individual spatial pattern vectors x, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix C. As a function of x, the log probability density is proportional to −x^T C^{−1} x. We define a linear impact function of a spatial pattern as r^T x, where r is a vector of spatial weights. We then seek to find the spatial pattern that maximises the probability density for a given value of the linear impact function. This is equivalent to finding the spatial pattern that maximises the log probability density for a given value of the linear impact function, which is slightly easier to solve. This is a constrained maximisation problem, and can be solved using the method of Lagrange multipliers. The Lagrangian function is given by L(x, λ) = −x^T C^{−1} x − λ(r^T x − 1). Differentiating by x and setting to zero gives the solution x ∝ Cr. Normalising so that x is a unit vector gives x = Cr / (r^T C C r)^{1/2}. This is the first DCA pattern. Subsequent patterns can be derived which are orthogonal to the first, to form an orthonormal set and a method for matrix factorisation.
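The closed-form result just derived is a one-liner in practice. The following sketch (the function name and the toy two-location covariance are illustrative, not from the cited papers) computes the unit-length first DCA pattern x = Cr / (r^T C C r)^{1/2}:

```python
import numpy as np

def first_dca_pattern(C, r):
    """First directional component: Cr normalised to unit length,
    i.e. x = Cr / (r^T C C r)^{1/2} (a sketch of the closed form)."""
    Cr = C @ r
    return Cr / np.sqrt(r @ C @ Cr)  # r^T C C r, since C is symmetric

# Toy two-location rainfall example with anticorrelated anomalies.
C = np.array([[1.0, -0.8],
              [-0.8, 1.0]])
r = np.array([1.0, 1.0])   # impact = total rainfall anomaly
x = first_dca_pattern(C, r)
```

Because C is symmetric, r^T C C r equals ‖Cr‖², so the returned pattern is automatically a unit vector; in this anticorrelated toy case it puts equal positive weight on both locations, unlike the first PCA pattern.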
https://en.wikipedia.org/wiki/Directional_component_analysis
In data science, dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Joern Sesterhenn in 2008.[1][2] Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis (PCA), which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped (or driven) sinusoidal behavior in time. Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data.[3] The data takes the form of a snapshot sequence in which v_i ∈ R^M is the i-th snapshot of the flow field, and V_1^N ∈ R^{M×N} is a data matrix whose columns are the individual snapshots. These snapshots are assumed to be related via a linear mapping that defines a linear dynamical system that remains approximately the same over the duration of the sampling period.
Written in matrix form, this implies that V_2^N = A V_1^{N−1} + r e_{N−1}^T, where r is the vector of residuals that accounts for behaviors that cannot be described completely by A, e_{N−1} = {0, 0, …, 1} ∈ R^{N−1}, V_1^{N−1} = {v_1, v_2, …, v_{N−1}}, and V_2^N = {v_2, v_3, …, v_N}. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of A, which are referred to as the DMD eigenvalues and DMD modes respectively. There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors. In fluids applications, the size of a snapshot, M, is assumed to be much larger than the number of snapshots N, so there are many equally valid choices of A. The original DMD algorithm picks A so that each of the snapshots in V_2^N can be expressed as a linear combination of the snapshots in V_1^{N−1}. Because most of the snapshots appear in both data sets, this representation is error free for all snapshots except v_N, which is written as v_N = a_1 v_1 + a_2 v_2 + ⋯ + a_{N−1} v_{N−1} + r, where a = (a_1, a_2, …, a_{N−1}) is a set of coefficients DMD must identify and r is the residual. In total, V_2^N = V_1^{N−1} S + r e_{N−1}^T, where S is the companion matrix that shifts each snapshot to the next and carries the coefficient vector a in its last column. The vector a can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of V_1^{N−1} = QR, then a = R^{−1} Q^T v_N.
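The least-squares step a = R^{−1} Q^T v_N can be sketched directly with a QR decomposition. The function name and the synthetic snapshots in any usage are illustrative, not part of a DMD library; the sketch assumes the tall-matrix case M ≥ N − 1 so that R is square:

```python
import numpy as np

def companion_coefficients(V):
    """Last-column coefficients of the DMD companion matrix (a sketch).
    V holds the snapshots v_1..v_N as columns; returns the vector a
    minimizing ||V_1^{N-1} a - v_N|| via a = R^{-1} Q^T v_N."""
    V1, vN = V[:, :-1], V[:, -1]
    Q, R = np.linalg.qr(V1)          # reduced QR; R is (N-1) x (N-1)
    return np.linalg.solve(R, Q.T @ vN)
```

When v_N happens to lie exactly in the span of the earlier snapshots (for example, data generated by an exactly linear map whose Krylov sequence has become dependent), the residual r vanishes and V_1^{N−1} a reproduces v_N to machine precision.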
In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of S are approximations of the eigenvalues of A. Furthermore, if y is an eigenvector of S, then V_1^{N−1} y is an approximate eigenvector of A. The reason an eigendecomposition is performed on S rather than A is that S is much smaller than A, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot. Instead of computing the companion matrix S, the SVD-based approach yields the matrix S̃ that is related to A via a similarity transform. To do this, assume we have the SVD of V_1^{N−1} = U Σ W^T. Then V_2^N = A U Σ W^T + r e_{N−1}^T. Equivalent to the assumption made by the Arnoldi-based approach, we choose A such that the snapshots in V_2^N can be written as the linear superposition of the columns in U, which is equivalent to requiring that they can be written as the superposition of POD modes. With this restriction, minimizing the residual requires that it is orthogonal to the POD basis (i.e., U^T r = 0). Then multiplying both sides of the equation above by U^T yields U^T V_2^N = U^T A U Σ W^T, which can be manipulated to obtain S̃ = U^T A U = U^T V_2^N W Σ^{−1}. Because A and S̃ are related via a similarity transform, the eigenvalues of S̃ are the eigenvalues of A, and if y is an eigenvector of S̃, then U y is an eigenvector of A.
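The SVD-based derivation above reduces to a few lines of linear algebra: form S̃ = U^T V_2^N W Σ^{−1}, eigendecompose it, and map eigenvectors back through U. The following is an illustrative sketch (the function name and optional truncation argument are not from any particular library):

```python
import numpy as np

def dmd(V, rank=None):
    """SVD-based DMD (a sketch of the derivation above). V has one
    snapshot per column; returns the DMD eigenvalues and modes.
    `rank` optionally truncates the SVD to suppress noise."""
    V1, V2 = V[:, :-1], V[:, 1:]
    U, s, Wt = np.linalg.svd(V1, full_matrices=False)
    if rank is not None:
        U, s, Wt = U[:, :rank], s[:rank], Wt[:rank]
    # S_tilde = U^T A U = U^T V2 W Sigma^{-1}
    S_tilde = U.T @ V2 @ Wt.T @ np.diag(1.0 / s)
    eigvals, y = np.linalg.eig(S_tilde)   # DMD eigenvalues
    modes = U @ y                          # eigenvectors of A are U y
    return eigvals, modes
```

For noise-free snapshots of an exactly linear system, the recovered eigenvalues match those of the underlying map; the `rank` argument implements the truncation step whose importance for experimental data is noted below.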
In summary, the SVD-based approach is as follows: The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of V_1^{N−1}. As noted in [3], accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step. Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al.[4] established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach. In addition to the algorithms listed here, similar application-specific techniques have been developed. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD.[20] For a more comprehensive list, see Tu et al.[7] The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields (animated gif (1.9MB)) and yielded an approximated eigenvalue spectrum as depicted below. The analysis was applied to the numerical results, without referring to the governing equations. The profile is seen in white. The white arcs are the processor boundaries, since the computation was performed on a parallel computer using different computational blocks. Roughly a third of the spectrum was highly damped (large, negative λ_r) and is not shown. The dominant shedding mode is shown in the following pictures.
The image to the left is the real part, the image to the right the imaginary part, of the eigenvector. Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot. The top half corresponds to the entropy mode as above. The DMD analysis assumes a pattern of the form q(x_1, x_2, x_3, …) = e^{c x_1} q̂(x_2, x_3, …), where x_1 is any of the independent variables of the problem, but has to be selected in advance. Take for example a damped travelling-wave pattern, with the time as the preselected exponential factor. A sample is given in the following figure with ω = 2π/0.1, b = 0.02 and k = 2π/b. The left picture shows the pattern without, the right with, noise added. The amplitude of the random noise is the same as that of the pattern. A DMD analysis is performed with 21 synthetically generated fields using a time interval Δt = 1/90 s, limiting the analysis to f = 45 Hz. The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. Their numerical values are ω_1 = −0.201 and ω_{2/3} = −0.223 ± 62.768i respectively. The real one corresponds to the mean of the field, whereas ω_{2/3} corresponds to the imposed pattern with f = 10 Hz, yielding a relative error of −1/1000. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure. Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.
https://en.wikipedia.org/wiki/Dynamic_mode_decomposition
An eigenface (/ˈaɪɡən-/ EYE-gən-) is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition.[1] The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification.[2][3] The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set. The eigenface approach began with a search for a low-dimensional representation of face images. Sirovich and Kirby showed that principal component analysis could be used on a collection of face images to form a set of basis features.[2] These basis images, known as eigenpictures, could be linearly combined to reconstruct images in the original training set. If the training set consists of M images, principal component analysis can form a basis set of N images, where N < M. The reconstruction error is reduced by increasing the number of eigenpictures; however, the number needed is always chosen less than M. For example, to generate N eigenfaces for a training set of M face images, one can say that each face image is made up of "proportions" of all the N "features" or eigenfaces: face image_1 = (23% of E_1) + (2% of E_2) + (51% of E_3) + … + (1% of E_N). In 1991, M. Turk and A. Pentland expanded these results and presented the eigenface method of face recognition.[3] In addition to designing a system for automated face recognition using eigenfaces, they showed a way of calculating the eigenvectors of a covariance matrix such that computers of the time could perform eigendecomposition on a large number of face images.
Face images usually occupy a high-dimensional space, and conventional principal component analysis was intractable on such data sets. Turk and Pentland's paper demonstrated ways to extract the eigenvectors based on matrices sized by the number of images rather than the number of pixels. Once established, the eigenface method was expanded to include methods of preprocessing to improve accuracy.[4] Multiple manifold approaches were also used to build sets of eigenfaces for different subjects[5][6] and different features, such as the eyes.[7] A set of eigenfaces can be generated by performing a mathematical process called principal component analysis (PCA) on a large set of images depicting different human faces. Informally, eigenfaces can be considered a set of "standardized face ingredients", derived from statistical analysis of many pictures of faces. Any human face can be considered to be a combination of these standard faces. For example, one's face might be composed of the average face plus 10% from eigenface 1, 55% from eigenface 2, and even −3% from eigenface 3. Remarkably, it does not take many eigenfaces combined together to achieve a fair approximation of most faces. Also, because a person's face is not recorded as a digital photograph, but instead as just a list of values (one value for each eigenface in the database used), much less space is taken for each person's face. The eigenfaces that are created will appear as light and dark areas that are arranged in a specific pattern. This pattern is how different features of a face are singled out to be evaluated and scored. There will be a pattern to evaluate symmetry, whether there is any style of facial hair, where the hairline is, or an evaluation of the size of the nose or mouth. Other eigenfaces have patterns that are less simple to identify, and the image of the eigenface may look very little like a face.
The technique used in creating eigenfaces and using them for recognition is also used outside of face recognition: handwriting recognition, lip reading, voice recognition, sign language/hand gesture interpretation and medical imaging analysis. Therefore, some do not use the term eigenface, but prefer the term 'eigenimage'. To create a set of eigenfaces, one must: These eigenfaces can now be used to represent both existing and new faces: we can project a new (mean-subtracted) image on the eigenfaces and thereby record how that new face differs from the mean face. The eigenvalues associated with each eigenface represent how much the images in the training set vary from the mean image in that direction. Information is lost by projecting the image on a subset of the eigenvectors, but losses are minimized by keeping those eigenfaces with the largest eigenvalues. For instance, working with a 100 × 100 image will produce 10,000 eigenvectors. In practical applications, most faces can typically be identified using a projection on between 100 and 150 eigenfaces, so that most of the 10,000 eigenvectors can be discarded. An example is calculating eigenfaces with the Extended Yale Face Database B. To evade a computational and storage bottleneck, the face images are downsampled by a factor of 4 × 4 = 16. Note that although the covariance matrix S generates many eigenfaces, only a fraction of those are needed to represent the majority of the faces. For example, to represent 95% of the total variation of all face images, only the first 43 eigenfaces are needed. Performing PCA directly on the covariance matrix of the images is often computationally infeasible. If small images are used, say 100 × 100 pixels, each image is a point in a 10,000-dimensional space and the covariance matrix S is a matrix of 10,000 × 10,000 = 10^8 elements.
However, the rank of the covariance matrix is limited by the number of training examples: if there are N training examples, there will be at most N − 1 eigenvectors with non-zero eigenvalues. If the number of training examples is smaller than the dimensionality of the images, the principal components can be computed more easily as follows. Let T be the matrix of preprocessed training examples, where each column contains one mean-subtracted image. The covariance matrix can then be computed as S = TT^T, and the eigenvector decomposition of S is given by S v_i = TT^T v_i = λ_i v_i. However, TT^T is a large matrix, so if instead we take the eigenvalue decomposition of T^T T u_i = λ_i u_i, then we notice that by pre-multiplying both sides of the equation with T, we obtain TT^T (T u_i) = λ_i (T u_i). Meaning that, if u_i is an eigenvector of T^T T, then v_i = T u_i is an eigenvector of S. If we have a training set of 300 images of 100 × 100 pixels, the matrix T^T T is a 300 × 300 matrix, which is much more manageable than the 10,000 × 10,000 covariance matrix. Notice however that the resulting vectors v_i are not normalised; if normalisation is required it should be applied as an extra step. Let X denote the d × n data matrix with column x_i as the image vector with mean subtracted. Then the covariance matrix is S = XX^T. Let the singular value decomposition (SVD) of X be X = U Σ V^T. Then the eigenvalue decomposition for XX^T is XX^T = U Σ V^T V Σ^T U^T = U Σ Σ^T U^T. Thus we can see easily that the eigenfaces are the left singular vectors (the columns of U) associated with non-zero singular values, and the eigenvalues of S are the squared singular values. Using SVD on the data matrix X, it is unnecessary to calculate the actual covariance matrix to get eigenfaces.
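The small-matrix trick above is short to implement. The following sketch (function name and toy data are illustrative) eigendecomposes the N × N matrix T^T T instead of the d × d covariance, maps each u_i to T u_i, and adds the normalisation step noted above:

```python
import numpy as np

def eigenfaces(T, k):
    """Top-k eigenfaces via the small-matrix trick (a sketch).
    T has one mean-subtracted image per column; eigendecompose the
    N x N matrix T^T T, then map u_i -> T u_i to get eigenvectors
    of the covariance S = TT^T."""
    L = T.T @ T                            # N x N Gram matrix
    w, U = np.linalg.eigh(L)               # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]        # keep the k largest
    V = T @ U[:, order]                    # v_i = T u_i
    V /= np.linalg.norm(V, axis=0)         # normalisation is not automatic
    return V

# Toy usage: 5 "images" of 100 pixels each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
T = X - X.mean(axis=1, keepdims=True)      # subtract the mean image
V = eigenfaces(T, 3)
```

For the 300-image example in the text, this replaces a 10,000 × 10,000 eigenproblem with a 300 × 300 one while producing the same leading eigenvectors of S.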
To recognise faces, gallery images – those seen by the system – are saved as collections of weights describing the contribution each eigenface has to that image. When a new face is presented to the system for classification, its own weights are found by projecting the image onto the collection of eigenfaces. This provides a set of weights describing the probe face. These weights are then classified against all weights in the gallery set to find the closest match. A nearest-neighbour method is a simple approach: it computes the Euclidean distance between two weight vectors, and the gallery subject at the minimum distance is taken as the classification.[3]: 590 Intuitively, the recognition process with the eigenface method is to project query images into the face space spanned by the calculated eigenfaces, and to find the closest match to a face class in that face space. The weights of each gallery image only convey information describing that image, not that subject. An image of one subject under frontal lighting may have very different weights to those of the same subject under strong left lighting. This limits the application of such a system. Experiments in the original eigenface paper presented the following results: an average of 96% with light variation, 85% with orientation variation, and 64% with size variation.[3]: 590 Various extensions have been made to the eigenface method. The eigenfeatures method combines facial metrics (measuring distance between facial features) with the eigenface representation. Fisherface uses linear discriminant analysis[9] and is less sensitive to variation in lighting and pose of the face. Fisherface uses labelled data to retain more of the class-specific information during the dimension reduction stage. A further alternative to eigenfaces and Fisherfaces is the active appearance model. This approach uses an active shape model to describe the outline of a face.
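The projection-and-nearest-neighbour step described above can be sketched as follows. All names (function, arguments) are illustrative; the gallery is assumed to be stored as one weight vector per row:

```python
import numpy as np

def classify(probe, eigenfaces, mean_face, gallery_weights, labels):
    """Nearest-neighbour matching in face space (a sketch): project
    the mean-subtracted probe onto the eigenfaces, then return the
    label of the gallery weight vector at minimum Euclidean distance."""
    w = eigenfaces.T @ (probe - mean_face)          # probe weights
    d = np.linalg.norm(gallery_weights - w, axis=1)  # distance to each gallery entry
    return labels[int(np.argmin(d))]
```

In a real system a distance threshold would also be applied, so that probes far from every gallery entry are rejected as unknown rather than forced onto the nearest subject.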
By collecting many face outlines, principal component analysis can be used to form a basis set of models that encapsulate the variation of different faces. Many modern approaches still use principal component analysis as a means of dimension reduction or to form basis images for different modes of variation. Eigenface provides an easy and cheap way to realize face recognition. However, the deficiencies of the eigenface method are also obvious. To cope with illumination distraction in practice, the eigenface method usually discards the first three eigenfaces from the dataset. Since illumination is usually the cause of the largest variations in face images, the first three eigenfaces will mainly capture the information of three-dimensional lighting changes, which contributes little to face recognition. Discarding those three eigenfaces gives a decent boost in the accuracy of face recognition, but other methods such as Fisherface and linear space still have the advantage.
https://en.wikipedia.org/wiki/Eigenface
Most real-world data sets consist of data vectors whose individual components are not statistically independent. In other words, knowing the value of one element provides information about the values of other elements in the data vector. When this occurs, it can be desirable to create a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent. Later supervised learning usually works much better when the raw input data is first translated into such a factorial code. For example, suppose the final goal is to classify images with highly redundant pixels. A naive Bayes classifier will assume the pixels are statistically independent random variables and therefore fail to produce good results. If the data are first encoded in a factorial way, however, then the naive Bayes classifier will achieve its optimal performance (compare Schmidhuber et al. 1996). To create factorial codes, Horace Barlow and co-workers suggested minimizing the sum of the bit entropies of the code components of binary codes (1989). Jürgen Schmidhuber (1992) re-formulated the problem in terms of predictors and binary feature detectors, each receiving the raw data as an input. For each detector there is a predictor that sees the other detectors and learns to predict the output of its own detector in response to the various input vectors or images. But each detector uses a machine learning algorithm to become as unpredictable as possible. The global optimum of this objective function corresponds to a factorial code represented in a distributed fashion across the outputs of the feature detectors. Painsky, Rosset and Feder (2016, 2017) further studied this problem in the context of independent component analysis over finite alphabet sizes.
Through a series of theorems they show that the factorial coding problem can be solved exactly with a branch-and-bound search tree algorithm, or tightly approximated with a series of linear problems. In addition, they introduce a simple transformation (namely, order permutation) which provides a greedy yet very effective approximation of the optimal solution. Practically, they show that with a careful implementation, the favorable properties of the order permutation may be achieved at an asymptotically optimal computational complexity. Importantly, they provide theoretical guarantees, showing that while not every random vector can be efficiently decomposed into independent components, the majority of vectors decompose very well (that is, with a small constant cost) as the dimension increases. In addition, they demonstrate the use of factorial codes for data compression in multiple setups (2017).
https://en.wikipedia.org/wiki/Factorial_code
Functional principal component analysis (FPCA) is a statistical method for investigating the dominant modes of variation of functional data. Using this method, a random function is represented in the eigenbasis, which is an orthonormal basis of the Hilbert space L² that consists of the eigenfunctions of the autocovariance operator. FPCA represents functional data in the most parsimonious way, in the sense that when using a fixed number of basis functions, the eigenfunction basis explains more variation than any other basis expansion. FPCA can be applied for representing random functions,[1] or in functional regression[2] and classification. For a square-integrable stochastic process X(t), t ∈ 𝒯, let μ(t) = E(X(t)) and G(s, t) = Cov(X(s), X(t)) = Σ_k λ_k φ_k(s) φ_k(t), where λ_1 ≥ λ_2 ≥ … ≥ 0 are the eigenvalues and φ_1, φ_2, … are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator G: L²(𝒯) → L²(𝒯), (G f)(s) = ∫_𝒯 G(s, t) f(t) dt. By the Karhunen–Loève theorem, one can express the centered process in the eigenbasis, X(t) − μ(t) = Σ_k ξ_k φ_k(t), where ξ_k = ∫_𝒯 (X(t) − μ(t)) φ_k(t) dt is the principal component associated with the k-th eigenfunction φ_k, with the properties E(ξ_k) = 0 and Var(ξ_k) = λ_k. The centered process is then equivalent to ξ_1, ξ_2, …. A common assumption is that X can be represented by only the first few eigenfunctions (after subtracting the mean function), i.e. X(t) ≈ X_m(t) = μ(t) + Σ_{k=1}^{m} ξ_k φ_k(t). The first eigenfunction φ_1 depicts the dominant mode of variation of X: it maximizes Var(∫_𝒯 (X(t) − μ(t)) φ(t) dt) over unit-norm φ. The k-th eigenfunction φ_k is the dominant mode of variation orthogonal to φ_1, φ_2, …, φ_{k−1}, maximizing the same variance subject to orthogonality to the first k − 1 eigenfunctions. Let Y_ij = X_i(t_ij) + ε_ij be the observations made at locations (usually time points) t_ij, where X_i is the i-th realization of the smooth stochastic process that generates the data, and ε_ij are independently and identically distributed normal random variables with mean 0 and variance σ², j = 1, 2, …, m_i.
To obtain an estimate of the mean function μ(tij), if a dense sample on a regular grid is available, one may take the average at each location tij:

μ̂(tj) = (1/n) Σ_{i=1}^n Yij.

If the observations are sparse, one needs to smooth the data pooled from all observations to obtain the mean estimate,[3] using smoothing methods like local linear smoothing or spline smoothing. Then the estimate of the covariance function Ĝ(s, t) is obtained by averaging (in the dense case) or smoothing (in the sparse case) the raw covariances

Gi(tij, til) = (Yij − μ̂(tij))(Yil − μ̂(til)),  j, l = 1, …, mi.

Note that the diagonal elements of Gi should be removed because they contain measurement error.[4]

In practice, Ĝ(s, t) is discretized to an equally spaced dense grid, and the estimation of the eigenvalues λk and eigenvectors vk is carried out by numerical linear algebra.[5] The eigenfunction estimates φ̂k can then be obtained by interpolating the eigenvectors v̂k. The fitted covariance should be positive definite and symmetric, and is then obtained as

G̃(s, t) = Σ_{λ̂k > 0} λ̂k φ̂k(s) φ̂k(t).

Let V̂(t) be a smoothed version of the diagonal elements Gi(tij, tij) of the raw covariance matrices. Then V̂(t) is an estimate of G(t, t) + σ². An estimate of σ² is obtained by

σ̂² = (2/|𝒯|) ∫_{𝒯₁} (V̂(t) − G̃(t, t)) dt

if this quantity is positive, and zero otherwise, where 𝒯₁ is an interior subinterval of 𝒯.

If the observations Xij, j = 1, 2, …, mi are dense in 𝒯, then the k-th FPC ξk can be estimated by numerical integration, implementing

ξ̂ik = ∫_𝒯 (Xi(t) − μ̂(t)) φ̂k(t) dt.

However, if the observations are sparse, this method will not work. Instead, one can use best linear unbiased predictors,[3] yielding

ξ̂ik = λ̂k φ̂ikᵀ Σ̂_{Yi}^{−1} (Yi − μ̂i),

where φ̂ik = (φ̂k(ti1), …, φ̂k(ti,mi))ᵀ, Σ̂_{Yi} = G̃i + σ̂² I_{mi}, and G̃ is evaluated at the grid points generated by tij, j = 1, 2, …, mi.
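The dense-design steps above (pointwise mean, covariance averaging, eigendecomposition of the discretized covariance operator, scores by numerical integration) can be sketched with NumPy. This is an illustrative simulation with made-up eigenfunctions, not the PACE implementation; multiplying by the grid spacing `dt` approximates the integral operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50                       # n curves observed on a dense regular grid of m points
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Simulate X_i(t) = xi_i1 * sqrt(2) sin(pi t) + xi_i2 * sqrt(2) sin(2 pi t) + noise
phi = np.vstack([np.sqrt(2) * np.sin(np.pi * t),
                 np.sqrt(2) * np.sin(2 * np.pi * t)])    # true orthonormal eigenfunctions
xi = rng.normal(0, [2.0, 1.0], size=(n, 2))              # true scores, lambda = 4 and 1
Y = xi @ phi + rng.normal(0, 0.1, size=(n, m))           # observations with measurement error

# Step 1: mean estimate by pointwise averaging (dense case)
mu_hat = Y.mean(axis=0)
Yc = Y - mu_hat

# Step 2: covariance estimate by averaging the raw covariances
G_hat = (Yc.T @ Yc) / n

# Step 3: eigendecomposition; the factor dt discretizes the integral operator
lam, V = np.linalg.eigh(G_hat * dt)
order = np.argsort(lam)[::-1]
lam_hat = lam[order][:2]                                 # estimated eigenvalues
phi_hat = (V[:, order][:, :2] / np.sqrt(dt)).T           # eigenfunctions with unit L2 norm

# Step 4: FPC scores by numerical integration of (X_i - mu) * phi_k
xi_hat = Yc @ phi_hat.T * dt

print(lam_hat)    # close to the true eigenvalues (4, 1)
```

The estimated scores agree with the true ones up to the usual sign ambiguity of eigenfunctions.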
The algorithm, PACE, has available Matlab[6] and R[7] packages. Asymptotic convergence properties of these estimates have been investigated.[3][8][9]

FPCA can be applied for displaying the modes of functional variation,[1][10] in scatterplots of FPCs against each other or of responses against FPCs, for modeling sparse longitudinal data,[3] or for functional regression and classification (e.g., functional linear regression).[2] Scree plots and other methods can be used to determine the number of components to include.

Functional principal component analysis has varied applications in time series analysis. At present, the method is being adapted from traditional multivariate techniques to analyze financial data sets such as stock market indices and to generate implied volatility graphs.[11] A good example of the advantages of the functional approach is the smoothed FPCA (SPCA), developed by Silverman [1996] and studied by Pezzulli and Silverman [1993], which combines FPCA with a general smoothing approach, making it possible to use the information stored in linear differential operators. An important application of FPCA, already known from multivariate PCA, is motivated by the Karhunen–Loève decomposition of a random function into a set of functional parameters: factor functions and corresponding factor loadings (scalar random variables). This application is more important than in standard multivariate PCA, since the distribution of a random function is in general too complex to be analyzed directly, and the Karhunen–Loève decomposition reduces the analysis to the interpretation of the factor functions and of the distribution of scalar random variables. Due to this dimensionality reduction, as well as its accuracy in representing data, there is wide scope for further development of functional principal component techniques in the financial field.
PCA has also found applications in automotive engineering.[12][13][14][15]

Principal component analysis (PCA) and FPCA can be compared in several respects. The two methods are both used for dimensionality reduction, and in implementations FPCA uses a PCA step. However, PCA and FPCA differ in some critical aspects. First, the order of multivariate data in PCA can be permuted with no effect on the analysis, whereas the order of functional data carries time or space information and cannot be reordered. Second, the spacing of observations in FPCA matters, while there is no spacing issue in PCA. Third, regular PCA does not work for high-dimensional data without regularization, while FPCA has built-in regularization due to the smoothness of the functional data and the truncation to a finite number of included components.
https://en.wikipedia.org/wiki/Functional_principal_component_analysis
Geometric data analysis comprises geometric aspects of image analysis, pattern analysis, and shape analysis, as well as the approach of multivariate statistics, which treats arbitrary data sets as clouds of points in an n-dimensional space. This includes topological data analysis, cluster analysis, inductive data analysis, correspondence analysis, multiple correspondence analysis, principal component analysis and the iconography of correlations.
https://en.wikipedia.org/wiki/Geometric_data_analysis
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents are statistically independent of each other.[1] ICA was invented by Jeanny Hérault and Christian Jutten in 1985.[2] ICA is a special case of blind source separation. A common example application of ICA is the "cocktail party problem" of listening in on one person's speech in a noisy room.[3]

Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results.[5] ICA is also used, for analysis purposes, on signals that are not supposed to be generated by mixing.

A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. Note that a filtered and delayed signal is a copy of a dependent component, and thus the statistical independence assumption is not violated.

Mixing weights for constructing the M observed signals from the N components can be placed in an M × N matrix. An important point is that if N sources are present, at least N observations (e.g. microphones, if the observed signal is audio) are needed to recover the original signals. When there are an equal number of observations and source signals, the mixing matrix is square (M = N).
Other cases of underdetermined (M < N) and overdetermined (M > N) mixing have also been investigated.

The success of ICA separation of mixed signals relies on two assumptions and three effects of mixing source signals.

Two assumptions:

- The source signals are independent of each other.
- The values in each source signal have non-Gaussian distributions.

Three effects of mixing source signals:

- Independence: per the assumption above, the source signals are independent, but their mixtures are not, since the mixtures share the same source signals.
- Normality: by the central limit theorem, the distribution of a sum of independent random variables tends toward a Gaussian distribution, so a mixture is usually closer to Gaussian than any of the individual sources.
- Complexity: the temporal complexity of any mixture is greater than (or equal to) that of its simplest constituent source signal.

These principles contribute to the basic establishment of ICA. If the signals extracted from a set of mixtures are independent and have non-Gaussian distributions or have low complexity, then they must be source signals.[6][7]

Another common example is image steganography, where ICA is used to embed one image within another. For instance, two grayscale images can be linearly combined to create mixed images in which the hidden content is visually imperceptible. ICA can then be used to recover the original source images from the mixtures. This technique underlies digital watermarking, which allows the embedding of ownership information into images, as well as more covert applications such as undetected information transmission. In such applications, ICA serves to unmix the data based on statistical independence, making it possible to extract hidden components that are not apparent in the observed data.

Steganographic techniques, including those potentially involving ICA-based analysis, have been used in real-world cyberespionage cases. In 2010, the FBI uncovered a Russian spy network known as the "Illegals Program" (Operation Ghost Stories), where agents used custom-built steganography tools to conceal encrypted text messages within image files shared online.[8] In another case, a former General Electric engineer, Xiaoqing Zheng, was convicted in 2022 for economic espionage.
Zheng used steganography to exfiltrate sensitive turbine technology by embedding proprietary data within image files for transfer to entities in China.[9]

ICA finds the independent components (also called factors, latent variables or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are minimization of mutual information and maximization of non-Gaussianity. The minimization-of-mutual-information (MMI) family of ICA algorithms uses measures like Kullback–Leibler divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy.[10]

Typical algorithms for ICA use centering (subtracting the mean to create a zero-mean signal), whitening (usually with the eigenvalue decomposition),[11] and dimensionality reduction as preprocessing steps, in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.

In the classical ICA model, it is assumed that the observed data xi ∈ Rᵐ at time ti is generated from source signals si ∈ Rᵐ via a linear transformation xi = A si, where A is an unknown, invertible mixing matrix. To recover the source signals, the data is first centered (zero mean) and then whitened so that the transformed data has unit covariance. This whitening reduces the problem from estimating a general matrix A to estimating an orthogonal matrix V, significantly simplifying the search for independent components.
If the covariance matrix of the centered data is Σx = AAᵀ, then, using the eigendecomposition Σx = QDQᵀ, the whitening transformation can be taken as D^(−1/2)Qᵀ. This step ensures that the recovered sources are uncorrelated and of unit variance, leaving only the task of rotating the whitened data to maximize statistical independence. This general derivation underlies many ICA algorithms and is foundational in understanding the ICA model.[12]

Independent component analysis (ICA) addresses the problem of recovering a set of unobserved source signals si = (si1, si2, …, sim)ᵀ from observed mixed signals xi = (xi1, xi2, …, xim)ᵀ, based on the linear mixing model:

xi = A si,

where A is an m × m invertible matrix called the mixing matrix, si represents the m-dimensional vector containing the values of the sources at time ti, and xi is the corresponding vector of observed values at time ti. The goal is to estimate both A and the source signals {si} solely from the observed data {xi}.
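The centering and whitening steps described above can be sketched in NumPy (an illustrative example with simulated Laplace sources and a made-up mixing matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
S = rng.laplace(size=(2, n))                 # non-Gaussian, independent sources
A = np.array([[1.0, 0.5], [0.5, 1.0]])       # "unknown" mixing matrix
X = A @ S                                    # observed mixtures

# Centering
Xc = X - X.mean(axis=1, keepdims=True)

# Whitening via the eigendecomposition Sigma_x = Q D Q^T
Sigma = np.cov(Xc)
D, Q = np.linalg.eigh(Sigma)
W_white = np.diag(D ** -0.5) @ Q.T
Z = W_white @ Xc

print(np.cov(Z))    # the identity matrix (up to floating point): only a rotation remains
```

After this step the covariance of `Z` is the identity, so finding the sources reduces to finding an orthogonal matrix.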
After centering, the Gram matrix is computed as

(X*)ᵀ X* = Q D Qᵀ,

where D is a diagonal matrix with positive entries (assuming X* has maximum rank) and Q is an orthogonal matrix.[13] Writing the SVD of the mixing matrix A = U Σ Vᵀ and comparing with AAᵀ = U Σ² Uᵀ, the mixing matrix A has the form

A = Q D^(1/2) Vᵀ.

So the normalized source values satisfy si* = V yi*, where

yi* = D^(−1/2) Qᵀ xi*.

Thus, ICA reduces to finding the orthogonal matrix V. This matrix can be computed using optimization techniques via projection pursuit methods (see Projection pursuit).[14]

Well-known algorithms for ICA include infomax, FastICA, JADE, and kernel-independent component analysis, among others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, or the proper scaling (including sign) of the source signals.

ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), while the code components are statistically independent.

The components xi of the observed random vector x = (x1, …, xm)ᵀ are generated as a sum of the independent components sk, k = 1, …, n:

xi = ai,1 s1 + ⋯ + ai,k sk + ⋯ + ai,n sn,

weighted by the mixing weights ai,k.
The same generative model can be written in vector form as x = Σ_{k=1}^n sk ak, where the observed random vector x is represented by the basis vectors ak = (a1,k, …, am,k)ᵀ. The basis vectors ak form the columns of the mixing matrix A = (a1, …, an), and the generative formula can be written as x = As, where s = (s1, …, sn)ᵀ.

Given the model and realizations (samples) x1, …, xN of the random vector x, the task is to estimate both the mixing matrix A and the sources s. This is done by adaptively calculating the w vectors and setting up a cost function which either maximizes the non-Gaussianity of the calculated sk = wᵀx or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.

The original sources s can be recovered by multiplying the observed signals x with the inverse of the mixing matrix W = A^(−1), also known as the unmixing matrix. Here it is assumed that the mixing matrix is square (n = m). If the number of basis vectors is greater than the dimensionality of the observed vectors, n > m, the task is overcomplete but is still solvable with the pseudoinverse.
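The algebra of the generative model and its inversion can be illustrated directly: with a known square mixing matrix, multiplying the mixtures by its inverse recovers the sources exactly (a toy sketch with made-up numbers; in real ICA the matrix A is unknown and must be estimated):

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, n_samples = 3, 4
S = rng.uniform(-1, 1, size=(n_src, n_samples))   # source realizations s
A = np.array([[1.0, 0.3, 0.2],
              [0.2, 1.0, 0.4],
              [0.1, 0.5, 1.0]])                   # square mixing matrix (n = m)
X = A @ S                                         # observed mixtures x = A s

W = np.linalg.inv(A)                              # unmixing matrix W = A^(-1)
S_rec = W @ X
print(np.allclose(S_rec, S))                      # True: exact recovery when A is known
```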
With the added assumption of zero-mean and uncorrelated Gaussian noise n ~ N(0, diag(Σ)), the ICA model takes the form x = As + n.

The mixing of the sources does not need to be linear. Using a nonlinear mixing function f(·|θ) with parameters θ, the nonlinear ICA model is x = f(s|θ) + n.

The independent components are identifiable up to a permutation and scaling of the sources.[15] This identifiability requires that at most one of the sources is Gaussian and that the number of observed mixtures is at least as large as the number of estimated components, with the mixing matrix A of full rank.

A special variant of ICA is binary ICA, in which both signal sources and monitors are in binary form and the observations from monitors are disjunctive mixtures of binary independent sources. The problem was shown to have applications in many domains, including medical diagnosis, multi-cluster assignment, network tomography and internet resource management.

Let x1, x2, …, xm be the set of binary variables from m monitors and y1, y2, …, yn be the set of binary variables from n sources. Source–monitor connections are represented by the (unknown) mixing matrix G, where gij = 1 indicates that the signal from the i-th source can be observed by the j-th monitor. The system works as follows: at any time, if a source i is active (yi = 1) and it is connected to the monitor j (gij = 1), then the monitor j will observe some activity (xj = 1). Formally we have:

xj = ⋁_{i=1}^{n} (yi ∧ gij),  j = 1, 2, …, m,

where ∧ is Boolean AND and ∨ is Boolean OR. Noise is not explicitly modelled; rather, it can be treated as additional independent sources.
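The disjunctive mixing rule above can be written directly in NumPy (an illustrative forward simulation with a randomly drawn connection matrix, not a reconstruction algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sources, n_monitors = 4, 3

G = rng.integers(0, 2, size=(n_sources, n_monitors))  # g_ij = 1 iff source i reaches monitor j
y = rng.integers(0, 2, size=n_sources)                # binary source activity at one time step

# x_j = OR over i of (y_i AND g_ij): a monitor fires if any active source is connected to it
x = np.any(y[:, None] & G, axis=0).astype(int)
print(x)
```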
The above problem can be heuristically solved[16] by assuming the variables are continuous and running FastICA on the binary observation data to get the mixing matrix G (real values), then applying rounding techniques to G to obtain the binary values. This approach has been shown to produce a highly inaccurate result.[citation needed]

Another method is to use dynamic programming: recursively breaking the observation matrix X into its sub-matrices and running the inference algorithm on these sub-matrices. The key observation which leads to this algorithm is the sub-matrix X⁰ of X where xij = 0, ∀j, which corresponds to the unbiased observation matrix of hidden components that do not have a connection to the i-th monitor. Experimental results from[17] show that this approach is accurate under moderate noise levels.

The generalized binary ICA framework[18] introduces a broader problem formulation which does not necessitate any knowledge of the generative model. In other words, this method attempts to decompose a source into its independent components (as much as possible, and without losing any information) with no prior assumption on the way it was generated. Although this problem appears quite complex, it can be accurately solved with a branch-and-bound search-tree algorithm, or tightly upper bounded with a single multiplication of a matrix with a vector.

Signal mixtures tend to have Gaussian probability density functions, while source signals tend to have non-Gaussian probability density functions. Each source signal can be extracted from a set of signal mixtures by taking the inner product of a weight vector with those signal mixtures, where this inner product provides an orthogonal projection of the signal mixtures. The remaining challenge is finding such a weight vector.
One type of method for doing so is projection pursuit.[19][20]

Projection pursuit seeks one projection at a time, such that the extracted signal is as non-Gaussian as possible. This contrasts with ICA, which typically extracts M signals simultaneously from M signal mixtures, requiring the estimation of an M × M unmixing matrix. One practical advantage of projection pursuit over ICA is that fewer than M signals can be extracted if required, where each source signal is extracted from M signal mixtures using an M-element weight vector.

We can use kurtosis to recover multiple source signals by finding the correct weight vectors with the use of projection pursuit. The kurtosis of the probability density function of a signal, for a finite sample, is computed as

K = E[(y − ȳ)⁴] / (E[(y − ȳ)²])² − 3,

where ȳ is the sample mean of y, the extracted signal. The constant 3 ensures that Gaussian signals have zero kurtosis, super-Gaussian signals have positive kurtosis, and sub-Gaussian signals have negative kurtosis. The denominator is the variance of y, and ensures that the measured kurtosis takes account of signal variance. The goal of projection pursuit is to maximize the kurtosis, making the extracted signal as non-normal as possible.

Using kurtosis as a measure of non-normality, we can now examine how the kurtosis of a signal y = wᵀx extracted from a set of M mixtures x = (x1, x2, …, xM)ᵀ varies as the weight vector w is rotated around the origin. Given our assumption that each source signal s is super-Gaussian, we would expect the kurtosis of the extracted signal y to be maximal precisely when y equals one of the source signals.

For multiple source mixture signals, we can use kurtosis and Gram–Schmidt orthogonalization (GSO) to recover the signals. Given M signal mixtures in an M-dimensional space, GSO projects these data points onto an (M − 1)-dimensional space by using the weight vector.
We can guarantee the independence of the extracted signals with the use of GSO.

In order to find the correct value of w, we can use the gradient descent method. We first whiten the data, transforming x into a new mixture z of unit variance, with z = (z1, z2, …, zM)ᵀ. This process can be achieved by applying singular value decomposition to x, rescaling each vector Ui = Ui / E(Ui²), and letting z = U. The signal extracted by a weight vector w is y = wᵀz. If the weight vector w has unit length, then the variance of y is also 1, that is E[(wᵀz)²] = 1. The kurtosis can thus be written as

K = E[(wᵀz)⁴] − 3.

The updating process for w is

w_new = w_old + η E[z (w_oldᵀ z)³],

where η is a small constant that guarantees w converges to the optimal solution. After each update, we normalize w_new = w_new / |w_new|, set w_old = w_new, and repeat the updating process until convergence. We can also use other algorithms to update the weight vector w.

Another approach is to use negentropy[10][21] instead of kurtosis. Negentropy is more robust than kurtosis, as kurtosis is very sensitive to outliers. The negentropy methods are based on an important property of the Gaussian distribution: a Gaussian variable has the largest entropy among all continuous random variables of equal variance. This is also the reason why we want to find the most non-Gaussian variables.
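The whiten-then-maximize-kurtosis procedure above can be sketched in NumPy. This is an illustrative simulation with Laplace (super-Gaussian) sources; the gradient step uses E[z(wᵀz)³], the gradient of the fourth moment up to a constant, followed by renormalization of w:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
S = rng.laplace(size=(2, n))                  # super-Gaussian sources
A = np.array([[2.0, 1.0], [1.0, 2.0]])
X = A @ S                                     # observed mixtures

# Whiten (centering + eigendecomposition), so only a rotation w remains to be found
Xc = X - X.mean(axis=1, keepdims=True)
D, Q = np.linalg.eigh(np.cov(Xc))
Z = np.diag(D ** -0.5) @ Q.T @ Xc

# Gradient ascent on the kurtosis of y = w^T z, renormalizing w after each step
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.5
for _ in range(200):
    y = w @ Z
    grad = (Z * y ** 3).mean(axis=1)          # d/dw E[(w^T z)^4], up to a constant factor
    w = w + eta * grad
    w /= np.linalg.norm(w)

y = w @ Z
# The extracted signal should line up with one of the true sources (up to sign and scale)
corr = np.abs(np.corrcoef(np.vstack([y, S]))[0, 1:])
print(corr.max())
```

To extract further sources, each new weight vector would be orthogonalized against the previous ones (the GSO step described above) before repeating the ascent.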
A simple proof can be found in the article on differential entropy. The negentropy of a variable x is defined as

J(x) = H(y) − H(x),

where y is a Gaussian random variable with the same covariance matrix as x. An approximation for negentropy is

J(x) ≈ (1/12) E[x³]² + (1/48) kurt(x)².

A proof can be found in the original papers of Comon;[22][10] it has been reproduced in the book Independent Component Analysis by Aapo Hyvärinen, Juha Karhunen, and Erkki Oja.[23] This approximation also suffers from the same problem as kurtosis (sensitivity to outliers). Other approaches have been developed, using nonquadratic functions G1 and G2.[24] A common choice of G1 and G2 is

G1(u) = (1/a1) log cosh(a1 u),  G2(u) = −exp(−u²/2).

Infomax ICA[25] is essentially a multivariate, parallel version of projection pursuit. Whereas projection pursuit extracts a series of signals one at a time from a set of M signal mixtures, ICA extracts M signals in parallel. This tends to make ICA more robust than projection pursuit.[26]

The projection pursuit method uses Gram–Schmidt orthogonalization to ensure the independence of the extracted signals, while ICA uses infomax and maximum likelihood estimation to ensure the independence of the extracted signals. The non-normality of the extracted signal is achieved by assigning an appropriate model, or prior, for the signal.

The process of ICA based on infomax, in short, is as follows: given a set of signal mixtures x and a set of identical independent model cumulative distribution functions (cdfs) g, we seek the unmixing matrix W which maximizes the joint entropy of the signals Y = g(y), where y = Wx are the signals extracted by W. Given the optimal W, the signals Y have maximum entropy and are therefore independent, which ensures that the extracted signals y = g⁻¹(Y) are also independent. g is an invertible function, and is the signal model.
Note that if the source signal model probability density function ps matches the probability density function of the extracted signal p_y, then maximizing the joint entropy of Y also maximizes the amount of mutual information between x and Y. For this reason, using entropy to extract independent signals is known as infomax.

Consider the entropy of the vector variable Y = g(y), where y = Wx is the set of signals extracted by the unmixing matrix W. For a finite set of values sampled from a distribution with pdf p_y, the entropy of Y can be estimated as

H(Y) = −(1/N) Σ_{t=1}^{N} ln p_Y(Yᵗ).

The joint pdf p_Y can be shown to be related to the joint pdf p_y of the extracted signals by the multivariate form

p_Y(Y) = p_y(y) / |J|,

where J = ∂Y/∂y is the Jacobian matrix. We have |J| = g′(y), and g′ is the pdf assumed for the source signals, g′ = ps; therefore,

p_Y(Y) = p_y(y) / ps(y).

We know that when p_y = ps, p_Y has the uniform distribution and H(Y) is maximized. Since

p_y(y) = p_x(x) / |W|,

where |W| is the absolute value of the determinant of the unmixing matrix W, it follows that

H(Y) = (1/N) Σ_{t=1}^{N} ln ps(yᵗ) + ln |W| + H(x),

so, since H(x) = −(1/N) Σ_{t=1}^{N} ln p_x(xᵗ), and maximizing over W does not affect H(x), we can maximize the function

h(W) = (1/N) Σ_{t=1}^{N} ln ps(yᵗ) + ln |W|

to achieve the independence of the extracted signals.
If the M marginal pdfs of the model joint pdf ps are independent and we use the commonly used super-Gaussian model pdf for the source signals, ps = (1 − tanh(s)²), then we have

h(W) = (1/N) Σ_{i=1}^{M} Σ_{t=1}^{N} ln(1 − tanh(wᵢᵀ xᵗ)²) + ln |W|.

In sum, given an observed signal mixture x, the corresponding set of extracted signals y and a source signal model ps = g′, we can find the optimal unmixing matrix W and make the extracted signals independent and non-Gaussian. As in the projection pursuit situation, we can use the gradient descent method to find the optimal solution for the unmixing matrix.

Maximum likelihood estimation (MLE) is a standard statistical tool for finding parameter values (e.g. the unmixing matrix W) that provide the best fit of some data (e.g., the extracted signals y) to a given model (e.g., the assumed joint probability density function (pdf) ps of the source signals).[26]

The ML "model" includes a specification of a pdf, which in this case is the pdf ps of the unknown source signals s. Using ML ICA, the objective is to find an unmixing matrix that yields extracted signals y = Wx with a joint pdf as similar as possible to the joint pdf ps of the unknown source signals s.

MLE is thus based on the assumption that if the model pdf ps and the model parameters A are correct, then a high probability should be obtained for the data x that were actually observed. Conversely, if A is far from the correct parameter values, then a low probability of the observed data would be expected.
Using MLE, we call the probability of the observed data for a given set of model parameter values (e.g., a pdf ps and a matrix A) the likelihood of the model parameter values given the observed data.

We define a likelihood function L(W) of W:

L(W) = ps(Wx) |det W|.

This equals the probability density at x, since s = Wx.

Thus, if we wish to find a W that is most likely to have generated the observed mixtures x from the unknown source signals s with pdf ps, then we need only find that W which maximizes the likelihood L(W). The unmixing matrix that maximizes this equation is known as the MLE of the optimal unmixing matrix.

It is common practice to use the log likelihood, because it is easier to evaluate. As the logarithm is a monotonic function, the W that maximizes the function L(W) also maximizes its logarithm ln L(W). This allows us to take the logarithm of the equation above, which yields the log likelihood function

ln L(W) = Σᵢ Σₜ ln ps(wᵢᵀ xₜ) + N ln |det W|.

If we substitute a commonly used high-kurtosis model pdf for the source signals, ps = (1 − tanh(s)²), then we have

ln L(W) = (1/N) Σᵢᴹ Σₜᴺ ln(1 − tanh(wᵢᵀ xₜ)²) + ln |det W|.

The matrix W that maximizes this function is the maximum likelihood estimate.
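The log likelihood above can be evaluated directly. The following NumPy sketch (an illustration with simulated Laplace sources and a made-up mixing matrix, using the unnormalized model pdf ps = 1 − tanh²) checks that the true unmixing matrix scores a higher log likelihood than a matrix that leaves the signals mixed:

```python
import numpy as np

def log_likelihood(W, X):
    # ln L(W) = (1/N) * sum_i sum_t ln(1 - tanh(w_i^T x_t)^2) + ln|det W|
    Y = W @ X
    N = X.shape[1]
    return np.log(1 - np.tanh(Y) ** 2).sum() / N + np.log(abs(np.linalg.det(W)))

rng = np.random.default_rng(5)
n = 10000
S = rng.laplace(size=(2, n))                 # super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S                                    # observed mixtures

W_true = np.linalg.inv(A)                    # ideal unmixing matrix
W_wrong = np.eye(2)                          # leaves the signals mixed
print(log_likelihood(W_true, X), log_likelihood(W_wrong, X))
```

In a full ML ICA algorithm, W would be found by gradient ascent on this function rather than assumed known.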
The early general framework for independent component analysis was introduced by Jeanny Hérault and Bernard Ans in 1984,[27] further developed by Christian Jutten in 1985 and 1986,[2][28][29] refined by Pierre Comon in 1991,[22] and popularized in his paper of 1994.[10] In 1995, Tony Bell and Terry Sejnowski introduced a fast and efficient ICA algorithm based on infomax, a principle introduced by Ralph Linsker in 1987. A link exists between maximum-likelihood estimation and infomax approaches.[30] A quite comprehensive tutorial on the maximum-likelihood approach to ICA was published by J.-F. Cardoso in 1998.[31]

There are many algorithms available in the literature which perform ICA. A widely used one, including in industrial applications, is the FastICA algorithm, developed by Hyvärinen and Oja,[32] which uses negentropy as the cost function, already proposed seven years earlier by Pierre Comon in this context.[10] Other examples are more closely related to blind source separation, where a more general approach is used. For example, one can drop the independence assumption and separate mutually correlated signals, that is, statistically "dependent" signals. Sepp Hochreiter and Jürgen Schmidhuber showed how to obtain non-linear ICA or source separation as a by-product of regularization (1999).[33] Their method does not require a priori knowledge about the number of independent sources.

ICA can be extended to analyze non-physical signals. For instance, ICA has been applied to discover discussion topics in a bag of news list archives.

Some ICA applications are listed below:[6]

ICA can be applied through the following software:
https://en.wikipedia.org/wiki/Independent_component_analysis
In the field of multivariate statistics, kernel principal component analysis (kernel PCA)[1] is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space. Recall that conventional PCA operates on zero-centered data; that is, data satisfying 1N∑i=1Nxi=0{\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}\mathbf {x} _{i}=\mathbf {0} }, where xi{\displaystyle \mathbf {x} _{i}} is one of the N{\displaystyle N} multivariate observations. It operates by diagonalizing the covariance matrix; in other words, it gives an eigendecomposition of the covariance matrix (see also: Covariance matrix as a linear operator). To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in d<N{\displaystyle d<N} dimensions, they can almost always be linearly separated in d≥N{\displaystyle d\geq N} dimensions. That is, given N points xi{\displaystyle \mathbf {x} _{i}}, if we map them to an N-dimensional space with a map Φ{\displaystyle \Phi }, it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this Φ{\displaystyle \Phi } creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA. Instead, in kernel PCA, a non-trivial, arbitrary Φ{\displaystyle \Phi } function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional Φ{\displaystyle \Phi }'s if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the Φ{\displaystyle \Phi }-space, which we will call the 'feature space', we can create the N-by-N kernel K{\displaystyle K}, which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space.
The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve for the eigenvectors and eigenvalues of the covariance matrix in the Φ(x){\displaystyle \Phi (\mathbf {x} )}-space (see Kernel trick). The N elements in each column of K represent the dot product of one point of the transformed data with respect to all the transformed points (N points). Some well-known kernels are shown in the example below. Because we are never working directly in the feature space, the kernel formulation of PCA is restricted in that it computes not the principal components themselves, but the projections of our data onto those components. The projection from a point in the feature space Φ(x){\displaystyle \Phi (\mathbf {x} )} onto the kth principal component Vk{\displaystyle V^{k}} (where the superscript k means the component k, not powers of k) is (Vk)TΦ(x)=∑i=1NaikΦ(xi)TΦ(x).{\displaystyle (V^{k})^{T}\Phi (\mathbf {x} )=\sum _{i=1}^{N}\mathbf {a} _{i}^{k}\Phi (\mathbf {x} _{i})^{T}\Phi (\mathbf {x} ).} We note that Φ(xi)TΦ(x){\displaystyle \Phi (\mathbf {x} _{i})^{T}\Phi (\mathbf {x} )} denotes a dot product, which is simply an element of the kernel K{\displaystyle K}. It seems all that is left is to calculate and normalize the aik{\displaystyle \mathbf {a} _{i}^{k}}, which can be done by solving the eigenvector equation Nλa=Ka,{\displaystyle N\lambda \mathbf {a} =K\mathbf {a} ,} where N{\displaystyle N} is the number of data points in the set, and λ{\displaystyle \lambda } and a{\displaystyle \mathbf {a} } are the eigenvalues and eigenvectors of K{\displaystyle K}. Then to normalize the eigenvectors ak{\displaystyle \mathbf {a} ^{k}}, we require that 1=(Vk)TVk.{\displaystyle 1=(V^{k})^{T}V^{k}.} Care must be taken regarding the fact that, whether or not x{\displaystyle x} has zero mean in its original space, it is not guaranteed to be centered in the feature space (which we never compute explicitly). Since centered data are required to perform an effective principal component analysis, we 'centralize' K{\displaystyle K} to become K′{\displaystyle K'}: K′=K−1NK−K1N+1NK1N,{\displaystyle K'=K-\mathbf {1_{N}} K-K\mathbf {1_{N}} +\mathbf {1_{N}} K\mathbf {1_{N}} ,} where 1N{\displaystyle \mathbf {1_{N}} } denotes an N-by-N matrix for which each element takes value 1/N{\displaystyle 1/N}.
We use K′{\displaystyle K'} to perform the kernel PCA algorithm described above. One caveat of kernel PCA should be illustrated here. In linear PCA, we can use the eigenvalues to rank the eigenvectors based on how much of the variation of the data is captured by each principal component. This is useful for data dimensionality reduction, and it could also be applied to KPCA. However, in practice there are cases in which all variances of the data are the same. This is typically caused by a wrong choice of kernel scale. In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the dataset, and populate the kernel with the means of those clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and their corresponding eigenvectors. Consider three concentric clouds of points (shown); we wish to use kernel PCA to identify these groups. The color of the points does not represent information involved in the algorithm, but only shows how the transformation relocates the data points. First, consider a polynomial kernel. Applying this to kernel PCA yields the next image. Now consider a Gaussian kernel: k(x,y)=e−‖x−y‖22σ2.{\displaystyle k(\mathbf {x} ,\mathbf {y} )=e^{\frac {-\|\mathbf {x} -\mathbf {y} \|^{2}}{2\sigma ^{2}}}.} That is, this kernel is a measure of closeness, equal to 1 when the points coincide and equal to 0 at infinity. Note in particular that the first principal component is enough to distinguish the three different groups, which is impossible using only linear PCA, because linear PCA operates only in the given (in this case two-dimensional) space, in which these concentric point clouds are not linearly separable. Kernel PCA has been demonstrated to be useful for novelty detection[3] and image de-noising.[4]
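The centering and eigendecomposition steps above can be sketched in a few lines. This is an illustrative reconstruction on two concentric rings rather than three clouds; the ring radii, kernel width (σ = 1) and sample sizes are arbitrary demo choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two concentric rings of 2-D points -- not linearly separable.
def ring(radius, n):
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    pts = np.c_[radius * np.cos(t), radius * np.sin(t)]
    return pts + 0.05 * rng.standard_normal((n, 2))

X = np.vstack([ring(1.0, 100), ring(3.0, 100)])

# Gaussian kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), sigma = 1.
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / 2.0)

# Centre in feature space: K' = K - 1_N K - K 1_N + 1_N K 1_N.
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Eigendecomposition of K'; scale each eigenvector a^k, then project
# the training data onto the top two components via K' a^k.
vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]      # descending order
alphas = vecs[:, :2] / np.sqrt(vals[:2])
proj = Kc @ alphas                          # kernel PCA projections
```

A quick sanity check on the centering step: every row and column of K′ sums to zero, exactly as feature-space centering requires.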
https://en.wikipedia.org/wiki/Kernel_PCA
L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis.[1] L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers (faulty values or corruptions), as it is believed to be robust.[2][3][4] Both L1-PCA and standard PCA seek a collection of orthogonal directions (principal components) that define a subspace wherein data representation is maximized according to the selected criterion.[5][6][7] Standard PCA quantifies data representation as the aggregate of the L2-norm of the data point projections into the subspace, or equivalently the aggregate Euclidean distance of the original points from their subspace-projected representations. L1-PCA uses instead the aggregate of the L1-norm of the data point projections into the subspace.[8] In PCA and L1-PCA, the number of principal components (PCs) is lower than the rank of the analyzed matrix, which coincides with the dimensionality of the space defined by the original data points. Therefore, PCA or L1-PCA are commonly employed for dimensionality reduction for the purpose of data denoising or compression. Among the advantages of standard PCA that contributed to its high popularity are low-cost computational implementation by means of singular-value decomposition (SVD)[9] and statistical optimality when the data set is generated by a true multivariate normal data source. However, in modern big data sets, data often include corrupted, faulty points, commonly referred to as outliers.[10] Standard PCA is known to be sensitive to outliers, even when they appear as a small fraction of the processed data.[11] The reason is that the L2-norm formulation of L2-PCA places squared emphasis on the magnitude of each coordinate of each data point, ultimately overemphasizing peripheral points such as outliers.
On the other hand, following an L1-norm formulation, L1-PCA places linear emphasis on the coordinates of each data point, effectively restraining outliers.[12] Consider any matrix X=[x1,x2,…,xN]∈RD×N{\displaystyle \mathbf {X} =[\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{N}]\in \mathbb {R} ^{D\times N}} consisting of N{\displaystyle N} D{\displaystyle D}-dimensional data points. Define r=rank(X){\displaystyle r=rank(\mathbf {X} )}. For an integer K{\displaystyle K} such that 1≤K<r{\displaystyle 1\leq K<r}, L1-PCA is formulated as:[1] maxQ=[q1,q2,…,qK]∈RD×K‖X⊤Q‖1subject toQ⊤Q=IK.{\displaystyle {\begin{aligned}&{\underset {\mathbf {Q} =[\mathbf {q} _{1},\mathbf {q} _{2},\ldots ,\mathbf {q} _{K}]\in \mathbb {R} ^{D\times K}}{\max }}~~\|\mathbf {X} ^{\top }\mathbf {Q} \|_{1}\\&{\text{subject to}}~~\mathbf {Q} ^{\top }\mathbf {Q} =\mathbf {I} _{K}.\end{aligned}}} For K=1{\displaystyle K=1}, (1) simplifies to finding the L1-norm principal component (L1-PC) of X{\displaystyle \mathbf {X} } by maxq∈RD×1‖X⊤q‖1subject to‖q‖2=1.{\displaystyle {\begin{aligned}&{\underset {\mathbf {q} \in \mathbb {R} ^{D\times 1}}{\max }}~~\|\mathbf {X} ^{\top }\mathbf {q} \|_{1}\\&{\text{subject to}}~~\|\mathbf {q} \|_{2}=1.\end{aligned}}} In (1)-(2), the L1-norm ‖⋅‖1{\displaystyle \|\cdot \|_{1}} returns the sum of the absolute entries of its argument and the L2-norm ‖⋅‖2{\displaystyle \|\cdot \|_{2}} returns the square root of the sum of the squared entries of its argument. If one substitutes ‖⋅‖1{\displaystyle \|\cdot \|_{1}} in (1) by the Frobenius/L2-norm ‖⋅‖F{\displaystyle \|\cdot \|_{F}}, then the problem becomes standard PCA and it is solved by the matrix Q{\displaystyle \mathbf {Q} } that contains the K{\displaystyle K} dominant singular vectors of X{\displaystyle \mathbf {X} } (i.e., the singular vectors that correspond to the K{\displaystyle K} highest singular values).
The maximization metric in (1) can be expanded as ‖X⊤Q‖1=∑k=1K∑n=1N|xn⊤qk|.{\displaystyle \|\mathbf {X} ^{\top }\mathbf {Q} \|_{1}=\sum _{k=1}^{K}\sum _{n=1}^{N}|\mathbf {x} _{n}^{\top }\mathbf {q} _{k}|.} For any matrix A∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}} with m≥n{\displaystyle m\geq n}, define Φ(A){\displaystyle \Phi (\mathbf {A} )} as the nearest (in the L2-norm sense) matrix to A{\displaystyle \mathbf {A} } that has orthonormal columns. That is, define Φ(A)=argminQ∈Rm×n‖A−Q‖Fsubject toQ⊤Q=In.{\displaystyle {\begin{aligned}\Phi (\mathbf {A} )=&{\underset {\mathbf {Q} \in \mathbb {R} ^{m\times n}}{\text{argmin}}}~~\|\mathbf {A} -\mathbf {Q} \|_{F}\\&{\text{subject to}}~~\mathbf {Q} ^{\top }\mathbf {Q} =\mathbf {I} _{n}.\end{aligned}}} The Procrustes Theorem[13][14] states that if A{\displaystyle \mathbf {A} } has SVD Um×nΣn×nVn×n⊤{\displaystyle \mathbf {U} _{m\times n}{\boldsymbol {\Sigma }}_{n\times n}\mathbf {V} _{n\times n}^{\top }}, then Φ(A)=UV⊤{\displaystyle \Phi (\mathbf {A} )=\mathbf {U} \mathbf {V} ^{\top }}. Markopoulos, Karystinos, and Pados[1] showed that, if BBNM{\displaystyle \mathbf {B} _{\text{BNM}}} is the exact solution to the binary nuclear-norm maximization (BNM) problem maxB∈{±1}N×K‖XB‖∗2,{\displaystyle {\begin{aligned}{\underset {\mathbf {B} \in \{\pm 1\}^{N\times K}}{\text{max}}}~~\|\mathbf {X} \mathbf {B} \|_{*}^{2},\end{aligned}}} then QL1=Φ(XBBNM){\displaystyle {\begin{aligned}\mathbf {Q} _{\text{L1}}=\Phi (\mathbf {X} \mathbf {B} _{\text{BNM}})\end{aligned}}} is the exact solution to L1-PCA in (1). The nuclear norm ‖⋅‖∗{\displaystyle \|\cdot \|_{*}} in (5) returns the summation of the singular values of its matrix argument and can be calculated by means of standard SVD.
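The Procrustes projection Φ(A) = UV⊤ is easy to verify numerically. A small sketch (the test matrix and the comparison point are arbitrary):

```python
import numpy as np

def procrustes(A):
    """Nearest matrix with orthonormal columns: Phi(A) = U V^T from the thin SVD."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 2))
Q = procrustes(A)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns

# Phi(A) is at least as close to A as any other orthonormal-column matrix,
# here checked against one random feasible point from a QR factorization.
Q2, _ = np.linalg.qr(rng.standard_normal((5, 2)))
assert np.linalg.norm(A - Q) <= np.linalg.norm(A - Q2) + 1e-12
```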
Moreover, it holds that, given the solution to L1-PCA, QL1{\displaystyle \mathbf {Q} _{\text{L1}}}, the solution to BNM can be obtained as BBNM=sgn(X⊤QL1){\displaystyle {\begin{aligned}\mathbf {B} _{\text{BNM}}={\text{sgn}}(\mathbf {X} ^{\top }\mathbf {Q} _{\text{L1}})\end{aligned}}} where sgn(⋅){\displaystyle {\text{sgn}}(\cdot )} returns the {±1}{\displaystyle \{\pm 1\}}-sign matrix of its matrix argument (with no loss of generality, we can consider sgn(0)=1{\displaystyle {\text{sgn}}(0)=1}). In addition, it follows that ‖X⊤QL1‖1=‖XBBNM‖∗{\displaystyle \|\mathbf {X} ^{\top }\mathbf {Q} _{\text{L1}}\|_{1}=\|\mathbf {X} \mathbf {B} _{\text{BNM}}\|_{*}}. BNM in (5) is a combinatorial problem over antipodal binary variables. Therefore, its exact solution can be found through exhaustive evaluation of all 2NK{\displaystyle 2^{NK}} elements of its feasibility set, with asymptotic cost O(2NK){\displaystyle {\mathcal {O}}(2^{NK})}. Therefore, L1-PCA can also be solved, through BNM, with cost O(2NK){\displaystyle {\mathcal {O}}(2^{NK})}, which is exponential in the product of the number of data points and the number of sought-after components.
It turns out that L1-PCA can be solved optimally (exactly) with polynomial complexity in N{\displaystyle N} for fixed data dimension D{\displaystyle D}, O(NrK−K+1){\displaystyle {\mathcal {O}}(N^{rK-K+1})}.[1] For the special case of K=1{\displaystyle K=1} (single L1-PC of X{\displaystyle \mathbf {X} }), BNM takes the binary-quadratic-maximization (BQM) form maxb∈{±1}N×1b⊤X⊤Xb.{\displaystyle {\begin{aligned}&{\underset {\mathbf {b} \in \{\pm 1\}^{N\times 1}}{\text{max}}}~~\mathbf {b} ^{\top }\mathbf {X} ^{\top }\mathbf {X} \mathbf {b} .\end{aligned}}} The transition from (5) to (8) for K=1{\displaystyle K=1} holds true, since the unique singular value of Xb{\displaystyle \mathbf {X} \mathbf {b} } is equal to ‖Xb‖2=b⊤X⊤Xb{\displaystyle \|\mathbf {X} \mathbf {b} \|_{2}={\sqrt {\mathbf {b} ^{\top }\mathbf {X} ^{\top }\mathbf {X} \mathbf {b} }}}, for every b{\displaystyle \mathbf {b} }. Then, if bBNM{\displaystyle \mathbf {b} _{\text{BNM}}} is the solution to BQM in (8), it holds that qL1=Φ(XbBNM)=XbBNM‖XbBNM‖2{\displaystyle {\begin{aligned}\mathbf {q} _{\text{L1}}=\Phi (\mathbf {X} \mathbf {b} _{\text{BNM}})={\frac {\mathbf {X} \mathbf {b} _{\text{BNM}}}{\|\mathbf {X} \mathbf {b} _{\text{BNM}}\|_{2}}}\end{aligned}}} is the exact L1-PC of X{\displaystyle \mathbf {X} }, as defined in (2). In addition, it holds that bBNM=sgn(X⊤qL1){\displaystyle \mathbf {b} _{\text{BNM}}={\text{sgn}}(\mathbf {X} ^{\top }\mathbf {q} _{\text{L1}})} and ‖X⊤qL1‖1=‖XbBNM‖2{\displaystyle \|\mathbf {X} ^{\top }\mathbf {q} _{\text{L1}}\|_{1}=\|\mathbf {X} \mathbf {b} _{\text{BNM}}\|_{2}}. As shown above, the exact solution to L1-PCA can be obtained by the following two-step process: (i) solve the BNM problem in (5) to obtain BBNM{\displaystyle \mathbf {B} _{\text{BNM}}}; (ii) compute QL1=Φ(XBBNM){\displaystyle \mathbf {Q} _{\text{L1}}=\Phi (\mathbf {X} \mathbf {B} _{\text{BNM}})}. BNM in (5) can be solved by exhaustive search over the domain of B{\displaystyle \mathbf {B} } with cost O(2NK){\displaystyle {\mathcal {O}}(2^{NK})}.
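For small N the exhaustive search just described is easy to spell out. A sketch for K = 1 (the data are random, purely for illustration):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
D, N = 3, 8
X = rng.standard_normal((D, N))

# Exhaustive BQM: maximise b^T X^T X b over b in {+-1}^N  (2^N candidates).
G = X.T @ X
best_val, best_b = -np.inf, None
for bits in product([-1.0, 1.0], repeat=N):
    b = np.array(bits)
    val = b @ G @ b
    if val > best_val:
        best_val, best_b = val, b

# The exact L1-PC is the normalised X b.
q = X @ best_b
q /= np.linalg.norm(q)

# Sanity check: ||X^T q||_1 equals ||X b||_2 at the optimum.
assert np.isclose(np.sum(np.abs(X.T @ q)), np.sqrt(best_val))
```

The final assertion checks the identity ‖X⊤q_L1‖₁ = ‖Xb_BNM‖₂ stated above, which holds because the optimal b satisfies sgn(X⊤Xb) = b.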
Also, L1-PCA can be solved optimally with cost O(NrK−K+1){\displaystyle {\mathcal {O}}(N^{rK-K+1})}, when r=rank(X){\displaystyle r=rank(\mathbf {X} )} is constant with respect to N{\displaystyle N} (always true for finite data dimension D{\displaystyle D}).[1][15] In 2008, Kwak[12] proposed an iterative algorithm for the approximate solution of L1-PCA for K=1{\displaystyle K=1}. This iterative method was later generalized for K>1{\displaystyle K>1} components.[16] Another approximate efficient solver was proposed by McCoy and Tropp[17] by means of semi-definite programming (SDP). Most recently, L1-PCA (and BNM in (5)) were solved efficiently by means of bit-flipping iterations (the L1-BF algorithm).[8] The computational cost of L1-BF is O(NDmin{N,D}+N2K2(K2+r)){\displaystyle {\mathcal {O}}(ND\min\{N,D\}+N^{2}K^{2}(K^{2}+r))}.[8] L1-PCA has also been generalized to process complex data. For complex L1-PCA, two efficient algorithms were proposed in 2018.[18] L1-PCA has also been extended to the analysis of tensor data, in the form of L1-Tucker, the L1-norm robust analogue of standard Tucker decomposition.[19] Two algorithms for the solution of L1-Tucker are L1-HOSVD and L1-HOOI.[19][20][21] MATLAB code for L1-PCA is available at MathWorks.[22]
https://en.wikipedia.org/wiki/L1-norm_principal_component_analysis
In mathematics, low-rank approximation refers to the process of approximating a given matrix by a matrix of lower rank. More precisely, it is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to the constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, there are often other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure. Low-rank approximation is closely related to numerous other techniques, including principal component analysis, factor analysis, total least squares, latent semantic analysis, orthogonal regression, and dynamic mode decomposition. Given a data matrix D∈Rm×n{\displaystyle D\in \mathbb {R} ^{m\times n}} and a target rank k{\displaystyle k}, the unstructured problem with fit measured by the Frobenius norm, minD^‖D−D^‖Fsubject torank⁡(D^)≤k,{\displaystyle \min _{\widehat {D}}\|D-{\widehat {D}}\|_{F}\quad {\text{subject to}}\quad \operatorname {rank} ({\widehat {D}})\leq k,} has an analytic solution in terms of the singular value decomposition of the data matrix. The result is referred to as the matrix approximation lemma or Eckart–Young–Mirsky theorem. This problem was originally solved by Erhard Schmidt[1] in the infinite-dimensional context of integral operators (although his methods easily generalize to arbitrary compact operators on Hilbert spaces) and later rediscovered by C. Eckart and G. Young.[2] L. Mirsky generalized the result to arbitrary unitarily invariant norms.[3] Let D=UΣV⊤{\displaystyle D=U\Sigma V^{\top }} be the singular value decomposition of D{\displaystyle D}, where Σ=:diag⁡(σ1,…,σr){\displaystyle \Sigma =:\operatorname {diag} (\sigma _{1},\ldots ,\sigma _{r})}, with r≤min{m,n}=n{\displaystyle r\leq \min\{m,n\}=n}, is the m×n{\displaystyle m\times n} rectangular diagonal matrix with r{\displaystyle r} non-zero singular values σ1≥…≥σr>σr+1=…=σn=0{\displaystyle \sigma _{1}\geq \ldots \geq \sigma _{r}>\sigma _{r+1}=\ldots =\sigma _{n}=0}.
For a given k∈{1,…,r}{\displaystyle k\in \{1,\dots ,r\}}, partition U{\displaystyle U}, Σ{\displaystyle \Sigma }, and V{\displaystyle V} as follows: U=[U1U2],Σ=[Σ100Σ2],V=[V1V2],{\displaystyle U=[U_{1}\ U_{2}],\quad \Sigma ={\begin{bmatrix}\Sigma _{1}&0\\0&\Sigma _{2}\end{bmatrix}},\quad V=[V_{1}\ V_{2}],} where U1{\displaystyle U_{1}} is m×k{\displaystyle m\times k}, Σ1{\displaystyle \Sigma _{1}} is k×k{\displaystyle k\times k}, and V1{\displaystyle V_{1}} is n×k{\displaystyle n\times k}. Then the rank-k{\displaystyle k} matrix D^∗=U1Σ1V1⊤,{\displaystyle {\widehat {D}}^{*}=U_{1}\Sigma _{1}V_{1}^{\top },} obtained from the truncated singular value decomposition, is such that ‖D−D^∗‖F=minrank⁡(D^)≤k‖D−D^‖F=σk+12+⋯+σr2.{\displaystyle \|D-{\widehat {D}}^{*}\|_{F}=\min _{\operatorname {rank} ({\widehat {D}})\leq k}\|D-{\widehat {D}}\|_{F}={\sqrt {\sigma _{k+1}^{2}+\cdots +\sigma _{r}^{2}}}.} The minimizer D^∗{\displaystyle {\widehat {D}}^{*}} is unique if and only if σk>σk+1{\displaystyle \sigma _{k}>\sigma _{k+1}}. Let A∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}} be a real (possibly rectangular) matrix with m≤n{\displaystyle m\leq n}. Suppose that A=UΣV⊤{\displaystyle A=U\Sigma V^{\top }} is the singular value decomposition of A{\displaystyle A}. Recall that U{\displaystyle U} and V{\displaystyle V} are orthogonal matrices, and Σ{\displaystyle \Sigma } is an m×n{\displaystyle m\times n} diagonal matrix with entries (σ1,σ2,⋯,σm){\displaystyle (\sigma _{1},\sigma _{2},\cdots ,\sigma _{m})} such that σ1≥σ2≥⋯≥σm≥0{\displaystyle \sigma _{1}\geq \sigma _{2}\geq \cdots \geq \sigma _{m}\geq 0}. We claim that the best rank-k{\displaystyle k} approximation to A{\displaystyle A} in the spectral norm, denoted by ‖⋅‖2{\displaystyle \|\cdot \|_{2}}, is given by Ak:=∑i=1kσiuivi⊤,{\displaystyle A_{k}:=\sum _{i=1}^{k}\sigma _{i}u_{i}v_{i}^{\top },} where ui{\displaystyle u_{i}} and vi{\displaystyle v_{i}} denote the i{\displaystyle i}th column of U{\displaystyle U} and V{\displaystyle V}, respectively. First, note that we have ‖A−Ak‖2=‖∑i=k+1mσiuivi⊤‖2=σk+1.{\displaystyle \|A-A_{k}\|_{2}={\Big \|}\sum _{i=k+1}^{m}\sigma _{i}u_{i}v_{i}^{\top }{\Big \|}_{2}=\sigma _{k+1}.} Therefore, we need to show that if Bk=XY⊤{\displaystyle B_{k}=XY^{\top }} where X{\displaystyle X} and Y{\displaystyle Y} have k{\displaystyle k} columns, then ‖A−Ak‖2=σk+1≤‖A−Bk‖2{\displaystyle \|A-A_{k}\|_{2}=\sigma _{k+1}\leq \|A-B_{k}\|_{2}}. Since Y{\displaystyle Y} has k{\displaystyle k} columns, there must be a nontrivial linear combination of the first k+1{\displaystyle k+1} columns of V{\displaystyle V}, i.e., w=γ1v1+⋯+γk+1vk+1,{\displaystyle w=\gamma _{1}v_{1}+\cdots +\gamma _{k+1}v_{k+1},} such that Y⊤w=0{\displaystyle Y^{\top }w=0}.
Without loss of generality, we can scale w{\displaystyle w} so that ‖w‖2=1{\displaystyle \|w\|_{2}=1} or (equivalently) γ12+⋯+γk+12=1{\displaystyle \gamma _{1}^{2}+\cdots +\gamma _{k+1}^{2}=1}. Therefore, ‖A−Bk‖22≥‖(A−Bk)w‖22=‖Aw‖22=γ12σ12+⋯+γk+12σk+12≥σk+12.{\displaystyle \|A-B_{k}\|_{2}^{2}\geq \|(A-B_{k})w\|_{2}^{2}=\|Aw\|_{2}^{2}=\gamma _{1}^{2}\sigma _{1}^{2}+\cdots +\gamma _{k+1}^{2}\sigma _{k+1}^{2}\geq \sigma _{k+1}^{2}.} The result follows by taking the square root of both sides of the above inequality. Let A∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}} be a real (possibly rectangular) matrix with m≤n{\displaystyle m\leq n}. Suppose that A=UΣV⊤{\displaystyle A=U\Sigma V^{\top }} is the singular value decomposition of A{\displaystyle A}. We claim that the best rank-k{\displaystyle k} approximation to A{\displaystyle A} in the Frobenius norm, denoted by ‖⋅‖F{\displaystyle \|\cdot \|_{F}}, is given by Ak=∑i=1kσiuivi⊤,{\displaystyle A_{k}=\sum _{i=1}^{k}\sigma _{i}u_{i}v_{i}^{\top },} where ui{\displaystyle u_{i}} and vi{\displaystyle v_{i}} denote the i{\displaystyle i}th column of U{\displaystyle U} and V{\displaystyle V}, respectively. First, note that we have ‖A−Ak‖F2=∑i=k+1mσi2.{\displaystyle \|A-A_{k}\|_{F}^{2}=\sum _{i=k+1}^{m}\sigma _{i}^{2}.} Therefore, we need to show that if Bk=XY⊤{\displaystyle B_{k}=XY^{\top }} where X{\displaystyle X} and Y{\displaystyle Y} have k{\displaystyle k} columns, then ‖A−Ak‖F2=∑i=k+1mσi2≤‖A−Bk‖F2.{\displaystyle \|A-A_{k}\|_{F}^{2}=\sum _{i=k+1}^{m}\sigma _{i}^{2}\leq \|A-B_{k}\|_{F}^{2}.} By the triangle inequality with the spectral norm, if A=A′+A″{\displaystyle A=A'+A''} then σ1(A)≤σ1(A′)+σ1(A″){\displaystyle \sigma _{1}(A)\leq \sigma _{1}(A')+\sigma _{1}(A'')}. Suppose Ak′{\displaystyle A'_{k}} and Ak″{\displaystyle A''_{k}} respectively denote the rank-k{\displaystyle k} approximation to A′{\displaystyle A'} and A″{\displaystyle A''} by the SVD method described above. Then, for any i,j≥1{\displaystyle i,j\geq 1}, σi+j−1(A)≤σi(A′)+σj(A″).{\displaystyle \sigma _{i+j-1}(A)\leq \sigma _{i}(A')+\sigma _{j}(A'').} Since σk+1(Bk)=0{\displaystyle \sigma _{k+1}(B_{k})=0}, when A′=A−Bk{\displaystyle A'=A-B_{k}} and A″=Bk{\displaystyle A''=B_{k}} we conclude that for i≥1,j=k+1{\displaystyle i\geq 1,j=k+1}, σi+k(A)≤σi(A−Bk).{\displaystyle \sigma _{i+k}(A)\leq \sigma _{i}(A-B_{k}).} Therefore, ‖A−Bk‖F2≥∑i=1m−kσi(A−Bk)2≥∑i=k+1mσi(A)2=‖A−Ak‖F2,{\displaystyle \|A-B_{k}\|_{F}^{2}\geq \sum _{i=1}^{m-k}\sigma _{i}(A-B_{k})^{2}\geq \sum _{i=k+1}^{m}\sigma _{i}(A)^{2}=\|A-A_{k}\|_{F}^{2},} as required. The Frobenius norm weights uniformly all elements of the approximation error D−D^{\displaystyle D-{\widehat {D}}}.
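The theorem is straightforward to check numerically with a truncated SVD (random test matrix, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # truncated SVD, rank k

# Spectral-norm error is sigma_{k+1}; Frobenius error is
# sqrt(sigma_{k+1}^2 + ... + sigma_r^2), as the theorem states.
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])
assert np.isclose(np.linalg.norm(A - Ak, 'fro'), np.sqrt(np.sum(s[k:] ** 2)))
```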
Prior knowledge about the distribution of the errors can be taken into account by considering the weighted low-rank approximation problem minD^vec⁡(D−D^)⊤Wvec⁡(D−D^)subject torank⁡(D^)≤k,{\displaystyle \min _{\widehat {D}}\operatorname {vec} (D-{\widehat {D}})^{\top }W\operatorname {vec} (D-{\widehat {D}})\quad {\text{subject to}}\quad \operatorname {rank} ({\widehat {D}})\leq k,} where vec(A){\displaystyle {\text{vec}}(A)} vectorizes the matrix A{\displaystyle A} column-wise and W{\displaystyle W} is a given positive (semi)definite weight matrix. The general weighted low-rank approximation problem does not admit an analytic solution in terms of the singular value decomposition and is solved by local optimization methods, which provide no guarantee that a globally optimal solution is found. In the case of uncorrelated weights, the weighted low-rank approximation problem can also be formulated in this way:[4][5] for a non-negative matrix W{\displaystyle W} and a matrix A{\displaystyle A} we want to minimize ∑i,j(Wi,j(Ai,j−Bi,j))2{\displaystyle \sum _{i,j}(W_{i,j}(A_{i,j}-B_{i,j}))^{2}} over matrices B{\displaystyle B} of rank at most r{\displaystyle r}. Let ‖A‖p=(∑i,j|Ai,j|p)1/p{\displaystyle \|A\|_{p}=\left(\sum _{i,j}|A_{i,j}|^{p}\right)^{1/p}}. For p=2{\displaystyle p=2}, the fastest algorithm runs in nnz(A)+n⋅poly(k/ϵ){\displaystyle nnz(A)+n\cdot poly(k/\epsilon )} time.[6][7] One of the important ideas used is the Oblivious Subspace Embedding (OSE), first proposed by Sarlós.[8] For p=1{\displaystyle p=1}, it is known that this entry-wise L1 norm is more robust than the Frobenius norm in the presence of outliers, and it is indicated in models where Gaussian assumptions on the noise may not apply. It is natural to seek to minimize ‖B−A‖1{\displaystyle \|B-A\|_{1}}.[9] For p=0{\displaystyle p=0} and p≥1{\displaystyle p\geq 1}, there are some algorithms with provable guarantees.[10][11] Let P={p1,…,pm}{\displaystyle P=\{p_{1},\ldots ,p_{m}\}} and Q={q1,…,qn}{\displaystyle Q=\{q_{1},\ldots ,q_{n}\}} be two point sets in an arbitrary metric space. Let A{\displaystyle A} represent the m×n{\displaystyle m\times n} matrix where Ai,j=dist(pi,qj){\displaystyle A_{i,j}=dist(p_{i},q_{j})}.
Such distance matrices are commonly computed in software packages and have applications to learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In an attempt to reduce their description size,[12][13] one can study low-rank approximation of such matrices. Low-rank approximation problems in the distributed and streaming setting have been considered in.[14] Using the image representation of the rank constraint (writing D^=PL{\displaystyle {\widehat {D}}=PL} with P∈Rm×r{\displaystyle P\in \mathbb {R} ^{m\times r}} and L∈Rr×n{\displaystyle L\in \mathbb {R} ^{r\times n}}), the weighted low-rank approximation problem becomes equivalent to a parameter optimization problem over P{\displaystyle P} and L{\displaystyle L}. This representation suggests a parameter optimization method in which the cost function is minimized alternately over one of the variables (P{\displaystyle P} or L{\displaystyle L}) with the other one fixed. Although simultaneous minimization over both P{\displaystyle P} and L{\displaystyle L} is a difficult biconvex optimization problem, minimization over one of the variables alone is a linear least squares problem and can be solved globally and efficiently. The resulting optimization algorithm (called alternating projections) is globally convergent with a linear convergence rate to a locally optimal solution of the weighted low-rank approximation problem. A starting value for the P{\displaystyle P} (or L{\displaystyle L}) parameter should be given. The iteration is stopped when a user-defined convergence condition is satisfied. Matlab implementation of the alternating projections algorithm for weighted low-rank approximation: The alternating projections algorithm exploits the fact that the low-rank approximation problem, parameterized in the image form, is bilinear in the variables P{\displaystyle P} and L{\displaystyle L}. The bilinear nature of the problem is effectively used in an alternative approach, called variable projections.[15] Consider again the weighted low-rank approximation problem, parameterized in the image form.
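The MATLAB listing referenced above did not survive extraction. As a substitute, here is a minimal Python sketch of the alternating projections idea for the element-wise-weighted formulation: each half-step is a weighted linear least squares problem, so the cost is non-increasing. All sizes, the noise level and the weight matrix are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 10, 8, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # near rank-r data
A += 0.01 * rng.standard_normal((m, n))
W = rng.uniform(0.5, 1.5, size=(m, n))                         # element-wise weights

def cost(P, L):
    return np.sum((W * (A - P @ L)) ** 2)

P = rng.standard_normal((m, r))
L = np.zeros((r, n))
for _ in range(50):
    for j in range(n):   # L-step: weighted least squares, one column at a time
        Wj = np.diag(W[:, j])
        L[:, j] = np.linalg.lstsq(Wj @ P, Wj @ A[:, j], rcond=None)[0]
    for i in range(m):   # P-step: weighted least squares, one row at a time
        Wi = np.diag(W[i, :])
        P[i, :] = np.linalg.lstsq(Wi @ L.T, Wi @ A[i, :], rcond=None)[0]

assert cost(P, L) < 0.1  # converged near the small noise floor
```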
Minimization with respect to the L{\displaystyle L} variable (a linear least squares problem) leads to a closed-form expression of the approximation error as a function of P{\displaystyle P} alone, denoted f(P){\displaystyle f(P)}. The original problem is therefore equivalent to the nonlinear least squares problem of minimizing f(P){\displaystyle f(P)} with respect to P{\displaystyle P}. For this purpose standard optimization methods, e.g. the Levenberg–Marquardt algorithm, can be used. Matlab implementation of the variable projections algorithm for weighted low-rank approximation: The variable projections approach can also be applied to low-rank approximation problems parameterized in the kernel form. The method is effective when the number of eliminated variables is much larger than the number of optimization variables left at the stage of the nonlinear least squares minimization. Such problems occur in system identification, parameterized in the kernel form, where the eliminated variables are the approximating trajectory and the remaining variables are the model parameters. In the context of linear time-invariant systems, the elimination step is equivalent to Kalman smoothing. Usually, we want our new solution not only to be of low rank, but also to satisfy other convex constraints due to application requirements; the problem of interest is then a low-rank approximation problem with additional convex constraints. This problem has many real-world applications, including recovering a good solution from an inexact (semidefinite programming) relaxation. If the additional constraint g(p^)≤0{\displaystyle g({\widehat {p}})\leq 0} is linear, as when we require all elements to be nonnegative, the problem is called structured low-rank approximation.[16] The more general form is named convex-restricted low-rank approximation. This problem is helpful in solving many other problems. However, it is challenging due to the combination of the convex and nonconvex (low-rank) constraints. Different techniques were developed based on different realizations of g(p^)≤0{\displaystyle g({\widehat {p}})\leq 0}.
However, the Alternating Direction Method of Multipliers (ADMM) can be applied to solve the nonconvex problem with a convex objective function, rank constraints and other convex constraints,[17] and it is thus suitable for solving the problem above. Moreover, unlike general nonconvex problems, ADMM is guaranteed to converge to a feasible solution as long as its dual variable converges in the iterations.
https://en.wikipedia.org/wiki/Low-rank_approximation
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. In numerical analysis, different decompositions are used to implement efficient matrix algorithms. For example, when solving a system of linear equations Ax=b{\displaystyle A\mathbf {x} =\mathbf {b} }, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems L(Ux)=b{\displaystyle L(U\mathbf {x} )=\mathbf {b} } and Ux=L−1b{\displaystyle U\mathbf {x} =L^{-1}\mathbf {b} } require fewer additions and multiplications to solve, compared with the original system Ax=b{\displaystyle A\mathbf {x} =\mathbf {b} }, though one might require significantly more digits in inexact arithmetic such as floating point. Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = QTb = c, and the system Rx = c is solved by back substitution. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable. Decompositions related to eigenvalues include the Jordan normal form and the Jordan–Chevalley decomposition. A scale-invariant decomposition refers to a variant of an existing matrix decomposition, such as the SVD, that is invariant with respect to diagonal scaling. Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4] There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices, or continuous matrices.[13] A 'quasimatrix' is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a 'cmatrix' is continuous in both indices.
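The two solution routes just described can be sketched side by side; this example uses SciPy's LU routines and NumPy's QR on a random system, purely for illustration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# LU route: factor once, then solve by forward/back substitution.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# QR route: A = QR, so solve R x = Q^T b by back substitution
# (R is upper triangular).
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

assert np.allclose(A @ x_lu, b)   # both routes solve Ax = b
assert np.allclose(x_lu, x_qr)
```

Factoring once and reusing the factors is the practical payoff: solving the same A against many right-hand sides b costs one factorization plus cheap triangular solves.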
As an example of a cmatrix, one can think of the kernel of an integral operator. These factorizations are based on early work by Fredholm (1903), Hilbert (1904) and Schmidt (1907). For an account, and a translation to English of the seminal papers, see Stewart (2011).
https://en.wikipedia.org/wiki/Matrix_decomposition