Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation,[1][2] is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically.

NMF finds applications in such fields as astronomy,[3][4] computer vision, document clustering,[1] missing data imputation,[5] chemometrics, audio signal processing, recommender systems,[6][7] and bioinformatics.[8]

In chemometrics, non-negative matrix factorization has a long history under the name "self modeling curve resolution".[9] In this framework the vectors in the right matrix are continuous curves rather than discrete vectors.

Early work on non-negative matrix factorizations was also performed by a Finnish group of researchers in the 1990s under the name positive matrix factorization.[10][11][12] It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful algorithms for two types of factorizations.[13][14]
Let matrix V be the product of the matrices W and H,

V = W H.

Matrix multiplication can be implemented as computing the column vectors of V as linear combinations of the column vectors in W using coefficients supplied by columns of H. That is, each column of V can be computed as v_i = W h_i, where v_i is the i-th column vector of the product matrix V and h_i is the i-th column vector of the matrix H.

When multiplying matrices, the dimensions of the factor matrices may be significantly lower than those of the product matrix, and it is this property that forms the basis of NMF. NMF generates factors with significantly reduced dimensions compared to the original matrix. For example, if V is an m×n matrix, W is an m×p matrix, and H is a p×n matrix, then p can be significantly less than both m and n.
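For concreteness, the shapes involved can be sketched with scikit-learn's NMF estimator (the sizes m, n, and p below are arbitrary illustration values):

```python
import numpy as np
from sklearn.decomposition import NMF

# V is m x n; the inner dimension p is much smaller than both m and n.
m, n, p = 200, 100, 5
rng = np.random.default_rng(0)
V = rng.random((m, n))  # non-negative data matrix

model = NMF(n_components=p, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)   # m x p factor
H = model.components_        # p x n factor

print(W.shape, H.shape)      # (200, 5) (5, 100)
```

Both factors come out element-wise non-negative, and W @ H is a low-rank approximation of V.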
Consider, for example, a text-mining application in which each column of V represents a document. Because p is much smaller than n, each original document can be considered as being built from a small set of hidden features; NMF generates these features.
It is useful to think of each feature (column vector) in the features matrix W as a document archetype comprising a set of words where each word's cell value defines the word's rank in the feature: the higher a word's cell value, the higher the word's rank in the feature. A column in the coefficients matrix H represents an original document with a cell value defining the document's rank for a feature. We can now reconstruct a document (column vector) from our input matrix by a linear combination of our features (column vectors in W), where each feature is weighted by the feature's cell value from the document's column in H.
NMF has an inherent clustering property,[15] i.e., it automatically clusters the columns of input data V = (v_1, …, v_n).

More specifically, the approximation of V by V ≃ WH is achieved by finding W and H that minimize the error function (using the Frobenius norm)

‖V − WH‖_F, subject to W ≥ 0, H ≥ 0.

If we furthermore impose an orthogonality constraint on H, i.e. H Hᵀ = I, then the above minimization is mathematically equivalent to the minimization of K-means clustering.[15]

Furthermore, the computed H gives the cluster membership, i.e., if H_kj > H_ij for all i ≠ k, this suggests that the input data v_j belongs to the k-th cluster. The computed W gives the cluster centroids, i.e., the k-th column gives the cluster centroid of the k-th cluster. This centroid's representation can be significantly enhanced by convex NMF.

When the orthogonality constraint H Hᵀ = I is not explicitly imposed, the orthogonality holds to a large extent, and the clustering property holds too.
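The clustering interpretation can be sketched in code by assigning each column of V to the component with the largest coefficient in its column of H. This is an illustrative sketch on synthetic data, without the explicit orthogonality constraint:

```python
import numpy as np
from sklearn.decomposition import NMF

# Three groups of non-negative samples around well-separated centers.
rng = np.random.default_rng(1)
centers = np.array([[5, 1, 1], [1, 5, 1], [1, 1, 5]], dtype=float)
V = np.vstack([c + rng.random((30, 3)) for c in centers]).T  # columns = samples

model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=1000)
W = model.fit_transform(V)   # columns of W act like cluster centroids
H = model.components_        # column j of H scores sample j against each cluster

labels = H.argmax(axis=0)    # cluster membership: largest coefficient wins
print(labels)
```

Samples from the same group tend to receive the same label, mirroring the k-means analogy described above.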
When the error function to be used is Kullback–Leibler divergence, NMF is identical to probabilistic latent semantic analysis (PLSA), a popular document clustering method.[16]
Usually the number of columns of W and the number of rows of H in NMF are selected so the product WH will become an approximation to V. The full decomposition of V then amounts to the two non-negative matrices W and H as well as a residual U, such that V = WH + U. The elements of the residual matrix can be either negative or positive.

When W and H are smaller than V, they become easier to store and manipulate. Another reason for factorizing V into smaller matrices W and H is that if one's goal is to approximately represent the elements of V by significantly less data, then one has to infer some latent structure in the data.

In standard NMF, the matrix factor W ∈ ℝ₊^(m×k), i.e., W can be anything in that space. Convex NMF[17] restricts the columns of W to convex combinations of the input data vectors (v_1, …, v_n). This greatly improves the quality of the data representation of W. Furthermore, the resulting matrix factor H becomes more sparse and orthogonal.

In case the nonnegative rank of V is equal to its actual rank, V = WH is called a nonnegative rank factorization (NRF).[18][19][20] The problem of finding the NRF of V, if it exists, is known to be NP-hard.[21]
There are different types of non-negative matrix factorizations.
The different types arise from using different cost functions for measuring the divergence between V and WH and possibly by regularization of the W and/or H matrices.[1]

Two simple divergence functions studied by Lee and Seung are the squared error (or Frobenius norm) and an extension of the Kullback–Leibler divergence to positive matrices (the original Kullback–Leibler divergence is defined on probability distributions).

Each divergence leads to a different NMF algorithm, usually minimizing the divergence using iterative update rules.

The factorization problem in the squared error version of NMF may be stated as:
Given a matrix V, find non-negative matrices W and H that minimize the function

F(W, H) = ‖V − WH‖_F².
Another type of NMF for images is based on the total variation norm.[22]

When L1 regularization (akin to Lasso) is added to NMF with the mean squared error cost function, the resulting problem may be called non-negative sparse coding due to the similarity to the sparse coding problem,[23][24] although it may also still be referred to as NMF.[25]
Many standard NMF algorithms analyze all the data together; i.e., the whole matrix is available from the start. This may be unsatisfactory in applications where there are too many data to fit into memory or where the data are provided in streaming fashion. One such use is for collaborative filtering in recommendation systems, where there may be many users and many items to recommend, and it would be inefficient to recalculate everything when one user or one item is added to the system. The cost function for optimization in these cases may or may not be the same as for standard NMF, but the algorithms need to be rather different.[26][27]

If the columns of V represent data sampled over spatial or temporal dimensions, e.g. time signals, images, or video, features that are equivariant w.r.t. shifts along these dimensions can be learned by convolutional NMF. In this case, W is sparse with columns having local non-zero weight windows that are shared across shifts along the spatio-temporal dimensions of V, representing convolution kernels. By spatio-temporal pooling of H and repeatedly using the resulting representation as input to convolutional NMF, deep feature hierarchies can be learned.[28]
There are several ways in which the W and H may be found: Lee and Seung's multiplicative update rule[14] has been a popular method due to the simplicity of implementation. This algorithm initializes W and H with non-negative values and then iterates the updates

H ← H ∘ (Wᵀ V) / (Wᵀ W H),
W ← W ∘ (V Hᵀ) / (W H Hᵀ).

Note that the updates are applied on an element-by-element basis, not by matrix multiplication: ∘ and the fraction bar denote element-wise multiplication and division. We note that the multiplicative factors for W and H, i.e. the terms (Wᵀ V) / (Wᵀ W H) and (V Hᵀ) / (W H Hᵀ), are matrices of ones when V = W H.
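A minimal NumPy sketch of these multiplicative updates; the small eps added to the denominators to avoid division by zero is an implementation detail, not part of the original rules:

```python
import numpy as np

def nmf_multiplicative(V, p, n_iter=500, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for the Frobenius-norm objective.

    V (m x n, non-negative) is approximated by W (m x p) times H (p x n).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, p))
    H = rng.random((p, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # element-wise update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # element-wise update of W
    return W, H

# The error ||V - WH||_F decreases monotonically toward a local minimum.
V = np.random.default_rng(1).random((50, 40))
W, H = nmf_multiplicative(V, p=5)
print(np.linalg.norm(V - W @ H))
```

Because the updates multiply by non-negative factors, W and H stay non-negative throughout.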
More recently other algorithms have been developed.
Some approaches are based on alternating non-negative least squares: in each step of such an algorithm, first H is fixed and W is found by a non-negative least squares solver, then W is fixed and H is found analogously. The procedures used to solve for W and H may be the same[29] or different, as some NMF variants regularize one of W and H.[23] Specific approaches include the projected gradient descent methods,[29][30] the active set method,[6][31] the optimal gradient method,[32] and the block principal pivoting method,[33] among several others.[34]

Current algorithms are sub-optimal in that they only guarantee finding a local minimum, rather than a global minimum of the cost function. A provably optimal algorithm is unlikely in the near future, as the problem has been shown to generalize the k-means clustering problem, which is known to be NP-complete.[35] However, as in many other data mining applications, a local minimum may still prove to be useful.

In addition to the optimization step, initialization has a significant effect on NMF. The initial values chosen for W and H may affect not only the rate of convergence, but also the overall error at convergence. Some options for initialization include complete randomization, SVD, k-means clustering, and more advanced strategies based on these and other paradigms.[36]
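As a quick illustration of the effect of initialization, scikit-learn exposes several such strategies through the init parameter of its NMF estimator; the reconstruction error at convergence can differ between them on the same data:

```python
import numpy as np
from sklearn.decomposition import NMF

V = np.abs(np.random.default_rng(0).standard_normal((100, 60)))

# Compare random initialization against two SVD-based schemes.
for init in ("random", "nndsvd", "nndsvda"):
    model = NMF(n_components=8, init=init, random_state=0, max_iter=400)
    model.fit(V)
    print(init, round(model.reconstruction_err_, 3))
```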
The sequential construction of NMF components (W and H) was first used to relate NMF with principal component analysis (PCA) in astronomy.[37] The contributions from the PCA components are ranked by the magnitude of their corresponding eigenvalues; for NMF, its components can be ranked empirically when they are constructed one by one (sequentially), i.e., learning the (n+1)-th component with the first n components constructed.

The contribution of the sequential NMF components can be compared with the Karhunen–Loève theorem, an application of PCA, using the plot of eigenvalues. A typical choice of the number of components with PCA is based on the "elbow" point; the existence of a flat plateau indicates that PCA is not capturing the data efficiently, and a final sudden drop reflects the capture of random noise and falling into the regime of overfitting.[38][39] For sequential NMF, the plot of eigenvalues is approximated by the plot of the fractional residual variance curves, where the curves decrease continuously and converge to a higher level than PCA,[4] which is an indication of less over-fitting of sequential NMF.

Exact solutions for the variants of NMF can be expected (in polynomial time) when additional constraints hold for matrix V. A polynomial time algorithm for solving nonnegative rank factorization if V contains a monomial submatrix of rank equal to its rank was given by Campbell and Poole in 1981.[40] Kalofolias and Gallopoulos (2012)[41] solved the symmetric counterpart of this problem, where V is symmetric and contains a diagonal principal submatrix of rank r. Their algorithm runs in O(rm²) time in the dense case. Arora, Ge, Halpern, Mimno, Moitra, Sontag, Wu, and Zhu (2013) give a polynomial time algorithm for exact NMF that works for the case where one of the factors W satisfies a separability condition.[42]
In Learning the parts of objects by non-negative matrix factorization, Lee and Seung[43] proposed NMF mainly for parts-based decomposition of images. It compares NMF to vector quantization and principal component analysis, and shows that although the three techniques may be written as factorizations, they implement different constraints and therefore produce different results.

It was later shown that some types of NMF are an instance of a more general probabilistic model called "multinomial PCA".[44] When NMF is obtained by minimizing the Kullback–Leibler divergence, it is in fact equivalent to another instance of multinomial PCA, probabilistic latent semantic analysis,[45] trained by maximum likelihood estimation.

That method is commonly used for analyzing and clustering textual data and is also related to the latent class model.

NMF with the least-squares objective is equivalent to a relaxed form of K-means clustering: the matrix factor W contains cluster centroids and H contains cluster membership indicators.[15][46] This provides a theoretical foundation for using NMF for data clustering. However, k-means does not enforce non-negativity on its centroids, so the closest analogy is in fact with "semi-NMF".[17]

NMF can be seen as a two-layer directed graphical model with one layer of observed random variables and one layer of hidden random variables.[47]

NMF extends beyond matrices to tensors of arbitrary order.[48][49][50] This extension may be viewed as a non-negative counterpart to, e.g., the PARAFAC model.
Other extensions of NMF include joint factorization of several data matrices and tensors where some factors are shared. Such models are useful for sensor fusion and relational learning.[51]
NMF is an instance of nonnegative quadratic programming, just like the support vector machine (SVM). However, SVM and NMF are related at a more intimate level than that of NQP, which allows direct application of the solution algorithms developed for either of the two methods to problems in both domains.[52]

The factorization is not unique: a matrix B and its inverse can be used to transform the two factorization matrices by, e.g.,[53]

WH = (WB)(B⁻¹H).

If the two new matrices W̃ = WB and H̃ = B⁻¹H are non-negative, they form another parametrization of the factorization.

The non-negativity of W̃ and H̃ applies at least if B is a non-negative monomial matrix. In this simple case it will just correspond to a scaling and a permutation.

More control over the non-uniqueness of NMF is obtained with sparsity constraints.[54]
In astronomy, NMF is a promising method for dimension reduction in the sense that astrophysical signals are non-negative. NMF has been applied to spectroscopic observations[55][3] and direct imaging observations[4] as a method to study the common properties of astronomical objects and to post-process the astronomical observations. The advances in the spectroscopic observations by Blanton & Roweis (2007)[3] take into account the uncertainties of astronomical observations, which was later improved by Zhu (2016),[37] where missing data are also considered and parallel computing is enabled. Their method was then adopted by Ren et al. (2018)[4] in the direct imaging field as one of the methods of detecting exoplanets, especially for the direct imaging of circumstellar disks.

Ren et al. (2018)[4] are able to prove the stability of NMF components when they are constructed sequentially (i.e., one by one), which enables the linearity of the NMF modeling process; the linearity property is used to separate the stellar light and the light scattered from the exoplanets and circumstellar disks.

In direct imaging, to reveal the faint exoplanets and circumstellar disks from the bright surrounding stellar light, which has a typical contrast from 10⁵ to 10¹⁰, various statistical methods have been adopted;[56][57][38] however, the light from the exoplanets or circumstellar disks is usually over-fitted, and forward modeling has to be adopted to recover the true flux.[58][39] Forward modeling is currently optimized for point sources,[39] but not for extended sources, especially for irregularly shaped structures such as circumstellar disks. In this situation, NMF has been an excellent method, being less prone to over-fitting thanks to the non-negativity and sparsity of the NMF modeling coefficients; forward modeling can therefore be performed with a few scaling factors,[4] rather than a computationally intensive data re-reduction on generated models.
To impute missing data in statistics, NMF can take missing data into account while minimizing its cost function, rather than treating the missing data as zeros.[5] This makes it a mathematically proven method for data imputation in statistics.[5] By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al. (2020)[5] studied and applied such an approach for the field of astronomy. Their work focuses on two-dimensional matrices; specifically, it includes mathematical derivation, simulated data imputation, and application to on-sky data.

The data imputation procedure with NMF can be composed of two steps. First, when the NMF components are known, Ren et al. (2020) proved that the impact from missing data during data imputation ("target modeling" in their study) is a second order effect. Second, when the NMF components are unknown, the authors proved that the impact from missing data during component construction is a first-to-second order effect.

Depending on the way that the NMF components are obtained, the former step above can be either independent of or dependent on the latter. In addition, the imputation quality can be increased when more NMF components are used; see Figure 4 of Ren et al. (2020) for an illustration.[5]
NMF can be used for text mining applications. In this process, a document-term matrix is constructed with the weights of various terms (typically weighted word frequency information) from a set of documents. This matrix is factored into a term-feature and a feature-document matrix. The features are derived from the contents of the documents, and the feature-document matrix describes data clusters of related documents.
One specific application used hierarchical NMF on a small subset of scientific abstracts from PubMed.[59] Another research group clustered parts of the Enron email dataset[60] with 65,033 messages and 91,133 terms into 50 clusters.[61] NMF has also been applied to citations data, with one example clustering English Wikipedia articles and scientific journals based on the outbound scientific citations in English Wikipedia.[62]
Arora, Ge, Halpern, Mimno, Moitra, Sontag, Wu, and Zhu (2013) have given polynomial-time algorithms to learn topic models using NMF. The algorithm assumes that the topic matrix satisfies a separability condition that is often found to hold in these settings.[42]

Hassani, Iranmanesh and Mansouri (2019) proposed a feature agglomeration method for term-document matrices which operates using NMF. The algorithm reduces the term-document matrix into a smaller matrix more suitable for text clustering.[63]

NMF is also used to analyze spectral data; one such use is in the classification of space objects and debris.[64]
NMF is applied in scalable Internet distance (round-trip time) prediction. For a network with N hosts, with the help of NMF, the distances of all the N² end-to-end links can be predicted after conducting only O(N) measurements. This kind of method was first introduced in the Internet Distance Estimation Service (IDES).[65] Afterwards, as a fully decentralized approach, the Phoenix network coordinate system[66] was proposed. It achieves better overall prediction accuracy by introducing the concept of weight.
Speech denoising has been a long-standing problem in audio signal processing. There are many algorithms for denoising if the noise is stationary. For example, the Wiener filter is suitable for additive Gaussian noise. However, if the noise is non-stationary, the classical denoising algorithms usually have poor performance because the statistical information of the non-stationary noise is difficult to estimate. Schmidt et al.[67] use NMF to do speech denoising under non-stationary noise, which is completely different from classical statistical approaches. The key idea is that a clean speech signal can be sparsely represented by a speech dictionary, but non-stationary noise cannot. Similarly, non-stationary noise can be sparsely represented by a noise dictionary, but speech cannot.

The algorithm for NMF denoising goes as follows. Two dictionaries, one for speech and one for noise, need to be trained offline. Once a noisy speech signal is given, we first calculate the magnitude of its short-time Fourier transform. Second, we separate it into two parts via NMF, one that can be sparsely represented by the speech dictionary and another that can be sparsely represented by the noise dictionary. Third, the part that is represented by the speech dictionary is the estimated clean speech.
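These steps can be sketched as follows. The dictionaries here are random placeholders standing in for trained ones, and a plain least-squares multiplicative update with the dictionary held fixed replaces the sparsity-regularized solver used in the cited work:

```python
import numpy as np

def encode(V, W, n_iter=200, eps=1e-10, seed=0):
    """Solve V ~ W H for H >= 0 with the dictionary W held fixed
    (multiplicative update for the Frobenius objective)."""
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Placeholder pre-trained dictionaries (columns = spectral atoms):
# W_speech would come from clean speech magnitudes, W_noise from noise.
rng = np.random.default_rng(2)
W_speech = rng.random((128, 10))
W_noise = rng.random((128, 10))
W = np.hstack([W_speech, W_noise])

V_noisy = rng.random((128, 50))        # magnitude spectrogram of noisy speech
H = encode(V_noisy, W)

# Keep only the part explained by the speech dictionary.
V_clean_est = W_speech @ H[:10, :]
print(V_clean_est.shape)
```

In a real system the estimated clean magnitude would be combined with the noisy phase and inverted back to a waveform.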
Sparse NMF is used in population genetics for estimating individual admixture coefficients, detecting genetic clusters of individuals in a population sample or evaluating genetic admixture in sampled genomes. In human genetic clustering, NMF algorithms provide estimates similar to those of the computer program STRUCTURE, but the algorithms are more efficient computationally and allow analysis of large population genomic data sets.[68]

NMF has been successfully applied in bioinformatics for clustering gene expression and DNA methylation data and finding the genes most representative of the clusters.[24][69][70][71] In the analysis of cancer mutations it has been used to identify common patterns of mutations that occur in many cancers and that probably have distinct causes.[72] NMF techniques can identify sources of variation such as cell types, disease subtypes, population stratification, tissue composition, and tumor clonality.[73]

A particular variant of NMF, namely non-negative matrix tri-factorization (NMTF),[74] has been used for drug repurposing tasks in order to predict novel protein targets and therapeutic indications for approved drugs[75] and to infer pairs of synergistic anticancer drugs.[76]

NMF, also referred to in this field as factor analysis, has been used since the 1980s[77] to analyze sequences of images in SPECT and PET dynamic medical imaging. Non-uniqueness of NMF was addressed using sparsity constraints.[78][79][80]
Current research (since 2010) in nonnegative matrix factorization includes, but is not limited to, questions of algorithm design, scalability to very large matrices, and online factorization of streaming data.
Source: https://en.wikipedia.org/wiki/Non-negative_matrix_factorization
Nonlinear dimensionality reduction, also known as manifold learning, is any of various related techniques that aim to project high-dimensional data, potentially existing across non-linear manifolds which cannot be adequately captured by linear decomposition methods, onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa) itself.[1][2] The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.

High-dimensional data can be hard for machines to work with, requiring significant time and space for analysis. It also presents a challenge for humans, since it is hard to visualize or understand data in more than three dimensions. Reducing the dimensionality of a data set, while keeping its essential features relatively intact, can make algorithms more efficient and allow analysts to visualize trends and patterns.

The reduced-dimensional representations of data are often referred to as "intrinsic variables". This description implies that these are the values from which the data was produced. For example, consider a dataset that contains images of a letter 'A', which has been scaled and rotated by varying amounts. Each image has 32×32 pixels and can be represented as a vector of 1024 pixel values. Each row is a sample on a two-dimensional manifold in 1024-dimensional space (a Hamming space). The intrinsic dimensionality is two, because two variables (rotation and scale) were varied in order to produce the data. Information about the shape or look of a letter 'A' is not part of the intrinsic variables because it is the same in every instance. Nonlinear dimensionality reduction will discard the correlated information (the letter 'A') and recover only the varying information (rotation and scale). The image to the right shows sample images from this dataset (to save space, not all input images are shown), and a plot of the two-dimensional points that results from using an NLDR algorithm (in this case, Manifold Sculpting was used) to reduce the data into just two dimensions.

By comparison, if principal component analysis, which is a linear dimensionality reduction algorithm, is used to reduce this same dataset into two dimensions, the resulting values are not so well organized. This demonstrates that the high-dimensional vectors (each representing a letter 'A') that sample this manifold vary in a non-linear manner.

It should be apparent, therefore, that NLDR has several applications in the field of computer vision. For example, consider a robot that uses a camera to navigate in a closed static environment. The images obtained by that camera can be considered to be samples on a manifold in high-dimensional space, and the intrinsic variables of that manifold will represent the robot's position and orientation.
Invariant manifolds are of general interest for model order reduction in dynamical systems. In particular, if there is an attracting invariant manifold in the phase space, nearby trajectories will converge onto it and stay on it indefinitely, rendering it a candidate for dimensionality reduction of the dynamical system. While such manifolds are not guaranteed to exist in general, the theory of spectral submanifolds (SSM) gives conditions for the existence of unique attracting invariant objects in a broad class of dynamical systems.[3] Active research in NLDR seeks to unfold the observation manifolds associated with dynamical systems to develop modeling techniques.[4]
Some of the more prominent nonlinear dimensionality reduction techniques are listed below.
Sammon's mapping is one of the first and most popular NLDR techniques.

The self-organizing map (SOM, also called Kohonen map) and its probabilistic variant generative topographic mapping (GTM) use a point representation in the embedded space to form a latent variable model based on a non-linear mapping from the embedded space to the high-dimensional space.[6] These techniques are related to work on density networks, which also are based around the same probabilistic model.
Perhaps the most widely used algorithm for dimensionality reduction is kernel PCA.[7] PCA begins by computing the covariance matrix of the m×n matrix X,

C = (1/m) Σ_i x_i x_iᵀ.

It then projects the data onto the first k eigenvectors of that matrix. By comparison, KPCA begins by computing the covariance matrix of the data after being transformed into a higher-dimensional space,

C = (1/m) Σ_i Φ(x_i) Φ(x_i)ᵀ.

It then projects the transformed data onto the first k eigenvectors of that matrix, just like PCA. It uses the kernel trick to factor away much of the computation, such that the entire process can be performed without actually computing Φ(x). Of course Φ must be chosen such that it has a known corresponding kernel. Unfortunately, it is not trivial to find a good kernel for a given problem, so KPCA does not yield good results with some problems when using standard kernels. For example, it is known to perform poorly with these kernels on the Swiss roll manifold. However, one can view certain other methods that perform well in such settings (e.g., Laplacian eigenmaps, LLE) as special cases of kernel PCA by constructing a data-dependent kernel matrix.[8]

KPCA has an internal model, so it can be used to map points onto its embedding that were not available at training time.
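A brief sketch comparing the two on the Swiss roll data mentioned above, using scikit-learn (the RBF kernel and its gamma value are illustrative choices, not a recommendation):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, KernelPCA

# Swiss roll: a 2-D manifold curled up in 3-D; linear PCA cannot unroll it.
X, t = make_swiss_roll(n_samples=500, random_state=0)

lin = PCA(n_components=2).fit_transform(X)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.01).fit_transform(X)

print(lin.shape, kpca.shape)
```

Plotting either embedding colored by the roll parameter t would show how well (or poorly) each method separates the manifold.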
Principal curves and manifolds give the natural geometric framework for nonlinear dimensionality reduction and extend the geometric interpretation of PCA by explicitly constructing an embedded manifold, and by encoding using standard geometric projection onto the manifold. This approach was originally proposed by Trevor Hastie in his 1984 thesis,[11] which he formally introduced in 1989.[12] This idea has been explored further by many authors.[13] How to define the "simplicity" of the manifold is problem-dependent; however, it is commonly measured by the intrinsic dimensionality and/or the smoothness of the manifold. Usually, the principal manifold is defined as a solution to an optimization problem. The objective function includes a quality of data approximation and some penalty terms for the bending of the manifold. The popular initial approximations are generated by linear PCA and Kohonen's SOM.
Laplacian eigenmaps uses spectral techniques to perform dimensionality reduction.[14] This technique relies on the basic assumption that the data lies in a low-dimensional manifold in a high-dimensional space.[15] This algorithm cannot embed out-of-sample points, but techniques based on reproducing kernel Hilbert space regularization exist for adding this capability.[16] Such techniques can be applied to other nonlinear dimensionality reduction algorithms as well.

Traditional techniques like principal component analysis do not consider the intrinsic geometry of the data. Laplacian eigenmaps builds a graph from neighborhood information of the data set. Each data point serves as a node on the graph, and connectivity between nodes is governed by the proximity of neighboring points (using e.g. the k-nearest neighbor algorithm). The graph thus generated can be considered as a discrete approximation of the low-dimensional manifold in the high-dimensional space. Minimization of a cost function based on the graph ensures that points close to each other on the manifold are mapped close to each other in the low-dimensional space, preserving local distances. The eigenfunctions of the Laplace–Beltrami operator on the manifold serve as the embedding dimensions, since under mild conditions this operator has a countable spectrum that is a basis for square-integrable functions on the manifold (compare to Fourier series on the unit circle manifold). Attempts to place Laplacian eigenmaps on solid theoretical ground have met with some success, as under certain nonrestrictive assumptions, the graph Laplacian matrix has been shown to converge to the Laplace–Beltrami operator as the number of points goes to infinity.[15]
Isomap[17] is a combination of the Floyd–Warshall algorithm with classic multidimensional scaling (MDS). Classic MDS takes a matrix of pair-wise distances between all points and computes a position for each point. Isomap assumes that the pair-wise distances are only known between neighboring points, and uses the Floyd–Warshall algorithm to compute the pair-wise distances between all other points. This effectively estimates the full matrix of pair-wise geodesic distances between all of the points. Isomap then uses classic MDS to compute the reduced-dimensional positions of all the points. Landmark-Isomap is a variant of this algorithm that uses landmarks to increase speed, at the cost of some accuracy.
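A minimal usage sketch with scikit-learn's Isomap on a synthetic S-curve (the neighborhood size is an illustrative choice):

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, _ = make_s_curve(n_samples=400, random_state=0)

# n_neighbors controls the graph on which geodesic distances are estimated.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (400, 2)
```

Too small a neighborhood can disconnect the graph; too large a one short-circuits the manifold, so this parameter usually needs tuning per dataset.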
In manifold learning, the input data is assumed to be sampled from a low-dimensional manifold that is embedded inside a higher-dimensional vector space. The main intuition behind maximum variance unfolding (MVU) is to exploit the local linearity of manifolds and create a mapping that preserves local neighbourhoods at every point of the underlying manifold.
Locally-linear Embedding(LLE) was presented at approximately the same time as Isomap.[18]It has several advantages over Isomap, including faster optimization when implemented to take advantage ofsparse matrixalgorithms, and better results with many problems. LLE also begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describes the point as a linear combination of its neighbors. Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such that each point is still described with the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as various regions differ in sample densities. LLE has no internal model.
The original data pointsXi∈RD{\displaystyle X_{i}\in \mathbb {R} ^{D}}, and the goal of LLE is to embed each pointXi{\displaystyle X_{i}}to some low-dimensional pointYi∈Rd{\displaystyle Y_{i}\in \mathbb {R} ^{d}}, whered≪D{\displaystyle d\ll D}.
LLE has two steps. In the first step, it computes, for each point Xi, the best approximation of Xi based on barycentric coordinates of its neighbors Xj. The original point is approximately reconstructed by a linear combination, given by the weight matrix Wij, of its neighbors. The reconstruction error is:

E(W)=∑i|Xi−∑jWijXj|2{\displaystyle E(W)=\sum _{i}\left|X_{i}-\sum _{j}W_{ij}X_{j}\right|^{2}}
The weights Wij refer to the amount of contribution the point Xj has while reconstructing the point Xi. The cost function is minimized under two constraints: each point Xi is reconstructed only from its neighbors, so that Wij=0{\displaystyle W_{ij}=0} if Xj is not a neighbor of Xi, and the weights in each row sum to one, ∑jWij=1{\displaystyle \sum _{j}W_{ij}=1}.
These two constraints ensure thatW{\displaystyle W}is unaffected by rotation and translation.
In the second step, a neighborhood-preserving map is created based on the weights. Each point Xi∈RD{\displaystyle X_{i}\in \mathbb {R} ^{D}} is mapped onto a point Yi∈Rd{\displaystyle Y_{i}\in \mathbb {R} ^{d}} by minimizing another cost:

Φ(Y)=∑i|Yi−∑jWijYj|2{\displaystyle \Phi (Y)=\sum _{i}\left|Y_{i}-\sum _{j}W_{ij}Y_{j}\right|^{2}}
Unlike in the previous cost function, the weights Wij are kept fixed and the minimization is done on the points Yi to optimize the coordinates. This minimization problem can be solved by solving a sparse N×N eigenvalue problem (N being the number of data points), whose bottom d nonzero eigenvectors provide an orthogonal set of coordinates.
The only hyperparameter in the algorithm is what counts as a "neighbor" of a point. Generally the data points are reconstructed fromKnearest neighbors, as measured byEuclidean distance. In this case, the algorithm has only one integer-valued hyperparameterK,which can be chosen by cross validation.
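The two steps can be sketched directly in NumPy: solve a small regularized linear system per point for the barycentric weights, then take bottom eigenvectors of (I−W)ᵀ(I−W). This is an illustrative toy (the regularization constant and toy data are our choices), not a reference implementation:

```python
import numpy as np

def lle(X, n_neighbors=6, n_components=2, reg=1e-3):
    """Minimal LLE sketch following the two steps above."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]
        Z = X[nbrs] - X[i]                 # neighbors centered on x_i
        C = Z @ Z.T                        # local Gram matrix
        C += reg * np.trace(C) * np.eye(len(nbrs))  # regularize for stability
        w = np.linalg.solve(C, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()           # enforce the sum-to-one constraint
    # Step 2: embedding from the bottom eigenvectors of (I - W)^T (I - W).
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]     # skip the constant eigenvector

t = np.linspace(0, 3 * np.pi, 100)
spiral = np.c_[t * np.cos(t), t * np.sin(t)]
Y = lle(spiral, n_neighbors=6, n_components=1)
```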
Like LLE,Hessian LLEis also based on sparse matrix techniques.[19]It tends to yield results of a much higher quality than LLE. Unfortunately, it has a very costly computational complexity, so it is not well-suited for heavily sampled manifolds. It has no internal model.
Modified LLE (MLLE)[20] is another LLE variant which uses multiple weights in each neighborhood to address the conditioning problem of the local weight matrix, which leads to distortions in LLE maps. Loosely speaking, the multiple weights are the local orthogonal projections of the original weights produced by LLE. The creators of this regularised variant are also the authors of Local Tangent Space Alignment (LTSA), which is implicit in the MLLE formulation: globally optimising the orthogonal projections of each weight vector, in essence, aligns the local tangent spaces of every data point. The theoretical and empirical implications of the correct application of this algorithm are far-reaching.[21]
LTSA[22] is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent space at every point by computing the first d principal components in each local neighborhood. It then optimizes to find an embedding that aligns the tangent spaces.
Maximum Variance Unfolding, Isomap and Locally Linear Embedding share a common intuition relying on the notion that if a manifold is properly unfolded, then variance over the points is maximized. Its initial step, like Isomap and Locally Linear Embedding, is finding thek-nearest neighbors of every point. It then seeks to solve the problem of maximizing the distance between all non-neighboring points, constrained such that the distances between neighboring points are preserved. The primary contribution of this algorithm is a technique for casting this problem as a semidefinite programming problem. Unfortunately, semidefinite programming solvers have a high computational cost. Like Locally Linear Embedding, it has no internal model.
Anautoencoderis a feed-forwardneural networkwhich is trained to approximate the identity function. That is, it is trained to map from a vector of values to the same vector. When used for dimensionality reduction purposes, one of the hidden layers in the network is limited to contain only a small number of network units. Thus, the network must learn to encode the vector into a small number of dimensions and then decode it back into the original space. Thus, the first half of the network is a model which maps from high to low-dimensional space, and the second half maps from low to high-dimensional space. Although the idea of autoencoders is quite old,[23]training of deep autoencoders has only recently become possible through the use ofrestricted Boltzmann machinesand stacked denoising autoencoders. Related to autoencoders is theNeuroScalealgorithm, which uses stress functions inspired bymultidimensional scalingandSammon mappings(see above) to learn a non-linear mapping from the high-dimensional to the embedded space. The mappings in NeuroScale are based onradial basis function networks.
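As a deliberately tiny illustration of the encode/decode idea, the sketch below trains a linear autoencoder with a one-dimensional bottleneck by plain gradient descent; real autoencoders use nonlinear units, biases, and a framework such as PyTorch. All data and hyperparameters here are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 3-D points that actually lie near a 1-D line.
X = np.outer(rng.standard_normal(200), np.array([1.0, 2.0, -1.0]))
X += 0.01 * rng.standard_normal(X.shape)

d, k = 3, 1                              # input dim, bottleneck dim
W1 = 0.1 * rng.standard_normal((d, k))   # encoder: high -> low dim
W2 = 0.1 * rng.standard_normal((k, d))   # decoder: low -> high dim
lr = 0.01
for _ in range(500):
    H = X @ W1                           # encode
    Xhat = H @ W2                        # decode
    err = Xhat - X
    # Gradients of the mean squared reconstruction error (up to a factor 2).
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ (err @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

loss = np.mean((X @ W1 @ W2 - X) ** 2)   # final reconstruction error
```

After training, `X @ W1` is the 1-D code for each point, and the reconstruction error is far below the raw data variance, since the network has learned the dominant direction of the data.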
Gaussian process latent variable models (GPLVM)[24] are probabilistic dimensionality reduction methods that use Gaussian processes (GPs) to find a lower-dimensional nonlinear embedding of high-dimensional data. They are an extension of the probabilistic formulation of PCA. The model is defined probabilistically, the latent variables are then marginalized, and the parameters are obtained by maximizing the likelihood. Like kernel PCA, they use a kernel function to form a nonlinear mapping (in the form of a Gaussian process). However, in the GPLVM the mapping is from the embedded (latent) space to the data space (like density networks and GTM), whereas in kernel PCA it is in the opposite direction. It was originally proposed for visualization of high-dimensional data but has been extended to construct a shared manifold model between two observation spaces.
GPLVM and its many variants have been proposed specifically for human motion modeling, e.g., back-constrained GPLVM, the GP dynamic model (GPDM), balanced GPDM (B-GPDM) and topologically constrained GPDM. To capture the coupling effect of the pose and gait manifolds in gait analysis, a multi-layer joint gait-pose manifold model was proposed.[25]
t-distributed stochastic neighbor embedding(t-SNE)[26]is widely used. It is one of a family of stochastic neighbor embedding methods. The algorithm computes the probability that pairs of datapoints in the high-dimensional space are related, and then chooses low-dimensional embeddings which produce a similar distribution.
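The two probability distributions t-SNE compares can be sketched as follows. For simplicity this uses a single fixed σ for the high-dimensional affinities, whereas a real t-SNE implementation tunes a per-point σ to match a target perplexity; the function names are ours:

```python
import numpy as np

def tsne_affinities(X, sigma=1.0):
    """Symmetrized Gaussian affinities in the high-dimensional space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P = P / P.sum(axis=1, keepdims=True)   # conditional p_{j|i}
    return (P + P.T) / (2 * len(X))        # symmetrized joint distribution

def tsne_low_dim_affinities(Y):
    """Student-t (one degree of freedom) affinities in the embedding."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    Q = 1.0 / (1.0 + d2)
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
P = tsne_affinities(X)
Q = tsne_low_dim_affinities(rng.standard_normal((30, 2)))
```

t-SNE then moves the low-dimensional points by gradient descent so that Q matches P under the Kullback–Leibler divergence.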
Relational perspective map is amultidimensional scalingalgorithm. The algorithm finds a configuration of data points on a manifold by simulating a multi-particle dynamic system on a closed manifold, where data points are mapped to particles and distances (or dissimilarity) between data points represent a repulsive force. As the manifold gradually grows in size the multi-particle system cools down gradually and converges to a configuration that reflects the distance information of the data points.
Relational perspective map was inspired by a physical model in which positively charged particles move freely on the surface of a ball. Guided by theCoulombforcebetween particles, the minimal energy configuration of the particles will reflect the strength of repulsive forces between the particles.
The relational perspective map was introduced in [27]. The algorithm first used the flat torus as the image manifold; it has since been extended (in the software VisuMap) to use other types of closed manifolds, like the sphere, projective space, and Klein bottle, as image manifolds.
Contagion maps use multiple contagions on a network to map the nodes as a point cloud.[28]In the case of theGlobal cascades modelthe speed of the spread can be adjusted with the threshold parametert∈[0,1]{\displaystyle t\in [0,1]}. Fort=0{\displaystyle t=0}the contagion map is equivalent to theIsomapalgorithm.
Curvilinear component analysis (CCA) looks for the configuration of points in the output space that preserves original distances as much as possible while focusing on small distances in the output space (conversely to Sammon's mapping, which focuses on small distances in the original space).[29]
It should be noted that CCA, as an iterative learning algorithm, actually starts with a focus on large distances (like the Sammon algorithm), then gradually changes focus to small distances. The small-distance information will overwrite the large-distance information if compromises between the two have to be made.
The stress function of CCA is related to a sum of right Bregman divergences.[30]
CDA[29]trains a self-organizing neural network to fit the manifold and seeks to preservegeodesic distancesin its embedding. It is based on Curvilinear Component Analysis (which extended Sammon's mapping), but uses geodesic distances instead.
Diffeomorphic Dimensionality Reduction, or Diffeomap,[31] learns a smooth diffeomorphic mapping which transports the data onto a lower-dimensional linear subspace. The method solves for a smooth time-indexed vector field such that flows along the field which start at the data points will end at a lower-dimensional linear subspace, thereby attempting to preserve pairwise differences under both the forward and inverse mapping.
Manifold alignmenttakes advantage of the assumption that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial data sets.[32]
Diffusion mapsleverages the relationship between heatdiffusionand arandom walk(Markov Chain); an analogy is drawn between the diffusion operator on a manifold and a Markov transition matrix operating on functions defined on the graph whose nodes were sampled from the manifold.[33]In particular, let a data set be represented byX=[x1,x2,…,xn]∈Ω⊂RD{\displaystyle \mathbf {X} =[x_{1},x_{2},\ldots ,x_{n}]\in \Omega \subset \mathbf {R^{D}} }. The underlying assumption of diffusion map is that the high-dimensional data lies on a low-dimensional manifold of dimensiond{\displaystyle \mathbf {d} }. LetXrepresent the data set andμ{\displaystyle \mu }represent the distribution of the data points onX. Further, define akernelwhich represents some notion of affinity of the points inX. The kernelk{\displaystyle {\mathit {k}}}has the following properties[34]
k is symmetric
k is positivity preserving
Thus one can think of the individual data points as the nodes of a graph and the kernelkas defining some sort of affinity on that graph. The graph is symmetric by construction since the kernel is symmetric. It is easy to see here that from the tuple (X,k) one can construct a reversibleMarkov Chain. This technique is common to a variety of fields and is known as the graph Laplacian.
For example, the graph K = (X, E) can be constructed using a Gaussian kernel.

Kij={e−‖xi−xj‖22/σif xi∼xj0otherwise{\displaystyle K_{ij}={\begin{cases}e^{-\|x_{i}-x_{j}\|_{2}^{2}/\sigma }&{\text{if }}x_{i}\sim x_{j}\\0&{\text{otherwise}}\end{cases}}}
In the above equation, xi∼xj{\displaystyle x_{i}\sim x_{j}} denotes that xi{\displaystyle x_{i}} is a nearest neighbor of xj{\displaystyle x_{j}}. Properly, geodesic distance should be used to actually measure distances on the manifold. Since the exact structure of the manifold is not available, the geodesic distance between nearest neighbors is approximated by the Euclidean distance. The choice of σ{\displaystyle \sigma } modulates our notion of proximity in the sense that if ‖xi−xj‖2≫σ{\displaystyle \|x_{i}-x_{j}\|_{2}\gg \sigma } then Kij=0{\displaystyle K_{ij}=0}, and if ‖xi−xj‖2≪σ{\displaystyle \|x_{i}-x_{j}\|_{2}\ll \sigma } then Kij=1{\displaystyle K_{ij}=1}. The former means that very little diffusion has taken place, while the latter implies that the diffusion process is nearly complete. Different strategies for choosing σ{\displaystyle \sigma } can be found in [35].
In order to faithfully represent a Markov matrix, K{\displaystyle K} must be normalized by the corresponding degree matrix D{\displaystyle D}:

P=D−1K{\displaystyle P=D^{-1}K}
P{\displaystyle P}now represents a Markov chain.P(xi,xj){\displaystyle P(x_{i},x_{j})}is the probability of transitioning fromxi{\displaystyle x_{i}}toxj{\displaystyle x_{j}}in one time step. Similarly the probability of transitioning fromxi{\displaystyle x_{i}}toxj{\displaystyle x_{j}}inttime steps is given byPt(xi,xj){\displaystyle P^{t}(x_{i},x_{j})}. HerePt{\displaystyle P^{t}}is the matrixP{\displaystyle P}multiplied by itselfttimes.
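The kernel construction and degree normalization can be sketched directly; the toy data and the fixed σ are our own choices (real diffusion-maps code restricts the kernel to neighbors and tunes σ):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # toy data set of 50 points in R^3
sigma = 1.0

# Gaussian kernel K: affinity between every pair of points.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / sigma)

# Normalize by the degree matrix D to obtain a Markov transition matrix P.
D = K.sum(axis=1)
P = K / D[:, None]                 # P[i, j]: probability of stepping i -> j

# t-step transition probabilities P^t.
Pt = np.linalg.matrix_power(P, 8)
```

Each row of P (and of any power of P) sums to one, which is exactly the Markov-chain property described above.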
The Markov matrixP{\displaystyle P}constitutes some notion of local geometry of the data setX. The major difference between diffusion maps andprincipal component analysisis that only local features of the data are considered in diffusion maps as opposed to taking correlations of the entire data set.
K{\displaystyle K} defines a random walk on the data set, which means that the kernel captures some local geometry of the data set. The Markov chain defines fast and slow directions of propagation through the kernel values. As the walk propagates forward in time, the local geometry information aggregates in the same way as local transitions (defined by differential equations) of the dynamical system.[34] The metaphor of diffusion arises from the definition of a family of diffusion distances {Dt}t∈N{\displaystyle \{D_{t}\}_{t\in N}}
For fixed t, Dt{\displaystyle D_{t}} defines a distance between any two points of the data set based on path connectivity: the value of Dt(x,y){\displaystyle D_{t}(x,y)} will be smaller the more paths connect x to y, and vice versa. Because the quantity Dt(x,y){\displaystyle D_{t}(x,y)} involves a sum over all paths of length t, Dt{\displaystyle D_{t}} is much more robust to noise in the data than geodesic distance. Dt{\displaystyle D_{t}} takes into account all the relations between the points x and y while calculating the distance, and serves as a better notion of proximity than plain Euclidean distance or even geodesic distance.
Local Multidimensional Scaling performsmultidimensional scalingin local regions, and then uses convex optimization to fit all the pieces together.[36]
Nonlinear PCA (NLPCA) usesbackpropagationto train a multi-layer perceptron (MLP) to fit to a manifold.[37]Unlike typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both the weights and inputs are treated as latent values. After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional observation space.
Data-driven high-dimensional scaling (DD-HDS)[38]is closely related toSammon's mappingand curvilinear component analysis except that (1) it simultaneously penalizes false neighborhoods and tears by focusing on small distances in both original and output space, and that (2) it accounts forconcentration of measurephenomenon by adapting the weighting function to the distance distribution.
Manifold Sculpting[39]usesgraduated optimizationto find an embedding. Like other algorithms, it computes thek-nearest neighbors and tries to seek an embedding that preserves relationships in local neighborhoods. It slowly scales variance out of higher dimensions, while simultaneously adjusting points in lower dimensions to preserve those relationships. If the rate of scaling is small, it can find very precise embeddings. It boasts higher empirical accuracy than other algorithms with several problems. It can also be used to refine the results from other manifold learning algorithms. It struggles to unfold some manifolds, however, unless a very slow scaling rate is used. It has no model.
RankVisu[40] is designed to preserve the rank of neighborhood rather than distance. RankVisu is especially useful on difficult tasks (when the preservation of distance cannot be achieved satisfactorily). Indeed, the rank of neighborhood is less informative than distance (ranks can be deduced from distances but distances cannot be deduced from ranks) and its preservation is thus easier.
Topologically constrained isometric embedding (TCIE)[41] is an algorithm based on approximating geodesic distances after filtering out geodesics inconsistent with the Euclidean metric. Aimed at correcting the distortions caused when Isomap is used to map intrinsically non-convex data, TCIE uses weighted least-squares MDS in order to obtain a more accurate mapping. The TCIE algorithm first detects possible boundary points in the data, and during computation of the geodesic length marks inconsistent geodesics, to be given a small weight in the weighted stress majorization that follows.
Uniform manifold approximation and projection (UMAP) is a nonlinear dimensionality reduction technique.[42]It is similar tot-SNE.[43]
A method based on proximity matrices is one where the data is presented to the algorithm in the form of asimilarity matrixor adistance matrix. These methods all fall under the broader class ofmetric multidimensional scaling. The variations tend to be differences in how the proximity data is computed; for example,isomap,locally linear embeddings,maximum variance unfolding, andSammon mapping(which is not in fact a mapping) are examples of metric multidimensional scaling methods.
|
https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction
|
Oja's learning rule, or simplyOja's rule, named after Finnish computer scientistErkki Oja(Finnish pronunciation:[ˈojɑ],AW-yuh), is a model of how neurons in the brain or inartificial neural networkschange connection strength, or learn, over time. It is a modification of the standardHebb's Rulethat, through multiplicative normalization, solves all stability problems and generates an algorithm forprincipal components analysis. This is a computational form of an effect which is believed to happen in biological neurons.
Oja's rule requires a number of simplifications to derive, but in its final form it is demonstrably stable, unlike Hebb's rule. It is a single-neuron special case of theGeneralized Hebbian Algorithm. However, Oja's rule can also be generalized in other ways to varying degrees of stability and success.
Consider a simplified model of a neurony{\displaystyle y}that returns a linear combination of its inputsxusing presynaptic weightsw:
y(x)=∑j=1mxjwj{\displaystyle \,y(\mathbf {x} )~=~\sum _{j=1}^{m}x_{j}w_{j}}
Oja's rule defines the change in presynaptic weightswgiven the output responsey{\displaystyle y}of a neuron to its inputsxto be
Δwn=ηy(xn)(xn−y(xn)wn){\displaystyle \,\Delta \mathbf {w} _{n}~=~\eta \,y(\mathbf {x} _{n})\left(\mathbf {x} _{n}-y(\mathbf {x} _{n})\,\mathbf {w} _{n}\right)}

where η is the learning rate, which can also change with time. Note that the bold symbols are vectors and n defines a discrete time iteration. The rule can also be made for continuous iterations as

dwdt=ηy(t)(x(t)−y(t)w(t)){\displaystyle \,{\frac {d\mathbf {w} }{dt}}~=~\eta \,y(t)\left(\mathbf {x} (t)-y(t)\,\mathbf {w} (t)\right)}
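A single Oja neuron can be simulated in a few lines. On toy Gaussian data, the weight vector should align with the leading eigenvector of the data covariance and keep roughly unit norm; the data set, learning rate, and seed below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean 2-D data stretched along the direction (1, 1)/sqrt(2):
# covariance eigenvalues are 5 and 1, top eigenvector (1, 1)/sqrt(2).
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x                        # linear neuron output y = w . x
    w += eta * y * (x - y * w)       # Oja's rule: dw = eta * y * (x - y w)

top = np.array([1.0, 1.0]) / np.sqrt(2)
alignment = abs(w @ top) / np.linalg.norm(w)   # |cos| angle to top eigvec
```

The self-normalizing term −y²w keeps ‖w‖ near 1 without an explicit normalization step, which is exactly what distinguishes Oja's rule from plain Hebbian learning.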
The simplestlearning ruleknown is Hebb's rule, which states in conceptual terms thatneurons that fire together, wire together. In component form as a difference equation, it is written
or in scalar form with implicitn-dependence,
wherey(xn)is again the output, this time explicitly dependent on its input vectorx.
Hebb's rule has synaptic weights approaching infinity with a positive learning rate. We can stop this by normalizing the weights so that each weight's magnitude is restricted between 0, corresponding to no weight, and 1, corresponding to being the only input neuron with any weight. We do this by normalizing the weight vector to be of length one:
Note that in Oja's original paper,[1]p=2, corresponding to quadrature (root sum of squares), which is the familiarCartesiannormalization rule. However, any type of normalization, even linear, will give the same resultwithout loss of generality.
For a small learning rate|η|≪1{\displaystyle |\eta |\ll 1}the equation can be expanded as aPower seriesinη{\displaystyle \eta }.[1]
For small η, the higher-order terms O(η2) go to zero. We again specify a linear neuron, that is, the output of the neuron is equal to the sum of the product of each input and its synaptic weight raised to the power of p − 1, which in the case of p = 2 is the synaptic weight itself, or
We also specify that our weights normalize to1, which will be a necessary condition for stability, so
which, when substituted into our expansion, gives Oja's rule, or
In analyzing the convergence of a single neuron evolving by Oja's rule, one extracts the firstprincipal component, or feature, of a data set. Furthermore, with extensions using theGeneralized Hebbian Algorithm, one can create a multi-Oja neural network that can extract as many features as desired, allowing forprincipal components analysis.
A principal componentajis extracted from a datasetxthrough some associated vectorqj, oraj=qj⋅x, and we can restore our original dataset by taking
In the case of a single neuron trained by Oja's rule, we find the weight vector converges toq1, or the first principal component, as time or number of iterations approaches infinity. We can also define, given a set of input vectorsXi, that its correlation matrixRij=XiXjhas an associatedeigenvectorgiven byqjwitheigenvalueλj. Thevarianceof outputs of our Oja neuronσ2(n) = ⟨y2(n)⟩then converges with time iterations to the principal eigenvalue, or
These results are derived using Lyapunov function analysis, and they show that Oja's neuron necessarily converges on strictly the first principal component if certain conditions are met in our original learning rule. Most importantly, our learning rate η is allowed to vary with time, but only such that its sum is divergent but its power sum is convergent, that is

∑n=1∞η(n)=∞,∑n=1∞η(n)p<∞,p>1{\displaystyle \sum _{n=1}^{\infty }\eta (n)=\infty ,\quad \sum _{n=1}^{\infty }\eta (n)^{p}<\infty ,\quad p>1}
Our outputactivation functiony(x(n))is also allowed to be nonlinear and nonstatic, but it must be continuously differentiable in bothxandwand have derivatives bounded in time.[2]
Oja's rule was originally described in Oja's 1982 paper,[1]but the principle of self-organization to which it is applied is first attributed toAlan Turingin 1952.[2]PCA has also had a long history of use before Oja's rule formalized its use in network computation in 1989. The model can thus be applied to any problem ofself-organizing mapping, in particular those in which feature extraction is of primary interest. Therefore, Oja's rule has an important place in image and speech processing. It is also useful as it expands easily to higher dimensions of processing, thus being able to integrate multiple outputs quickly. A canonical example is its use inbinocular vision.[3]
There is clear evidence for bothlong-term potentiationandlong-term depressionin biological neural networks, along with a normalization effect in both input weights and neuron outputs. However, while there is no direct experimental evidence yet of Oja's rule active in a biological neural network, abiophysicalderivation of a generalization of the rule is possible. Such a derivation requires retrograde signalling from the postsynaptic neuron, which is biologically plausible (seeneural backpropagation), and takes the form of
where as before wij is the synaptic weight between the ith input and jth output neurons, x is the input, y is the postsynaptic output, we define ε to be a constant analogous to the learning rate, and cpre and cpost are presynaptic and postsynaptic functions that model the weakening of signals over time. Note that the angle brackets denote the average and the ∗ operator is a convolution. By taking the pre- and post-synaptic functions into frequency space and combining integration terms with the convolution, we find that this gives an arbitrary-dimensional generalization of Oja's rule known as Oja's Subspace,[4] namely
|
https://en.wikipedia.org/wiki/Oja%27s_rule
|
Thepoint distribution modelis a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes.
The point distribution model concept was developed by Cootes,[1] Taylor et al.[2] and became a standard in computer vision for the statistical study of shape[3] and for segmentation of medical images,[2] where shape priors really help the interpretation of noisy and low-contrast pixels/voxels. The latter point leads to active shape models (ASM) and active appearance models (AAM).
Point distribution models rely on landmark points. A landmark is an annotating point placed by an anatomist at a given locus for every shape instance across the training set population. For instance, the same landmark will designate the tip of the index finger in a training set of 2D hand outlines. Principal component analysis (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks among the training set population. Typically, it might detect that all the landmarks located along the same finger move exactly together across training set examples of flat-posed hands with varying finger spacing.
First, a set of training images is manually landmarked with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes. These landmarks are aligned using generalized Procrustes analysis, which minimizes the least-squares error between corresponding points.
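A single Procrustes alignment step (translation, scale, rotation) can be sketched as below; generalized Procrustes analysis iterates such alignments against a running mean shape. The function name and the test shapes are ours:

```python
import numpy as np

def align(shape, ref):
    """Similarity-align `shape` (k x 2 landmarks) to `ref`:
    remove translation and scale, then solve orthogonal Procrustes
    for the optimal rotation via SVD."""
    A = shape - shape.mean(0)           # remove translation
    B = ref - ref.mean(0)
    A = A / np.linalg.norm(A)           # remove scale
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                          # optimal orthogonal transform
    return A @ R.T

# Reference shape: a unit square of 4 landmarks.
ref = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
# The same square, rotated 30 degrees, scaled by 2.5, and shifted.
th = np.pi / 6
rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
moved = 2.5 * ref @ rot.T + np.array([3., -1.])

aligned = align(moved, ref)
target = (ref - ref.mean(0)) / np.linalg.norm(ref - ref.mean(0))
resid = np.linalg.norm(aligned - target)   # should be ~0 for exact similarity
```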
k{\displaystyle k}aligned landmarks in two dimensions are given as
It's important to note that each landmarki∈{1,…k}{\displaystyle i\in \lbrace 1,\ldots k\rbrace }should represent the same anatomical location. For example, landmark #3,(x3,y3){\displaystyle (x_{3},y_{3})}might represent the tip of the ring finger across all training images.
Now the shape outlines are reduced to sequences of k{\displaystyle k} landmarks, so that a given training shape is defined as the vector X∈R2k{\displaystyle \mathbf {X} \in \mathbb {R} ^{2k}}. Assuming the scattering is Gaussian in this space, PCA is used to compute normalized eigenvectors and eigenvalues of the covariance matrix across all training shapes. The matrix of the top d{\displaystyle d} eigenvectors is given as P∈R2k×d{\displaystyle \mathbf {P} \in \mathbb {R} ^{2k\times d}}, and each eigenvector describes a principal mode of variation along the set.
Finally, a linear combination of the eigenvectors is used to define a new shape X′{\displaystyle \mathbf {X} '}, mathematically defined as:

X′=X¯+Pb{\displaystyle \mathbf {X} '={\overline {\mathbf {X} }}+\mathbf {P} \mathbf {b} }
whereX¯{\displaystyle {\overline {\mathbf {X} }}}is defined as the mean shape across all training images, andb{\displaystyle \mathbf {b} }is a vector of scaling values for each principal component. Therefore, by modifying the variableb{\displaystyle \mathbf {b} }an infinite number of shapes can be defined. To ensure that the new shapes are all within the variation seen in the training set, it is common to only allow each element ofb{\displaystyle \mathbf {b} }to be within±{\displaystyle \pm }3 standard deviations, where the standard deviation of a given principal component is defined as the square root of its corresponding eigenvalue.
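The generation recipe (mean shape plus a bounded linear combination of the top modes) can be sketched on synthetic landmark data; the toy training set and all names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10                                   # landmarks per shape
angles = np.linspace(0, 2 * np.pi, k, endpoint=False)
base = np.c_[np.cos(angles), np.sin(angles)]   # a circular contour

# 40 training shapes: the base contour with one dominant mode of
# variation (overall scale) plus a little landmark noise.
shapes = np.array([(base * (1 + 0.2 * rng.standard_normal())).ravel()
                   + 0.01 * rng.standard_normal(2 * k)
                   for _ in range(40)])

mean_shape = shapes.mean(0)
# PCA of the training shapes via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigvals = S ** 2 / (len(shapes) - 1)     # PCA eigenvalues (variances)
P = Vt[:2].T                             # top d = 2 modes, shape (2k, d)

# A new plausible shape: mean + P b, with each b_i within +/- 3 std devs.
b = np.array([2.0 * np.sqrt(eigvals[0]), -1.0 * np.sqrt(eigvals[1])])
new_shape = (mean_shape + P @ b).reshape(k, 2)
```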
PDMs can be extended to an arbitrary number of dimensions, but are typically used in 2D image and 3D volume applications (where each landmark point is R2{\displaystyle \mathbb {R} ^{2}} or R3{\displaystyle \mathbb {R} ^{3}}).
An eigenvector, interpreted in Euclidean space, can be seen as a sequence of k{\displaystyle k} Euclidean vectors, each associated with a corresponding landmark and together designating a compound move of the whole shape. Global nonlinear variation is usually well handled provided nonlinear variation is kept to a reasonable level. Typically, a twisting nematode worm is used as an example in the teaching of kernel PCA-based methods.
By the properties of PCA, the eigenvectors are mutually orthogonal, form a basis of the training set cloud in shape space, and cross at the origin of this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation.
The idea behind PDMs is that eigenvectors can be linearly combined to create an infinity of new shape instances that will 'look like' those in the training set. The coefficients are bounded according to the values of the corresponding eigenvalues, so as to ensure that the generated point in 2k- or 3k-dimensional shape space remains within the hyper-ellipsoidal allowed domain, the allowable shape domain (ASD).[2]
|
https://en.wikipedia.org/wiki/Point_distribution_model
|
Instatistics,principal component regression(PCR) is aregression analysistechnique that is based onprincipal component analysis(PCA). PCR is a form ofreduced rank regression.[1]More specifically, PCR is used forestimatingthe unknownregression coefficientsin astandard linear regression model.
In PCR, instead of regressing the dependent variable on the explanatory variables directly, theprincipal componentsof the explanatory variables are used asregressors. One typically uses only a subset of all the principal components for regression, making PCR a kind ofregularizedprocedure and also a type ofshrinkage estimator.
Often the principal components with highervariances(the ones based oneigenvectorscorresponding to the highereigenvaluesof thesamplevariance-covariance matrixof the explanatory variables) are selected as regressors. However, for the purpose ofpredictingthe outcome, the principal components with low variances may also be important, in some cases even more important.[2]
One major use of PCR lies in overcoming themulticollinearityproblem which arises when two or more of the explanatory variables are close to beingcollinear.[3]PCR can aptly deal with such situations by excluding some of the low-variance principal components in the regression step. In addition, by usually regressing on only a subset of all the principal components, PCR can result indimension reductionthrough substantially lowering the effective number of parameters characterizing the underlying model. This can be particularly useful in settings withhigh-dimensional covariates. Also, through appropriate selection of the principal components to be used for regression, PCR can lead to efficientpredictionof the outcome based on the assumed model.
The PCR method may be broadly divided into three major steps:
Data representation:LetYn×1=(y1,…,yn)T{\displaystyle \mathbf {Y} _{n\times 1}=\left(y_{1},\ldots ,y_{n}\right)^{T}}denote the vector of observed outcomes andXn×p=(x1,…,xn)T{\displaystyle \mathbf {X} _{n\times p}=\left(\mathbf {x} _{1},\ldots ,\mathbf {x} _{n}\right)^{T}}denote the correspondingdata matrixof observed covariates where,n{\displaystyle n}andp{\displaystyle p}denote the size of the observedsampleand the number of covariates respectively, withn≥p{\displaystyle n\geq p}. Each of then{\displaystyle n}rows ofX{\displaystyle \mathbf {X} }denotes one set of observations for thep{\displaystyle p}dimensionalcovariate and the respective entry ofY{\displaystyle \mathbf {Y} }denotes the corresponding observed outcome.
Data pre-processing:Assume thatY{\displaystyle \mathbf {Y} }and each of thep{\displaystyle p}columns ofX{\displaystyle \mathbf {X} }have already beencenteredso that all of them have zeroempirical means. This centering step is crucial (at least for the columns ofX{\displaystyle \mathbf {X} }) since PCR involves the use of PCA onX{\displaystyle \mathbf {X} }andPCA is sensitivetocenteringof the data.
Underlying model:Following centering, the standardGauss–Markovlinear regressionmodel forY{\displaystyle \mathbf {Y} }onX{\displaystyle \mathbf {X} }can be represented as:Y=Xβ+ε,{\displaystyle \mathbf {Y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\;}whereβ∈Rp{\displaystyle {\boldsymbol {\beta }}\in \mathbb {R} ^{p}}denotes the unknown parameter vector of regression coefficients andε{\displaystyle {\boldsymbol {\varepsilon }}}denotes the vector of random errors withE(ε)=0{\displaystyle \operatorname {E} \left({\boldsymbol {\varepsilon }}\right)=\mathbf {0} \;}andVar(ε)=σ2In×n{\displaystyle \;\operatorname {Var} \left({\boldsymbol {\varepsilon }}\right)=\sigma ^{2}I_{n\times n}}for some unknownvarianceparameterσ2>0{\displaystyle \sigma ^{2}>0\;\;}
Objective:The primary goal is to obtain an efficientestimatorβ^{\displaystyle {\widehat {\boldsymbol {\beta }}}}for the parameterβ{\displaystyle {\boldsymbol {\beta }}}, based on the data. One frequently used approach for this isordinary least squaresregression which, assumingX{\displaystyle \mathbf {X} }isfull column rank, gives theunbiased estimator:β^ols=(XTX)−1XTY{\displaystyle {\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} }=(\mathbf {X} ^{T}\mathbf {X} )^{-1}\mathbf {X} ^{T}\mathbf {Y} }ofβ{\displaystyle {\boldsymbol {\beta }}}. PCR is another technique that may be used for the same purpose of estimatingβ{\displaystyle {\boldsymbol {\beta }}}.
PCA step:PCR starts by performing a PCA on the centered data matrixX{\displaystyle \mathbf {X} }. For this, letX=UΔVT{\displaystyle \mathbf {X} =U\Delta V^{T}}denote thesingular value decompositionofX{\displaystyle \mathbf {X} }where,Δp×p=diag[δ1,…,δp]{\displaystyle \Delta _{p\times p}=\operatorname {diag} \left[\delta _{1},\ldots ,\delta _{p}\right]}withδ1≥⋯≥δp≥0{\displaystyle \delta _{1}\geq \cdots \geq \delta _{p}\geq 0}denoting the non-negativesingular valuesofX{\displaystyle \mathbf {X} }, while thecolumnsofUn×p=[u1,…,up]{\displaystyle U_{n\times p}=[\mathbf {u} _{1},\ldots ,\mathbf {u} _{p}]}andVp×p=[v1,…,vp]{\displaystyle V_{p\times p}=[\mathbf {v} _{1},\ldots ,\mathbf {v} _{p}]}are bothorthonormal setsof vectors denoting theleft and right singular vectorsofX{\displaystyle \mathbf {X} }respectively.
The principal components:VΛVT{\displaystyle V\Lambda V^{T}}gives aspectral decompositionofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }whereΛp×p=diag[λ1,…,λp]=diag[δ12,…,δp2]=Δ2{\displaystyle \Lambda _{p\times p}=\operatorname {diag} \left[\lambda _{1},\ldots ,\lambda _{p}\right]=\operatorname {diag} \left[\delta _{1}^{2},\ldots ,\delta _{p}^{2}\right]=\Delta ^{2}}withλ1≥⋯≥λp≥0{\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{p}\geq 0}denoting the non-negative eigenvalues (also known as theprincipal values) ofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }, while the columns ofV{\displaystyle V}denote the corresponding orthonormal set of eigenvectors. Then,Xvj{\displaystyle \mathbf {X} \mathbf {v} _{j}}andvj{\displaystyle \mathbf {v} _{j}}respectively denote thejth{\displaystyle j^{th}}principal componentand thejth{\displaystyle j^{th}}principal component direction(orPCA loading) corresponding to thejth{\displaystyle j^{\text{th}}}largestprincipal valueλj{\displaystyle \lambda _{j}}for eachj∈{1,…,p}{\displaystyle j\in \{1,\ldots ,p\}}.
Derived covariates:For anyk∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}, letVk{\displaystyle V_{k}}denote thep×k{\displaystyle p\times k}matrix with orthonormal columns consisting of the firstk{\displaystyle k}columns ofV{\displaystyle V}. LetWk=XVk{\displaystyle W_{k}=\mathbf {X} V_{k}}=[Xv1,…,Xvk]{\displaystyle =[\mathbf {X} \mathbf {v} _{1},\ldots ,\mathbf {X} \mathbf {v} _{k}]}denote then×k{\displaystyle n\times k}matrix having the firstk{\displaystyle k}principal components as its columns.Wk{\displaystyle W_{k}}may be viewed as the data matrix obtained by using thetransformedcovariatesxik=VkTxi∈Rk{\displaystyle \mathbf {x} _{i}^{k}=V_{k}^{T}\mathbf {x} _{i}\in \mathbb {R} ^{k}}instead of using the original covariatesxi∈Rp∀1≤i≤n{\displaystyle \mathbf {x} _{i}\in \mathbb {R} ^{p}\;\;\forall \;\;1\leq i\leq n}.
The PCR estimator:Letγ^k=(WkTWk)−1WkTY∈Rk{\displaystyle {\widehat {\gamma }}_{k}=(W_{k}^{T}W_{k})^{-1}W_{k}^{T}\mathbf {Y} \in \mathbb {R} ^{k}}denote the vector of estimated regression coefficients obtained byordinary least squaresregression of the response vectorY{\displaystyle \mathbf {Y} }on the data matrixWk{\displaystyle W_{k}}. Then, for anyk∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}, the final PCR estimator ofβ{\displaystyle {\boldsymbol {\beta }}}based on using the firstk{\displaystyle k}principal components is given by:β^k=Vkγ^k∈Rp{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}=V_{k}{\widehat {\gamma }}_{k}\in \mathbb {R} ^{p}}.
The fitting process for obtaining the PCR estimator involves regressing the response vector on the derived data matrixWk{\displaystyle W_{k}}which hasorthogonalcolumns for anyk∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}since the principal components aremutually orthogonalto each other. Thus in the regression step, performing amultiple linear regressionjointly on thek{\displaystyle k}selected principal components as covariates is equivalent to carrying outk{\displaystyle k}independentsimple linear regressions(or univariate regressions) separately on each of thek{\displaystyle k}selected principal components as a covariate.
When all the principal components are selected for regression so thatk=p{\displaystyle k=p}, then the PCR estimator is equivalent to theordinary least squaresestimator. Thus,β^p=β^ols{\displaystyle {\widehat {\boldsymbol {\beta }}}_{p}={\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} }}. This is easily seen from the fact thatWp=XVp=XV{\displaystyle W_{p}=\mathbf {X} V_{p}=\mathbf {X} V}and also observing thatV{\displaystyle V}is anorthogonal matrix.
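The PCA and regression steps above can be sketched in a few lines of numpy. The function name `pcr_fit` and the simulated data are illustrative, not from any standard library; the final assertion checks the claim that taking k = p recovers the ordinary least squares estimator.

```python
import numpy as np

def pcr_fit(X, Y, k):
    """PCR estimate of beta using the first k principal components.

    X and Y are assumed already centered, matching the pre-processing step."""
    # Thin SVD of the centered design matrix: X = U diag(d) V^T
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    V_k = Vt[:k].T                      # p x k matrix of PC directions
    W_k = X @ V_k                       # n x k matrix of principal components
    # OLS of Y on W_k; W_k^T W_k is diagonal, so this amounts to
    # k independent univariate regressions.
    gamma_k = np.linalg.solve(W_k.T @ W_k, W_k.T @ Y)
    return V_k @ gamma_k                # PCR estimator in the original space

rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)
Y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(n)
Y -= Y.mean()

beta_pcr = pcr_fit(X, Y, k=2)                      # regularized: 2 components
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
# With k = p, the PCR estimator coincides with OLS:
assert np.allclose(pcr_fit(X, Y, k=p), beta_ols)
```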
For anyk∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}, the variance ofβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}is given by
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k})=\sigma ^{2}V_{k}(W_{k}^{T}W_{k})^{-1}V_{k}^{T}=\sigma ^{2}\sum _{j=1}^{k}\lambda _{j}^{-1}\mathbf {v} _{j}\mathbf {v} _{j}^{T}.}
In particular:
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })=\sigma ^{2}(\mathbf {X} ^{T}\mathbf {X} )^{-1}=\sigma ^{2}\sum _{j=1}^{p}\lambda _{j}^{-1}\mathbf {v} _{j}\mathbf {v} _{j}^{T}.}
Hence for allk∈{1,…,p−1}{\displaystyle k\in \{1,\ldots ,p-1\}}we have:
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k+1})-\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k})=\sigma ^{2}\lambda _{k+1}^{-1}\mathbf {v} _{k+1}\mathbf {v} _{k+1}^{T}\succeq 0.}
Thus, for allk∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}we have:
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })-\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k})=\sigma ^{2}\sum _{j=k+1}^{p}\lambda _{j}^{-1}\mathbf {v} _{j}\mathbf {v} _{j}^{T}\succeq 0,}
whereA⪰0{\displaystyle A\succeq 0}indicates that a square symmetric matrixA{\displaystyle A}isnon-negative definite. Consequently, any givenlinear formof the PCR estimator has a lower variance compared to that of the samelinear formof the ordinary least squares estimator.
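The variance-dominance relation can be verified numerically from the spectral forms above. A minimal sketch on simulated data (the common factor σ² is omitted since it cancels in the comparison):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 40, 5, 3
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)

_, d, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T                        # columns v_1, ..., v_p
lam = d ** 2                    # eigenvalues of X^T X, decreasing

# Spectral forms of the two variances, up to the factor sigma^2
var_ols = V @ np.diag(1.0 / lam) @ V.T
var_k = V[:, :k] @ np.diag(1.0 / lam[:k]) @ V[:, :k].T

# Their difference is non-negative definite:
diff = var_ols - var_k
min_eig = np.linalg.eigvalsh(diff).min()
```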
Undermulticollinearity, two or more of the covariates are highlycorrelated, so that one can be linearly predicted from the others with a non-trivial degree of accuracy. Consequently, the columns of the data matrixX{\displaystyle \mathbf {X} }that correspond to the observations for these covariates tend to becomelinearly dependentand therefore,X{\displaystyle \mathbf {X} }tends to becomerank deficientlosing its full column rank structure. More quantitatively, one or more of the smaller eigenvalues ofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }get(s) very close or become(s) exactly equal to0{\displaystyle 0}under such situations. The variance expressions above indicate that these small eigenvalues have the maximuminflation effecton the variance of the least squares estimator, therebydestabilizingthe estimator significantly when they are close to0{\displaystyle 0}. This issue can be effectively addressed through using a PCR estimator obtained by excluding the principal components corresponding to these small eigenvalues.
PCR may also be used for performingdimension reduction. To see this, letLk{\displaystyle L_{k}}denote anyp×k{\displaystyle p\times k}matrix having orthonormal columns, for anyk∈{1,…,p}.{\displaystyle k\in \{1,\ldots ,p\}.}Suppose now that we want toapproximateeach of the covariate observationsxi{\displaystyle \mathbf {x} _{i}}through therankk{\displaystyle k}linear transformationLkzi{\displaystyle L_{k}\mathbf {z} _{i}}for somezi∈Rk(1≤i≤n){\displaystyle \mathbf {z} _{i}\in \mathbb {R} ^{k}(1\leq i\leq n)}.
Then, it can be shown that the total squared approximation error{\displaystyle \sum _{i=1}^{n}\left\|\mathbf {x} _{i}-L_{k}\mathbf {z} _{i}\right\|^{2}}
is minimized atLk=Vk,{\displaystyle L_{k}=V_{k},}the matrix with the firstk{\displaystyle k}principal component directions as columns, andzi=xik=VkTxi,{\displaystyle \mathbf {z} _{i}=\mathbf {x} _{i}^{k}=V_{k}^{T}\mathbf {x} _{i},}the correspondingk{\displaystyle k}dimensional derived covariates. Thus thek{\displaystyle k}dimensional principal components provide the bestlinear approximationof rankk{\displaystyle k}to the observed data matrixX{\displaystyle \mathbf {X} }.
The correspondingreconstruction erroris given by:
{\displaystyle \sum _{i=1}^{n}\left\|\mathbf {x} _{i}-V_{k}\mathbf {x} _{i}^{k}\right\|^{2}=\sum _{j=k+1}^{p}\lambda _{j}.}
Thus any potentialdimension reductionmay be achieved by choosingk{\displaystyle k}, the number of principal components to be used, through appropriate thresholding on the cumulative sum of theeigenvaluesofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }. Since the smaller eigenvalues do not contribute significantly to the cumulative sum, the corresponding principal components may be continued to be dropped as long as the desired threshold limit is not exceeded. The same criteria may also be used for addressing themulticollinearityissue whereby the principal components corresponding to the smaller eigenvalues may be ignored as long as the threshold limit is maintained.
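The thresholding rule described above can be sketched as follows; `choose_k` is a hypothetical helper, not a library function.

```python
import numpy as np

def choose_k(X, threshold=0.95):
    """Smallest k whose leading eigenvalues of X^T X reach the given
    fraction of the total eigenvalue sum (cumulative thresholding)."""
    # Eigenvalues of X^T X are the squared singular values of X,
    # returned in decreasing order by np.linalg.svd.
    lam = np.linalg.svd(X, compute_uv=False) ** 2
    frac = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(frac, threshold) + 1)

# Toy design with one dominant direction: the first component alone
# explains ~99% of the total variance, so a 95% threshold keeps k = 1.
X = np.diag([10.0, 1.0, 0.1, 0.01])
```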
Since the PCR estimator typically uses only a subset of all the principal components for regression, it can be viewed as some sort of aregularizedprocedure. More specifically, for any1⩽k<p{\displaystyle 1\leqslant k<p}, the PCR estimatorβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}denotes the regularized solution to the followingconstrained minimizationproblem:
{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}=\arg \min _{{\boldsymbol {\beta }}^{*}\in \mathbb {R} ^{p}}\left\|\mathbf {Y} -\mathbf {X} {\boldsymbol {\beta }}^{*}\right\|^{2}\quad {\text{subject to}}\quad {\boldsymbol {\beta }}^{*}\in \operatorname {span} \{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}.}
The constraint may be equivalently written as:
{\displaystyle V_{(p-k)}^{T}{\boldsymbol {\beta }}^{*}=\mathbf {0} ,}
where:
{\displaystyle V_{(p-k)}=[\mathbf {v} _{k+1},\ldots ,\mathbf {v} _{p}]}
denotes thep×(p−k){\displaystyle p\times (p-k)}matrix whose columns are the lastp−k{\displaystyle p-k}principal component directions.
Thus, when only a proper subset of all the principal components are selected for regression, the PCR estimator so obtained is based on a hard form ofregularizationthat constrains the resulting solution to thecolumn spaceof the selected principal component directions, and consequently restricts it to beorthogonalto the excluded directions.
Given the constrained minimization problem as defined above, consider the following generalized version of it:
{\displaystyle \min _{{\boldsymbol {\beta }}^{*}\in \mathbb {R} ^{p}}\left\|\mathbf {Y} -\mathbf {X} {\boldsymbol {\beta }}^{*}\right\|^{2}\quad {\text{subject to}}\quad L_{(p-k)}^{T}{\boldsymbol {\beta }}^{*}=\mathbf {0} ,}
where,L(p−k){\displaystyle L_{(p-k)}}denotes any full column rank matrix of orderp×(p−k){\displaystyle p\times (p-k)}with1⩽k<p{\displaystyle 1\leqslant k<p}.
Letβ^L{\displaystyle {\widehat {\boldsymbol {\beta }}}_{L}}denote the corresponding constrained least squares solution.
Then the optimal choice of the restriction matrixL(p−k){\displaystyle L_{(p-k)}}for which the corresponding estimatorβ^L{\displaystyle {\widehat {\boldsymbol {\beta }}}_{L}}achieves the minimum prediction error is one whose columns span the same subspace as those of
{\displaystyle V_{(p-k)}=[\mathbf {v} _{k+1},\ldots ,\mathbf {v} _{p}],}
the matrix of the excluded principal component directions.[4]
Quite clearly, the resulting optimal estimatorβ^L∗{\displaystyle {\widehat {\boldsymbol {\beta }}}_{L^{*}}}is then simply given by the PCR estimatorβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}based on the firstk{\displaystyle k}principal components.
Since the ordinary least squares estimator isunbiasedforβ{\displaystyle {\boldsymbol {\beta }}}, we have
{\displaystyle \operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })=\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} }),}
where, MSE denotes themean squared error. Now, if for somek∈{1,…,p}{\displaystyle k\in \{1,\ldots ,p\}}, we additionally have:V(p−k)Tβ=0{\displaystyle V_{(p-k)}^{T}{\boldsymbol {\beta }}=\mathbf {0} }, then the correspondingβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}is alsounbiasedforβ{\displaystyle {\boldsymbol {\beta }}}and therefore
{\displaystyle \operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{k})=\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k}).}
We have already seen that
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })-\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k})\succeq 0,}
which then implies:
{\displaystyle \operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })-\operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{k})\succeq 0}
for that particulark{\displaystyle k}. Thus in that case, the correspondingβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}would be a moreefficient estimatorofβ{\displaystyle {\boldsymbol {\beta }}}compared toβ^ols{\displaystyle {\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} }}, based on using the mean squared error as the performance criteria. In addition, any givenlinear formof the correspondingβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}would also have a lowermean squared errorcompared to that of the samelinear formofβ^ols{\displaystyle {\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} }}.
Now suppose that for a givenk∈{1,…,p},V(p−k)Tβ≠0{\displaystyle k\in \{1,\ldots ,p\},V_{(p-k)}^{T}{\boldsymbol {\beta }}\neq \mathbf {0} }. Then the correspondingβ^k{\displaystyle {\widehat {\boldsymbol {\beta }}}_{k}}isbiasedforβ{\displaystyle {\boldsymbol {\beta }}}. However, since
{\displaystyle \operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })-\operatorname {Var} ({\widehat {\boldsymbol {\beta }}}_{k})\succeq 0}
still holds,
it is still possible thatMSE(β^ols)−MSE(β^k)⪰0{\displaystyle \operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{\mathrm {ols} })-\operatorname {MSE} ({\widehat {\boldsymbol {\beta }}}_{k})\succeq 0}, especially ifk{\displaystyle k}is such that the excluded principal components correspond to the smaller eigenvalues, thereby resulting in lowerbias.
In order to ensure efficient estimation and prediction performance of PCR as an estimator ofβ{\displaystyle {\boldsymbol {\beta }}}, Park (1981)[4]proposes the following guideline for selecting the principal components to be used for regression: Drop thejth{\displaystyle j^{th}}principal component if and only ifλj<(pσ2)/βTβ.{\displaystyle \lambda _{j}<(p\sigma ^{2})/{\boldsymbol {\beta }}^{T}{\boldsymbol {\beta }}.}Practical implementation of this guideline of course requires estimates for the unknown model parametersσ2{\displaystyle \sigma ^{2}}andβ{\displaystyle {\boldsymbol {\beta }}}. In general, they may be estimated using the unrestricted least squares estimates obtained from the original full model. Park (1981) however provides a slightly modified set of estimates that may be better suited for this purpose.[4]
Unlike the criterion based on the cumulative sum of the eigenvalues ofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }, which is probably more suited for addressing the multicollinearity problem and for performing dimension reduction, the above criterion actually attempts to improve the prediction and estimation efficiency of the PCR estimator by involving both the outcome as well as the covariates in the process of selecting the principal components to be used in the regression step. Alternative approaches with similar goals include selection of the principal components based oncross-validationor theMallows'sCpcriterion. Often, the principal components are also selected based on their degree ofassociationwith the outcome.
In general, PCR is essentially ashrinkage estimatorthat usually retains the high variance principal components (corresponding to the higher eigenvalues ofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }) as covariates in the model and discards the remaining low variance components (corresponding to the lower eigenvalues ofXTX{\displaystyle \mathbf {X} ^{T}\mathbf {X} }). Thus it exerts a discreteshrinkage effecton the low variance components nullifying their contribution completely in the original model. In contrast, theridge regressionestimator exerts a smooth shrinkage effect through theregularization parameter(or the tuning parameter) inherently involved in its construction. While it does not completely discard any of the components, it exerts a shrinkage effect over all of them in a continuous manner so that the extent of shrinkage is higher for the low variance components and lower for the high variance components. Frank and Friedman (1993)[5]conclude that for the purpose of prediction itself, the ridge estimator, owing to its smooth shrinkage effect, is perhaps a better choice compared to the PCR estimator having a discrete shrinkage effect.
In addition, the principal components are obtained from theeigen-decompositionofX{\displaystyle \mathbf {X} }that involves the observations for the explanatory variables only. Therefore, the resulting PCR estimator obtained from using these principal components as covariates need not necessarily have satisfactory predictive performance for the outcome. A somewhat similar estimator that tries to address this issue through its very construction is thepartial least squares(PLS) estimator. Similar to PCR, PLS also uses derived covariates of lower dimensions. However unlike PCR, the derived covariates for PLS are obtained based on using both the outcome as well as the covariates. While PCR seeks the high variance directions in the space of the covariates, PLS seeks the directions in the covariate space that are most useful for the prediction of the outcome.
In 2006, a variant of the classical PCR known as thesupervised PCRwas proposed.[6]In a spirit similar to that of PLS, it attempts to obtain derived covariates of lower dimensions based on a criterion that involves both the outcome as well as the covariates. The method starts by performing a set ofp{\displaystyle p}simple linear regressions(or univariate regressions) wherein the outcome vector is regressed separately on each of thep{\displaystyle p}covariates taken one at a time. Then, for somem∈{1,…,p}{\displaystyle m\in \{1,\ldots ,p\}}, the firstm{\displaystyle m}covariates that turn out to be the most correlated with the outcome (based on the degree of significance of the corresponding estimated regression coefficients) are selected for further use. A conventional PCR, as described earlier, is then performed, but now it is based on only then×m{\displaystyle n\times m}data matrix corresponding to the observations for the selected covariates. The number of covariates used:m∈{1,…,p}{\displaystyle m\in \{1,\ldots ,p\}}and the subsequent number of principal components used:k∈{1,…,m}{\displaystyle k\in \{1,\ldots ,m\}}are usually selected bycross-validation.
The classical PCR method as described above is based onclassical PCAand considers alinear regression modelfor predicting the outcome based on the covariates. However, it can be easily generalized to akernel machinesetting whereby theregression functionneed not necessarily belinearin the covariates, but instead it can belong to theReproducing Kernel Hilbert Spaceassociated with any arbitrary (possiblynon-linear),symmetricpositive-definite kernel. Thelinear regression modelturns out to be a special case of this setting when thekernel functionis chosen to be thelinear kernel.
In general, under thekernel machinesetting, the vector of covariates is firstmappedinto ahigh-dimensional(potentiallyinfinite-dimensional)feature spacecharacterized by thekernel functionchosen. Themappingso obtained is known as thefeature mapand each of itscoordinates, also known as thefeature elements, corresponds to one feature (may belinearornon-linear) of the covariates. Theregression functionis then assumed to be alinear combinationof thesefeature elements. Thus, theunderlying regression modelin thekernel machinesetting is essentially alinear regression modelwith the understanding that instead of the original set of covariates, the predictors are now given by the vector (potentiallyinfinite-dimensional) offeature elementsobtained bytransformingthe actual covariates using thefeature map.
However, thekernel trickactually enables us to operate in thefeature spacewithout ever explicitly computing thefeature map. It turns out that it is only sufficient to compute the pairwiseinner productsamong the feature maps for the observed covariate vectors and theseinner productsare simply given by the values of thekernel functionevaluated at the corresponding pairs of covariate vectors. The pairwise inner products so obtained may therefore be represented in the form of an×n{\displaystyle n\times n}symmetric non-negative definite matrix also known as thekernel matrix.
PCR in thekernel machinesetting can now be implemented by firstappropriately centeringthiskernel matrix(K, say) with respect to thefeature spaceand then performing akernel PCAon thecentered kernel matrix(K', say) whereby aneigendecompositionof K' is obtained. Kernel PCR then proceeds by (usually) selecting a subset of all theeigenvectorsso obtained and then performing astandard linear regressionof the outcome vector on these selectedeigenvectors. Theeigenvectorsto be used for regression are usually selected usingcross-validation. The estimated regression coefficients (having the same dimension as the number of selected eigenvectors) along with the corresponding selected eigenvectors are then used for predicting the outcome for a future observation. Inmachine learning, this technique is also known asspectral regression.
Clearly, kernel PCR has a discrete shrinkage effect on the eigenvectors of K', quite similar to the discrete shrinkage effect of classical PCR on the principal components, as discussed earlier. However, the feature map associated with the chosen kernel could potentially be infinite-dimensional, and hence the corresponding principal components and principal component directions could be infinite-dimensional as well. Therefore, these quantities are often practically intractable under the kernel machine setting. Kernel PCR essentially works around this problem by considering an equivalent dual formulation based on using thespectral decompositionof the associated kernel matrix. Under the linear regression model (which corresponds to choosing the kernel function as the linear kernel), this amounts to considering a spectral decomposition of the correspondingn×n{\displaystyle n\times n}kernel matrixXXT{\displaystyle \mathbf {X} \mathbf {X} ^{T}}and then regressing the outcome vector on a selected subset of the eigenvectors ofXXT{\displaystyle \mathbf {X} \mathbf {X} ^{T}}so obtained. It can be easily shown that this is the same as regressing the outcome vector on the corresponding principal components (which are finite-dimensional in this case), as defined in the context of the classical PCR. Thus, for the linear kernel, the kernel PCR based on a dual formulation is exactly equivalent to the classical PCR based on a primal formulation. However, for arbitrary (and possibly non-linear) kernels, this primal formulation may become intractable owing to the infinite dimensionality of the associated feature map. Thus classical PCR becomes practically infeasible in that case, but kernel PCR based on the dual formulation still remains valid and computationally scalable.
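The stated equivalence for the linear kernel can be checked numerically: regressing the outcome on the top-k eigenvectors of the kernel matrix XX^T (dual formulation) yields the same fitted values as regressing on the first k principal components (primal formulation), since both span the same subspace. A minimal numpy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 30, 5, 3
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)
Y = rng.standard_normal(n); Y -= Y.mean()

# Dual formulation: eigendecomposition of the n x n linear-kernel matrix
K = X @ X.T
eigvals, eigvecs = np.linalg.eigh(K)       # ascending order
U_k = eigvecs[:, ::-1][:, :k]              # top-k eigenvectors, orthonormal
fit_dual = U_k @ (U_k.T @ Y)               # fitted values from OLS on U_k

# Primal formulation: regress on the first k principal components
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W_k = X @ Vt[:k].T
fit_primal = W_k @ np.linalg.lstsq(W_k, Y, rcond=None)[0]
```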
https://en.wikipedia.org/wiki/Principal_component_regression
Intime series analysis,singular spectrum analysis(SSA) is anonparametricspectral estimationmethod. It combines elements of classicaltime seriesanalysis,multivariate statistics, multivariate geometry,dynamical systemsandsignal processing. Its roots lie in the classical Karhunen (1946)–Loève (1945, 1978)spectral decompositionoftime seriesandrandom fieldsand in the Mañé (1981)–Takens (1981)embedding theorem. SSA can be an aid in thedecomposition of time seriesinto a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum ofeigenvaluesin asingular value decompositionof acovariance matrix, and not directly to afrequency domain decomposition.
The origins of SSA and, more generally, of subspace-based methods for signal processing, go back to the eighteenth century (Prony's method). A key development was the formulation of thespectral decompositionof thecovariance operatorof stochastic processes byKari KarhunenandMichel Loèvein the late 1940s (Loève, 1945; Karhunen, 1947).
Broomhead and King (1986a, b) and Fraedrich (1986) proposed to use SSA and multichannel SSA (M-SSA) in the context of nonlinear dynamics for the purpose of reconstructing theattractorof a system from measured time series. These authors provided an extension and a more robust application of the idea of reconstructing dynamics from a single time series based on theembedding theorem. Several other authors had already applied simple versions of M-SSA to meteorological and ecological data sets (Colebrook, 1978; Barnett and Hasselmann, 1979; Weare and Nasstrom, 1982).
Ghil, Vautard and their colleagues (Vautard and Ghil, 1989; Ghil and Vautard, 1991; Vautard et al., 1992; Ghil et al., 2002) noticed the analogy between the trajectory matrix of Broomhead and King, on the one hand, and theKarhunen–Loeve decomposition(Principal component analysisin the time domain), on the other. Thus, SSA can be used as a time-and-frequency domain method fortime seriesanalysis — independently fromattractorreconstruction and including cases in which the latter may fail. The survey paper of Ghil et al. (2002) is the basis of the§ Methodologysection of this article. A crucial result of the work of these authors is that SSA can robustly recover the "skeleton" of an attractor, including in the presence of noise. This skeleton is formed by the least unstable periodic orbits, which can be identified in the eigenvalue spectra of SSA and M-SSA. The identification and detailed description of these orbits can provide highly useful pointers to the underlying nonlinear dynamics.
The so-called ‘Caterpillar’ methodology is a version of SSA that was developed in the former Soviet Union, independently of the mainstream SSA work in the West. This methodology became known in the rest of the world more recently (Danilov and Zhigljavsky, Eds., 1997; Golyandina et al., 2001; Zhigljavsky, Ed., 2010; Golyandina and Zhigljavsky, 2013; Golyandina et al., 2018). ‘Caterpillar-SSA’ emphasizes the concept of separability, a concept that leads, for example, to specific recommendations concerning the choice of SSA parameters. This method is thoroughly described in§ SSA as a model-free toolof this article.
In practice, SSA is a nonparametric spectral estimation method based on embedding atime series{X(t):t=1,…,N}{\displaystyle \{X(t):t=1,\ldots ,N\}}in a vector space of dimensionM{\displaystyle M}. SSA proceeds by diagonalizing theM×M{\displaystyle M\times M}lag-covariance matrixCX{\displaystyle {\textbf {C}}_{X}}ofX(t){\displaystyle X(t)}to obtainspectral informationon the time series, assumed to bestationaryin the weak sense. The matrixCX{\displaystyle {\textbf {C}}_{X}}can be estimated directly from the data as aToeplitz matrixwith constant diagonals (Vautard and Ghil, 1989), i.e., its entriescij{\displaystyle c_{ij}}depend only on the lag|i−j|{\displaystyle |i-j|}:
{\displaystyle c_{ij}={\frac {1}{N-|i-j|}}\sum _{t=1}^{N-|i-j|}X(t)\,X(t+|i-j|).}
An alternative way to computeCX{\displaystyle {\textbf {C}}_{X}}is by using theN′×M{\displaystyle N'\times M}"trajectory matrix"D{\displaystyle {\textbf {D}}}that is formed byM{\displaystyle M}lag-shifted copies ofX(t){\displaystyle {\it {X(t)}}}, which areN′=N−M+1{\displaystyle N'=N-M+1}long; then
{\displaystyle {\textbf {C}}_{X}={\frac {1}{N'}}{\textbf {D}}^{T}{\textbf {D}}.}
TheM{\displaystyle M}eigenvectorsEk{\displaystyle {\textbf {E}}_{k}}of the lag-covariance matrixCX{\displaystyle {\textbf {C}}_{X}}are called temporalempirical orthogonal functions (EOFs). The eigenvaluesλk{\displaystyle \lambda _{k}}ofCX{\displaystyle {\textbf {C}}_{X}}account for the partial variance in the
directionEk{\displaystyle {\textbf {E}}_{k}}and the sum of the eigenvalues, i.e., the trace ofCX{\displaystyle {\textbf {C}}_{X}}, gives the total variance of the original time seriesX(t){\displaystyle X(t)}. The name of the method derives from the singular valuesλk1/2{\displaystyle \lambda _{k}^{1/2}}ofCX.{\displaystyle {\textbf {C}}_{X}.}
Projecting the time series onto each EOF yields the corresponding temporal principal components (PCs)Ak{\displaystyle {\textbf {A}}_{k}}:
{\displaystyle A_{k}(t)=\sum _{j=1}^{M}X(t+j-1)\,E_{k}(j),\qquad t=1,\ldots ,N'.}
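The embedding, lag-covariance estimation, and projection steps can be sketched as follows; `ssa_decompose` is an illustrative helper (the lag-covariance matrix is estimated from the trajectory matrix rather than the Toeplitz form). For a noisy sine wave, the oscillation shows up as a pair of nearly equal leading eigenvalues:

```python
import numpy as np

def ssa_decompose(x, M):
    """Basic SSA: embed the series with window M, estimate the
    lag-covariance matrix, and return eigenvalues, EOFs and PCs."""
    N = len(x)
    Np = N - M + 1                    # N' = number of lagged windows
    # N' x M trajectory matrix: column j is x shifted by lag j
    D = np.column_stack([x[j:j + Np] for j in range(M)])
    C = D.T @ D / Np                  # M x M lag-covariance estimate
    eigvals, E = np.linalg.eigh(C)    # ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, E = eigvals[order], E[:, order]   # leading EOFs first
    A = D @ E                         # N' x M matrix of temporal PCs
    return eigvals, E, A

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20) + 0.1 * np.random.default_rng(2).standard_normal(200)
lam, E, A = ssa_decompose(x, M=40)
```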
An oscillatory mode is characterized by a pair of
nearly equal SSA eigenvalues and associated PCs that are in approximate phase quadrature (Ghil et al., 2002). Such a pair can represent efficiently a nonlinear, anharmonic oscillation. This is due to the fact that a single pair of data-adaptive SSA eigenmodes often will capture better the basic periodicity of an oscillatory mode than methods with fixedbasis functions, such as thesinesandcosinesused in theFourier transform.
The window widthM{\displaystyle M}determines the longest periodicity captured by SSA. Signal-to-noise separation can be obtained by merely inspecting the slope break in a "scree diagram" of eigenvaluesλk{\displaystyle \lambda _{k}}or singular valuesλk1/2{\displaystyle \lambda _{k}^{1/2}}vs.k{\displaystyle k}. The pointk∗=S{\displaystyle k^{*}=S}at which this break occurs should not be confused with a "dimension"D{\displaystyle D}of the underlying deterministic dynamics (Vautard and Ghil, 1989).
A Monte-Carlo test (Allen and Smith, 1996; Allen and Robertson, 1996;Grothand Ghil, 2015) can be applied to ascertain the statistical significance of the oscillatory pairs detected by SSA. The entire time series or parts of it that correspond to trends, oscillatory modes or noise can be reconstructed by using linear combinations of the PCs and EOFs, which provide the reconstructed components (RCs)RK{\displaystyle {\textbf {R}}_{K}}:
{\displaystyle R_{K}(t)={\frac {1}{M_{t}}}\sum _{k\in K}\sum _{j=L_{t}}^{U_{t}}A_{k}(t-j+1)\,E_{k}(j);}
hereK{\displaystyle K}is the set of EOFs on which the reconstruction is based. The values of the normalization factorMt{\displaystyle M_{t}}, as well as of the lower and upper bound of summationLt{\displaystyle L_{t}}andUt{\displaystyle U_{t}}, differ between the central part of the time series and the vicinity of its endpoints (Ghil et al., 2002).
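Reconstruction can be sketched via diagonal averaging of the rank-reduced trajectory matrix, the standard way of mapping it back to a series; the normalization factors are handled here by an explicit count array, and `reconstruct` is an illustrative helper, not a library function.

```python
import numpy as np

def reconstruct(x, M, K):
    """Reconstructed component of x from the EOFs indexed by the set K."""
    N = len(x)
    Np = N - M + 1
    D = np.column_stack([x[j:j + Np] for j in range(M)])  # trajectory matrix
    C = D.T @ D / Np
    eigvals, E = np.linalg.eigh(C)
    E = E[:, np.argsort(eigvals)[::-1]]       # leading EOFs first
    A = D @ E                                  # temporal PCs
    Dk = A[:, K] @ E[:, K].T                   # rank-|K| trajectory matrix
    # Diagonal averaging: average all entries of Dk that correspond
    # to the same time index t = i + j.
    rc = np.zeros(N); count = np.zeros(N)
    for j in range(M):
        rc[j:j + Np] += Dk[:, j]
        count[j:j + Np] += 1
    return rc / count

t = np.arange(300)
true_signal = np.sin(2 * np.pi * t / 25)
x = true_signal + 0.2 * np.random.default_rng(3).standard_normal(300)
smooth = reconstruct(x, M=50, K=[0, 1])   # leading pair captures the oscillation
```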
Multi-channel SSA (or M-SSA) is a natural extension of SSA to anL{\displaystyle L}-channel time series of vectors or maps withN{\displaystyle N}data points{Xl(t):l=1,…,L;t=1,…,N}{\displaystyle \{X_{l}(t):l=1,\dots ,L;t=1,\dots ,N\}}. In the meteorological literature, extended EOF (EEOF) analysis is often assumed to be synonymous with M-SSA. The two methods are both extensions of classicalprincipal component analysis (PCA)but they differ in emphasis: EEOF analysis typically utilizes a numberL{\displaystyle L}of spatial channels much greater than the numberM{\displaystyle M}of temporal lags, thus limiting the temporal and spectral information. In M-SSA, on the other hand, one usually choosesL≤M{\displaystyle L\leq M}. Often M-SSA is applied to a few leading PCs of the spatial data, withM{\displaystyle M}chosen large enough to extract detailed temporal and spectral information from the multivariate time series (Ghil et al., 2002). However, Groth and Ghil (2015) have demonstrated possible negative effects of this variance compression on the detection rate of weak signals when the numberL{\displaystyle L}of retained PCs becomes too small. This practice can further affect negatively the judicious reconstruction of the spatio-temporal patterns of such weak signals, and Groth et al. (2016) recommend retaining a maximum number of PCs, i.e.,L=N{\displaystyle L=N}.
Groth and Ghil (2011) have demonstrated that a classical M-SSA analysis suffers from a degeneracy problem, namely the EOFs do not separate well between distinct oscillations when the corresponding eigenvalues are similar in size. This problem is a shortcoming of principal component analysis in general, not just of M-SSA in particular. In order to reduce mixture effects and to improve the physical interpretation, Groth and Ghil (2011) have proposed a subsequentVARIMAX rotationof the spatio-temporal EOFs (ST-EOFs) of the M-SSA. To avoid a loss of spectral properties (Plaut and Vautard 1994), they have introduced a slight modification of thecommon VARIMAX rotationthat does take the spatio-temporal structure of ST-EOFs into account. Alternatively, a closed matrix formulation of the algorithm for the simultaneous rotation of the EOFs by iterative SVD decompositions has been proposed (Portes and Aguirre, 2016).
M-SSA has two forecasting approaches known as recurrent and vector. The discrepancies between these two approaches are attributable to the organization of the single trajectory matrixX{\displaystyle {\textbf {X}}}of each series into the block trajectory matrix in the multivariate case. Two trajectory matrices can be organized as either vertical (VMSSA) or horizontal (HMSSA) as was recently introduced in Hassani and Mahmoudvand (2013), and it was shown that these constructions lead to better forecasts. Accordingly, we have four different forecasting algorithms that can be exploited in this version of MSSA (Hassani and Mahmoudvand, 2013).
In this subsection, we focus on phenomena that exhibit a significant oscillatory component: repetition increases understanding and hence confidence in a prediction method that is closely connected with such understanding.
Singular spectrum analysis (SSA) and the maximum entropy method (MEM) have been combined to predict a variety of phenomena in meteorology, oceanography and climate dynamics (Ghil et al., 2002, and references therein). First, the “noise” is filtered out by projecting the time series onto a subset of leading EOFs obtained by SSA; the selected subset should include statistically significant, oscillatory modes. Experience shows that this approach works best when the partial variance associated with the pairs of RCs that capture these modes is large (Ghil and Jiang, 1998).
The prefiltered RCs are then extrapolated by least-squares fitting to anautoregressive modelAR[p]{\displaystyle AR[p]}, whose coefficients give the MEM spectrum of the remaining “signal”. Finally, the extended RCs are used in the SSA reconstruction process to produce the forecast values. The reason why this approach – via SSA prefiltering, AR extrapolation of the RCs, and SSA reconstruction – works better than the customary AR-based prediction is explained by the fact that the individual RCs are narrow-band signals, unlike the original, noisy time seriesX(t){\displaystyle X(t)}(Penland et al., 1991; Keppenne and Ghil, 1993). In fact, the optimal orderpobtained for the individual RCs is considerably lower than the one given by the standard Akaike information criterion (AIC) or similar ones.
The gap-filling version of SSA can be used to analyze data sets that areunevenly sampledor containmissing data(Kondrashov and Ghil, 2006; Kondrashov et al. 2010). For a univariate time series, the SSA gap filling procedure utilizes temporal correlations to fill in the missing points. For a multivariate data set, gap filling by M-SSA takes advantage of both spatial and temporal correlations. In either case: (i) estimates of missing data points are produced iteratively, and are then used to compute a self-consistent lag-covariance matrixCX{\displaystyle {\textbf {C}}_{X}}and its EOFsEk{\displaystyle {\textbf {E}}_{k}}; and (ii)cross-validationis used to optimize the window widthM{\displaystyle M}and the number of leading SSA modes to fill the gaps with the iteratively estimated "signal," while the noise is discarded.
The areas where SSA can be applied are very broad: climatology, marine science, geophysics, engineering, image processing, medicine, econometrics among them. Hence different modifications of SSA have been proposed and different methodologies of SSA are used in practical applications such astrendextraction,periodicitydetection,seasonal adjustment,smoothing,noise reduction(Golyandina, et al, 2001).
SSA can be used as a model-free technique so that it can be applied to arbitrary time series including non-stationary time series. The basic aim of SSA is to decompose the time series into the sum of interpretable components such as trend, periodic components and noise with no a-priori assumptions about the parametric form of these components.
Consider a real-valued time seriesX=(x1,…,xN){\displaystyle \mathbb {X} =(x_{1},\ldots ,x_{N})}of lengthN{\displaystyle N}. LetL{\displaystyle L}(1<L<N){\displaystyle \ (1<L<N)}be some integer called thewindow lengthandK=N−L+1{\displaystyle K=N-L+1}.
1st step: Embedding.
Form thetrajectory matrixof the seriesX{\displaystyle \mathbb {X} }, which is theL×K{\displaystyle L\!\times \!K}matrix
whereXi=(xi,…,xi+L−1)T(1≤i≤K){\displaystyle X_{i}=(x_{i},\ldots ,x_{i+L-1})^{\mathrm {T} }\;\quad (1\leq i\leq K)}arelagged vectorsof sizeL{\displaystyle L}. The matrixX{\displaystyle \mathbf {X} }is aHankel matrixwhich means thatX{\displaystyle \mathbf {X} }has equal elementsxij{\displaystyle x_{ij}}on the anti-diagonalsi+j=const{\displaystyle i+j=\,{\rm {const}}}.
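As a minimal sketch of the embedding step (illustrative only; NumPy-based, and the function name is ours), the trajectory matrix can be built by stacking the lagged vectors as columns:

```python
import numpy as np

def trajectory_matrix(x, L):
    """L x K trajectory (Hankel) matrix of a series x, with K = N - L + 1.
    Column i is the lagged vector (x_i, ..., x_{i+L-1})."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])
```

For the series (1, 2, 3, 4, 5) with L = 3 this yields a 3×3 matrix whose anti-diagonals are constant, as the Hankel property requires.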
2nd step:Singular Value Decomposition(SVD).
Perform the singular value decomposition (SVD) of the trajectory matrixX{\displaystyle \mathbf {X} }. SetS=XXT{\displaystyle \mathbf {S} =\mathbf {X} \mathbf {X} ^{\mathrm {T} }}and denote byλ1,…,λL{\displaystyle \lambda _{1},\ldots ,\lambda _{L}}theeigenvaluesofS{\displaystyle \mathbf {S} }taken in the decreasing order of magnitude (λ1≥…≥λL≥0{\displaystyle \lambda _{1}\geq \ldots \geq \lambda _{L}\geq 0}) and byU1,…,UL{\displaystyle U_{1},\ldots ,U_{L}}the orthonormal system of theeigenvectorsof the matrixS{\displaystyle \mathbf {S} }corresponding to these eigenvalues.
Setd=rankX=max{i,such thatλi>0}{\displaystyle d=\mathop {\mathrm {rank} } \mathbf {X} =\max\{i,\ {\mbox{such that}}\ \lambda _{i}>0\}}(note thatd=L{\displaystyle d=L}for a typical real-life series) andVi=XTUi/λi{\displaystyle V_{i}=\mathbf {X} ^{\mathrm {T} }U_{i}/{\sqrt {\lambda _{i}}}}(i=1,…,d){\displaystyle (i=1,\ldots ,d)}. In this notation, the SVD of the trajectory matrixX{\displaystyle \mathbf {X} }can be written as
where
are matrices having rank 1; these are calledelementary matrices. The collection(λi,Ui,Vi){\displaystyle ({\sqrt {\lambda _{i}}},U_{i},V_{i})}will be called thei{\displaystyle i}theigentriple(abbreviated as ET) of the SVD. VectorsUi{\displaystyle U_{i}}are the left singular vectors of the matrixX{\displaystyle \mathbf {X} }, numbersλi{\displaystyle {\sqrt {\lambda _{i}}}}are the singular values and provide the singular spectrum ofX{\displaystyle \mathbf {X} }; this gives the name to SSA. VectorsλiVi=XTUi{\displaystyle {\sqrt {\lambda _{i}}}V_{i}=\mathbf {X} ^{\mathrm {T} }U_{i}}are called vectors of principal components (PCs).
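The SVD step can be sketched as follows (an illustrative NumPy implementation following the definitions above; the function name and rank tolerance are ours). It computes the eigentriples from S = XXᵀ exactly as in the text:

```python
import numpy as np

def ssa_svd(X):
    """Eigentriples (sqrt(lam_i), U_i, V_i) of a trajectory matrix X,
    computed from S = X X^T with eigenvalues in decreasing order."""
    S = X @ X.T
    lam, U = np.linalg.eigh(S)               # ascending order
    order = np.argsort(lam)[::-1]
    lam, U = np.clip(lam[order], 0, None), U[:, order]
    d = int(np.sum(lam > 1e-12 * max(lam[0], 1.0)))   # numerical rank
    sigma = np.sqrt(lam[:d])
    V = (X.T @ U[:, :d]) / sigma             # V_i = X^T U_i / sqrt(lam_i)
    return sigma, U[:, :d], V
```

Summing the rank-1 elementary matrices sigma[i] * outer(U[:, i], V[:, i]) recovers X, which is the SVD expansion written above.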
3rd step: Eigentriple grouping.
Partition the set of indices{1,…,d}{\displaystyle \{1,\ldots ,d\}}intom{\displaystyle m}disjoint subsetsI1,…,Im{\displaystyle I_{1},\ldots ,I_{m}}.
LetI={i1,…,ip}{\displaystyle I=\{i_{1},\ldots ,i_{p}\}}. Then the resultant matrixXI{\displaystyle \mathbf {X} _{I}}corresponding to the groupI{\displaystyle I}is defined asXI=Xi1+…+Xip{\displaystyle \mathbf {X} _{I}=\mathbf {X} _{i_{1}}+\ldots +\mathbf {X} _{i_{p}}}. The resultant matrices are computed for the groupsI=I1,…,Im{\displaystyle I=I_{1},\ldots ,I_{m}}and the grouped SVD expansion ofX{\displaystyle \mathbf {X} }can now be written as
4th step: Diagonal averaging.
Each matrixXIj{\displaystyle \mathbf {X} _{I_{j}}}of the grouped decomposition is hankelized and then the obtainedHankel matrixis transformed into a new series of lengthN{\displaystyle N}using the one-to-one correspondence between Hankel matrices and time series.
Diagonal averaging applied to a resultant matrixXIk{\displaystyle \mathbf {X} _{I_{k}}}produces areconstructed seriesX~(k)=(x~1(k),…,x~N(k)){\displaystyle {\widetilde {\mathbb {X} }}^{(k)}=({\widetilde {x}}_{1}^{(k)},\ldots ,{\widetilde {x}}_{N}^{(k)})}. In this way, the initial seriesx1,…,xN{\displaystyle x_{1},\ldots ,x_{N}}is decomposed into a sum ofm{\displaystyle m}reconstructed subseries:
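The diagonal averaging step admits a direct sketch (illustrative NumPy code; the function name is ours): each entry of the resultant matrix contributes to the series value at position i + j, and the contributions are averaged:

```python
import numpy as np

def diagonal_averaging(Y):
    """Hankelize an L x K matrix: average along the anti-diagonals
    i + j = const to obtain a series of length N = L + K - 1."""
    L, K = Y.shape
    N = L + K - 1
    s = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):
        s[i:i + K] += Y[i, :]    # Y[i, j] contributes to position i + j
        cnt[i:i + K] += 1
    return s / cnt
```

Applied to a matrix that is already Hankel, this returns exactly the series it was built from, reflecting the one-to-one correspondence between Hankel matrices and time series.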
This decomposition is the main result of the SSA algorithm. The decomposition is meaningful if each reconstructed subseries can be classified as a part of either trend or some periodic component or noise.
The two main questions which the theory of SSA attempts to answer are: (a) what time series components can be separated by SSA, and (b) how to choose the window lengthL{\displaystyle L}and make proper grouping for extraction of a desirable component. Many theoretical results can be found in Golyandina et al. (2001, Ch. 1 and 6).
Trend (which is defined as a slowly varying component of the time series), periodic components and noise are asymptotically separable asN→∞{\displaystyle N\rightarrow \infty }. In practiceN{\displaystyle N}is fixed and one is interested in approximate separability between time series components. A number of indicators of approximate separability can be used, see Golyandina et al. (2001, Ch. 1). The window lengthL{\displaystyle L}determines the resolution of the method: larger values ofL{\displaystyle L}provide more refined decomposition into elementary components and therefore better separability. The window lengthL{\displaystyle L}determines the longest periodicity captured by SSA. Trends can be extracted by grouping of eigentriples with slowly varying eigenvectors. A sinusoid with frequency smaller than 0.5 produces two approximately equal eigenvalues and two sine-wave eigenvectors with the same frequencies andπ/2{\displaystyle \pi /2}-shifted phases.
Separation of two time series components can be considered as extraction of one component in the presence of perturbation by the other component. SSA perturbation theory is developed in Nekrutkin (2010) and Hassani et al. (2011).
If for some seriesX{\displaystyle \mathbb {X} }the SVD step in Basic SSA givesd<L{\displaystyle d<L}, then this series is calledtime series of rankd{\displaystyle d}(Golyandina et al., 2001, Ch.5). The subspace spanned by thed{\displaystyle d}leading eigenvectors is calledsignal subspace. This subspace is used for estimating the signal parameters insignal processing, e.g.ESPRITfor high-resolution frequency estimation. Also, this subspace determines thelinear homogeneous recurrence relation(LRR) governing the series, which can be used for forecasting. Continuation of the series by the LRR is similar to forwardlinear predictionin signal processing.
Let the series be governed by the minimal LRRxn=∑k=1dbkxn−k{\displaystyle x_{n}=\sum _{k=1}^{d}b_{k}x_{n-k}}. Let us chooseL>d{\displaystyle L>d},U1,…,Ud{\displaystyle U_{1},\ldots ,U_{d}}be the eigenvectors (left singular vectors of theL{\displaystyle L}-trajectory matrix), which are provided by the SVD step of SSA. Then this series is governed by an LRRxn=∑k=1L−1akxn−k{\displaystyle x_{n}=\sum _{k=1}^{L-1}a_{k}x_{n-k}}, where(aL−1,…,a1)T{\displaystyle (a_{L-1},\ldots ,a_{1})^{\mathrm {T} }}are expressed throughU1,…,Ud{\displaystyle U_{1},\ldots ,U_{d}}(Golyandina et al., 2001, Ch.5), and can be continued by the same LRR.
This provides the basis for SSA recurrent and vector forecasting algorithms (Golyandina et al., 2001, Ch.2). In practice, the signal is corrupted by a perturbation, e.g., by noise, and its subspace is estimated by SSA approximately. Thus, SSA forecasting can be applied for forecasting of a time series component that is approximately governed by an LRR and is approximately separated from the residual.
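A minimal sketch of recurrent SSA forecasting follows (illustrative NumPy code under simplifying assumptions: a noise-free series exactly governed by an LRR, signal dimension r chosen by the user). The coefficient formula used here, R = Σ πᵢ Uᵢ∇ / (1 − ν²) with πᵢ the last component of Uᵢ and ν² = Σ πᵢ², is the standard recurrent-forecasting construction in Golyandina et al. (2001, Ch. 2):

```python
import numpy as np

def ssa_recurrent_forecast(x, L, r, steps=1):
    """Sketch of SSA recurrent forecasting: estimate the signal subspace
    from the first r eigenvectors, derive the LRR coefficients R, and
    continue the series by x_{n} = R . (x_{n-L+1}, ..., x_{n-1})."""
    x = list(map(float, x))
    N = len(x)
    X = np.column_stack([x[i:i + L] for i in range(N - L + 1)])
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]
    pi = Ur[-1, :]                      # last components of the eigenvectors
    nu2 = float(pi @ pi)                # "verticality" coefficient
    R = Ur[:-1, :] @ pi / (1.0 - nu2)   # LRR coefficients
    for _ in range(steps):
        x.append(float(R @ x[-(L - 1):]))
    return x
```

For the geometric series (1, 2, 4, 8, 16, 32), governed by the rank-1 LRR x_n = 2x_{n−1}, the procedure continues it exactly.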
Multi-channel, Multivariate SSA (or M-SSA) is a natural extension of SSA for analyzing multivariate time series, where the size of different univariate series does not have to be the same. The trajectory matrix of multi-channel time series consists of linked trajectory matrices of separate time series. The rest of the algorithm is the same as in the univariate case. Systems of series can be forecasted analogously to the SSA recurrent and vector algorithms (Golyandina and Stepanov, 2005). MSSA has many applications. It is especially popular in analyzing and forecasting economic and financial time series with short and long series length (Patterson et al., 2011, Hassani et al., 2012, Hassani and Mahmoudvand, 2013).
Another multivariate extension is 2D-SSA, which can be applied to two-dimensional data like digital images (Golyandina and Usevich, 2010). The analogue of the trajectory matrix is constructed by moving 2D windows of sizeLx×Ly{\displaystyle L_{x}\times L_{y}}.
A question that frequently arises in time series analysis is whether one economic variable can help in predicting another. One way to address this question was proposed by Granger (1969), who formalized the causality concept. A comprehensive causality test based on MSSA has recently been introduced for causality measurement. The test is based on the forecasting accuracy and predictability of the direction of change of the MSSA algorithms (Hassani et al., 2011 and Hassani et al., 2012).
The MSSA forecasting results can be used in examining theefficient-market hypothesiscontroversy (EMH).
The EMH suggests that the information contained in the price series of an asset is reflected “instantly, fully, and perpetually” in the asset’s current price. Since the price series and the information contained in it are available to all market participants, no one can benefit by attempting to take advantage of the information contained in the price history of an asset by trading in the markets. This is evaluated using two series with different series length in a multivariate system in SSA analysis (Hassani et al. 2010).
Business cycles play a key role in macroeconomics, and are of interest to a variety of players in the economy, including central banks, policy-makers, and financial intermediaries. MSSA-based methods for tracking business cycles have been recently introduced, and have been shown to allow for a reliable assessment of the cyclical position of the economy in real-time (de Carvalho et al., 2012 and de Carvalho and Rua, 2017).
SSA's applicability to any kind of stationary or deterministically trending series has been extended to the case of a series with a stochastic trend, also known as a series with a unit root. In Hassani and Thomakos (2010) and Thomakos (2010) the basic theory on the properties and application of SSA in the case of series with a unit root is given, along with several examples. It is shown that SSA in such series produces a special kind of filter, whose form and spectral properties are derived, and that forecasting the single reconstructed component reduces to a moving average. SSA in unit roots thus provides an "optimizing" non-parametric framework for smoothing series with a unit root. This line of work is also extended to the case of two series, both of which have a unit root but are cointegrated. The application of SSA in this bivariate framework produces a smoothed series of the common root component.
The gap-filling versions of SSA can be used to analyze data sets that are unevenly sampled or containmissing data(Schoellhamer, 2001; Golyandina and Osipov, 2007).
Schoellhamer (2001) shows that the straightforward idea to formally calculate approximate inner products omitting unknown terms is workable for long stationary time series.
Golyandina and Osipov (2007) use the idea of filling in missing entries in vectors taken from the given subspace. The recurrent and vector SSA forecasting algorithms can be considered as particular cases of the filling-in algorithms described in the paper.
SSA can be effectively used as a non-parametric method of time series monitoring andchange detection. To do that, SSA performs the subspace tracking in the following way. SSA is applied sequentially to the initial parts of the series, constructs the corresponding signal subspaces and checks the distances between these subspaces and the lagged vectors formed from the few most recent observations. If these distances become too large, a structural change is suspected to have occurred in the series (Golyandina et al., 2001, Ch.3; Moskvina and Zhigljavsky, 2003).
In this way, SSA could be used forchange detectionnot only in trends but also in the variability of the series, in the mechanism that determines dependence between different series and even in the noise structure. The method has proved to be useful in different engineering problems (e.g. Mohammad and Nishida (2011) in robotics), and has been extended to the multivariate case with corresponding analysis of detection delay and false positive rate.[1]
|
https://en.wikipedia.org/wiki/Singular_spectrum_analysis
|
Sparse principal component analysis(SPCA or sparse PCA) is a technique used in statistical analysis and, in particular, in the analysis ofmultivariatedata sets. It extends the classic method ofprincipal component analysis(PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables.
A particular disadvantage of ordinary PCA is that the principal components are usually linear combinations of all input variables. SPCA overcomes this disadvantage by finding components that are linear combinations of just a few input variables (SPCs). This means that some of the coefficients of the linear combinations defining the SPCs, calledloadings,[note 1]are equal to zero. The number of nonzero loadings is called thecardinalityof the SPC.
Consider a datamatrix,X{\displaystyle X}, where each of thep{\displaystyle p}columns represents an input variable, and each of then{\displaystyle n}rows represents an independent sample from the data population. One assumes each column ofX{\displaystyle X}has mean zero; otherwise one can subtract the column-wise mean from each element ofX{\displaystyle X}.
LetΣ=1n−1X⊤X{\displaystyle \Sigma ={\frac {1}{n-1}}X^{\top }X}be the empiricalcovariance matrixofX{\displaystyle X}, which has dimensionp×p{\displaystyle p\times p}.
Given an integerk{\displaystyle k}with1≤k≤p{\displaystyle 1\leq k\leq p}, the sparse PCA problem can be formulated as maximizing the variance along a direction represented by vectorv∈Rp{\displaystyle v\in \mathbb {R} ^{p}}while constraining its cardinality:
The first constraint specifies thatvis a unit vector. In the second constraint,‖v‖0{\displaystyle \left\Vert v\right\Vert _{0}}represents theℓ0{\displaystyle \ell _{0}}pseudo-normofv, which is defined as the number of its non-zero components. So the second constraint specifies that the number of non-zero components invis less than or equal tok, which is typically an integer that is much smaller than dimensionp. The optimal value ofEq. 1is known as thek-sparse largesteigenvalue.
If one takesk=p, the problem reduces to the ordinaryPCA, and the optimal value becomes the largest eigenvalue of covariance matrixΣ.
After finding the optimal solutionv, one deflatesΣto obtain a new matrix
and iterate this process to obtain further principal components. However, unlike PCA, sparse PCA cannot guarantee that different principal components areorthogonal. In order to achieve orthogonality, additional constraints must be enforced.
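For small dimension p, the sparse PCA problem in Eq. 1 and the deflation step can be sketched by brute force (an illustrative NumPy implementation; the function names are ours, and the support enumeration is exponential in p, consistent with the NP-hardness discussed below):

```python
import numpy as np
from itertools import combinations

def k_sparse_leading(Sigma, k):
    """Exact k-sparse largest eigenvalue (Eq. 1) by enumerating all
    supports of size k; feasible only for small p."""
    p = Sigma.shape[0]
    best_val, best_v = -np.inf, None
    for S in combinations(range(p), k):
        lam, vecs = np.linalg.eigh(Sigma[np.ix_(S, S)])
        if lam[-1] > best_val:
            best_val = lam[-1]
            best_v = np.zeros(p)
            best_v[list(S)] = vecs[:, -1]
    return best_val, best_v

def deflate(Sigma, v):
    """Deflation step: remove the variance explained by direction v."""
    return Sigma - (v @ Sigma @ v) * np.outer(v, v)
```

With k = p this recovers the largest eigenvalue of Σ, matching the ordinary-PCA special case noted above.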
The following equivalent definition is in matrix form.
LetV{\displaystyle V}be ap×psymmetric matrix, one can rewrite the sparse PCA problem as
Tris thematrix trace, and‖V‖0{\displaystyle \Vert V\Vert _{0}}represents the number of non-zero elements in matrixV.
The last line specifies thatVhasmatrix rankone and ispositive semidefinite.
The last line means that one hasV=vvT{\displaystyle V=vv^{T}}, soEq. 2is equivalent toEq. 1.
Moreover, the rank constraint in this formulation is actually redundant, and therefore sparse PCA can be cast as the following mixed-integer semidefinite program[1]
Because of the cardinality constraint, the maximization problem is hard to solve exactly, especially when dimensionpis high. In fact, the sparse PCA problem inEq. 1isNP-hardin the strong sense.[2]
As with most sparse problems, variable selection in SPCA is a computationally intractable non-convex NP-hard problem,[3]therefore greedy sub-optimal algorithms are often employed to find solutions.
Note also that SPCA introduces hyperparameters quantifying in what capacity large parameter values are penalized.[4]These might needtuningto achieve satisfactory performance, thereby adding to the total computational cost.
Several alternative approaches (ofEq. 1) have been proposed, including
The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies are recently reviewed in a survey paper.[13]
It has been proposed that sparse PCA can be approximated bysemidefinite programming(SDP).[7]If one drops the rank constraint and relaxes the cardinality constraint by a 1-normconvexconstraint, one gets a semidefinite programming relaxation, which can be solved efficiently in polynomial time:
In the second constraint,1{\displaystyle \mathbf {1} }is ap×1vector of ones, and|V|is the matrix whose elements are the absolute values of the elements ofV.
The optimal solutionV{\displaystyle V}to the relaxed problemEq. 3is not guaranteed to have rank one. In that case,V{\displaystyle V}can be truncated to retain only the dominant eigenvector.
While the semidefinite program does not scale beyond n = 300 covariates, it has been shown that a second-order cone relaxation of the semidefinite relaxation is almost as tight and successfully solves problems with thousands of covariates.[14]
Suppose ordinary PCA is applied to a dataset where each input variable represents a different asset; it may generate principal components that are weighted combinations of all the assets. In contrast, sparse PCA would produce principal components that are weighted combinations of only a few input assets, so one can easily interpret their meaning. Furthermore, if one uses a trading strategy based on these principal components, fewer assets imply less transaction costs.
Consider a dataset where each input variable corresponds to a specific gene. Sparse PCA can produce a principal component that involves only a few genes, so researchers can focus on these specific genes for further analysis.
Contemporary datasets often have the number of input variables (p{\displaystyle p}) comparable with or even much larger than the number of samples (n{\displaystyle n}). It has been shown that ifp/n{\displaystyle p/n}does not converge to zero, the classical PCA is notconsistent. In other words, if we letk=p{\displaystyle k=p}inEq. 1, then
the optimal value does not converge to the largest eigenvalue of data population when the sample sizen→∞{\displaystyle n\rightarrow \infty }, and the optimal solution does not converge to the direction of maximum variance.
But sparse PCA can retain consistency even ifp≫n.{\displaystyle p\gg n.}
Thek-sparse largest eigenvalue (the optimal value ofEq. 1) can be used to discriminate an isometric model, where every direction has the same variance, from a spiked covariance model in high-dimensional setting.[15]Consider a hypothesis test where the null hypothesis specifies that dataX{\displaystyle X}are generated from a multivariate normal distribution with mean 0 and covariance equal to an identity matrix, and the alternative hypothesis specifies that dataX{\displaystyle X}is generated from a spiked model with signal strengthθ{\displaystyle \theta }:
wherev∈Rp{\displaystyle v\in \mathbb {R} ^{p}}has onlyknon-zero coordinates. The largestk-sparse eigenvalue can discriminate the two hypotheses if and only ifθ>Θ(klog(p)/n){\displaystyle \theta >\Theta ({\sqrt {k\log(p)/n}})}.
Since computing thek-sparse eigenvalue is NP-hard, one can approximate it by the optimal value of the semidefinite programming relaxation (Eq. 3). In that case, one can discriminate the two hypotheses ifθ>Θ(k2log(p)/n){\displaystyle \theta >\Theta ({\sqrt {k^{2}\log(p)/n}})}. The additionalk{\displaystyle {\sqrt {k}}}term cannot be improved by any other polynomial time algorithm if theplanted clique conjectureholds.
|
https://en.wikipedia.org/wiki/Sparse_PCA
|
Transform codingis a type ofdata compressionfor "natural" data likeaudiosignalsor photographicimages. The transformation is typically lossless (perfectly reversible) on its own but is used to enable better (more targeted)quantization, which then results in a lower quality copy of the original input (lossy compression).
In transform coding, knowledge of the application is used to choose information to discard, thereby lowering itsbandwidth. The remaining information can then be compressed via a variety of methods. When the output is decoded, the result may not be identical to the original input, but is expected to be close enough for the purpose of the application.
One of the most successful transform encoding systems is typically not referred to as such—the example beingNTSCcolortelevision. After an extensive series of studies in the 1950s,Alda Bedfordshowed that the human eye has high resolution only for black and white, somewhat less for "mid-range" colors like yellows and greens, and much less for colors on the end of the spectrum, reds and blues.
Using this knowledge allowedRCAto develop a system in which they discarded most of the blue signal after it comes from the camera, keeping most of the green and only some of the red; this ischroma subsamplingin theYIQcolor space.
The result is a signal with considerably less content, one that would fit within existing 6 MHz black-and-white signals as a phase modulated differential signal. The average TV displays the equivalent of 350 pixels on a line, but the TV signal contains enough information for only about 50 pixels of blue and perhaps 150 of red. This is not apparent to the viewer in most cases, as the eye makes little use of the "missing" information anyway.
The PAL and SECAM systems use nearly identical or very similar methods to transmit colour; in any case, both systems are subsampled.
The term is much more commonly used indigital mediaanddigital signal processing. The most widely used transform coding technique in this regard is thediscrete cosine transform(DCT),[1][2]proposed byNasir Ahmedin 1972,[3][4]and presented by Ahmed with T. Natarajan andK. R. Raoin 1974.[5]This DCT, in the context of the family of discrete cosine transforms, is the DCT-II. It is the basis for the commonJPEGimage compressionstandard,[6]which examines small blocks of the image and transforms them to thefrequency domainfor more efficient quantization (lossy) anddata compression. Invideo coding, theH.26xandMPEGstandards modify this DCT image compression technique across frames in a motion image usingmotion compensation, further reducing the size compared to a series of JPEGs.
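A toy 1-D version of this idea can be sketched as follows (illustrative NumPy code, not the JPEG pipeline; discarding small coefficients is a crude stand-in for the quantization step, and the function names are ours). It builds the orthonormal DCT-II matrix, transforms a block, keeps only the largest-magnitude coefficients, and inverse-transforms:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: coeffs = C @ x, inverse x = C.T @ coeffs."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def toy_transform_code(block, keep):
    """Transform, zero all but the `keep` largest-magnitude coefficients
    (a crude stand-in for quantization), then inverse-transform."""
    C = dct_matrix(len(block))
    coeffs = C @ np.asarray(block, dtype=float)
    drop = np.argsort(np.abs(coeffs))[:len(block) - keep]
    coeffs[drop] = 0.0
    return C.T @ coeffs
```

Because the DCT concentrates the energy of smooth signals in a few low-frequency coefficients, keeping only a few of them still reconstructs such a block closely.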
Inaudio coding, MPEG audio compression analyzes the transformed data according to apsychoacoustic modelthat describes the human ear's sensitivity to parts of the signal, similar to the TV model.MP3uses a hybrid coding algorithm, combining themodified discrete cosine transform(MDCT) andfast Fourier transform(FFT).[7]It was succeeded byAdvanced Audio Coding(AAC), which uses a pure MDCT algorithm to significantly improve compression efficiency.[8]
The basic process ofdigitizingan analog signal is a kind of transform coding that usessamplingin one or more domains as its transform.
|
https://en.wikipedia.org/wiki/Transform_coding
|
Weighted least squares(WLS), also known asweighted linear regression,[1][2]is a generalization ofordinary least squaresandlinear regressionin which knowledge of the unequalvarianceof observations (heteroscedasticity) is incorporated into the regression.
WLS is also a specialization ofgeneralized least squares, when all the off-diagonal entries of thecovariance matrixof the errors are null.
The fit of a model to a data point is measured by itsresidual,ri{\displaystyle r_{i}}, defined as the difference between a measured value of the dependent variable,yi{\displaystyle y_{i}}and the value predicted by the model,f(xi,β){\displaystyle f(x_{i},{\boldsymbol {\beta }})}:ri(β)=yi−f(xi,β).{\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f(x_{i},{\boldsymbol {\beta }}).}
If the errors are uncorrelated and have equal variance, then the functionS(β)=∑iri(β)2,{\displaystyle S({\boldsymbol {\beta }})=\sum _{i}r_{i}({\boldsymbol {\beta }})^{2},}is minimised atβ^{\displaystyle {\boldsymbol {\hat {\beta }}}}, such that∂S∂βj(β^)=0{\displaystyle {\frac {\partial S}{\partial \beta _{j}}}({\hat {\boldsymbol {\beta }}})=0}.
TheGauss–Markov theoremshows that, when this is so,β^{\displaystyle {\hat {\boldsymbol {\beta }}}}is abest linear unbiased estimator(BLUE). If, however, the measurements are uncorrelated but have different uncertainties, a modified approach might be adopted.Aitkenshowed that when a weighted sum of squared residuals is minimized,β^{\displaystyle {\hat {\boldsymbol {\beta }}}}is theBLUEif each weight is equal to the reciprocal of the variance of the measurementS=∑i=1nWiiri2,Wii=1σi2{\displaystyle {\begin{aligned}S&=\sum _{i=1}^{n}W_{ii}{r_{i}}^{2},&W_{ii}&={\frac {1}{{\sigma _{i}}^{2}}}\end{aligned}}}
The gradient equations for this sum of squares are−2∑iWii∂f(xi,β)∂βjri=0,j=1,…,m{\displaystyle -2\sum _{i}W_{ii}{\frac {\partial f(x_{i},{\boldsymbol {\beta }})}{\partial \beta _{j}}}r_{i}=0,\quad j=1,\ldots ,m}
which, in a linear least squares system give the modifiednormal equations,∑i=1n∑k=1mXijWiiXikβ^k=∑i=1nXijWiiyi,j=1,…,m.{\displaystyle \sum _{i=1}^{n}\sum _{k=1}^{m}X_{ij}W_{ii}X_{ik}{\hat {\beta }}_{k}=\sum _{i=1}^{n}X_{ij}W_{ii}y_{i},\quad j=1,\ldots ,m\,.}The matrixX{\displaystyle X}above is as defined in thecorresponding discussion of linear least squares.
When the observational errors are uncorrelated and theweight matrix,W=Ω−1, is diagonal, these may be written as(XTWX)β^=XTWy.{\displaystyle \mathbf {\left(X^{\textsf {T}}WX\right){\hat {\boldsymbol {\beta }}}=X^{\textsf {T}}Wy} .}
If the errors are correlated, the resulting estimator is theBLUEif the weight matrix is equal to the inverse of thevariance-covariance matrixof the observations.
When the errors are uncorrelated, it is convenient to simplify the calculations by factoring the weight matrix aswii=Wii{\displaystyle w_{ii}={\sqrt {W_{ii}}}}. The normal equations can then be written in the same form as ordinary least squares:(X′TX′)β^=X′Ty′{\displaystyle \mathbf {\left(X'^{\textsf {T}}X'\right){\hat {\boldsymbol {\beta }}}=X'^{\textsf {T}}y'} \,}
where we define the following scaled matrix and vector:X′=diag(w)X,y′=diag(w)y=y⊘σ.{\displaystyle {\begin{aligned}\mathbf {X'} &=\operatorname {diag} \left(\mathbf {w} \right)\mathbf {X} ,\\\mathbf {y'} &=\operatorname {diag} \left(\mathbf {w} \right)\mathbf {y} =\mathbf {y} \oslash \mathbf {\sigma } .\end{aligned}}}
This is a type ofwhitening transformation; the last expression involves anentrywise division.
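This equivalence can be checked numerically (an illustrative NumPy sketch; the function name and the small dataset are ours): scaling each row of X and y by √wᵢ and solving ordinary least squares gives the same estimate as the weighted normal equations:

```python
import numpy as np

def wls_via_whitening(X, y, w):
    """Weighted least squares via the whitening transformation:
    scale rows by sqrt(w_i), then solve ordinary least squares."""
    sw = np.sqrt(np.asarray(w, dtype=float))
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

For any diagonal weight matrix this agrees with solving (XᵀWX)β̂ = XᵀWy directly.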
Fornon-linear least squaressystems a similar argument shows that the normal equations should be modified as follows.(JTWJ)Δβ=JTWΔy.{\displaystyle \mathbf {\left(J^{\textsf {T}}WJ\right)\,{\boldsymbol {\Delta }}\beta =J^{\textsf {T}}W\,{\boldsymbol {\Delta }}y} .\,}
Note that for empirical tests, the appropriateWis not known for sure and must be estimated. For thisfeasible generalized least squares(FGLS) techniques may be used; in this case it is specialized for a diagonal covariance matrix, thus yielding a feasible weighted least squares solution.
If the uncertainty of the observations is not known from external sources, then the weights could be estimated from the given observations. This can be useful, for example, to identify outliers. After the outliers have been removed from the data set, the weights should be reset to one.[3]
In some cases the observations may be weighted—for example, they may not be equally reliable. In this case, one can minimize the weighted sum of squares:argminβ∑i=1nwi|yi−∑j=1mXijβj|2=argminβ‖W12(y−Xβ)‖2.{\displaystyle {\underset {\boldsymbol {\beta }}{\operatorname {arg\ min} }}\,\sum _{i=1}^{n}w_{i}\left|y_{i}-\sum _{j=1}^{m}X_{ij}\beta _{j}\right|^{2}={\underset {\boldsymbol {\beta }}{\operatorname {arg\ min} }}\,\left\|W^{\frac {1}{2}}\left(\mathbf {y} -X{\boldsymbol {\beta }}\right)\right\|^{2}.}wherewi> 0 is the weight of theith observation, andWis thediagonal matrixof such weights.
The weights should, ideally, be equal to thereciprocalof thevarianceof the measurement. (This implies that the observations are uncorrelated. If the observations arecorrelated, the expressionS=∑k∑jrkWkjrj{\textstyle S=\sum _{k}\sum _{j}r_{k}W_{kj}r_{j}\,}applies. In this case the weight matrix should ideally be equal to the inverse of thevariance-covariance matrixof the observations).[3]The normal equations are then:(XTWX)β^=XTWy.{\displaystyle \left(X^{\textsf {T}}WX\right){\hat {\boldsymbol {\beta }}}=X^{\textsf {T}}W\mathbf {y} .}
This method is used initeratively reweighted least squares.
The estimated parameter values are linear combinations of the observed valuesβ^=(XTWX)−1XTWy.{\displaystyle {\hat {\boldsymbol {\beta }}}=(X^{\textsf {T}}WX)^{-1}X^{\textsf {T}}W\mathbf {y} .}
Therefore, an expression for the estimatedvariance-covariance matrixof the parameter estimates can be obtained byerror propagationfrom the errors in the observations. Let the variance-covariance matrix for the observations be denoted byMand that of the estimated parameters byMβ. ThenMβ=(XTWX)−1XTWMWTX(XTWTX)−1.{\displaystyle M^{\beta }=\left(X^{\textsf {T}}WX\right)^{-1}X^{\textsf {T}}WMW^{\textsf {T}}X\left(X^{\textsf {T}}W^{\textsf {T}}X\right)^{-1}.}
WhenW=M−1, this simplifies toMβ=(XTWX)−1.{\displaystyle M^{\beta }=\left(X^{\textsf {T}}WX\right)^{-1}.}
When unit weights are used (W=I, theidentity matrix), it is implied that the experimental errors are uncorrelated and all equal:M=σ2I, whereσ2is thea priorivariance of an observation. In any case,σ2is approximated by thereduced chi-squaredχν2{\displaystyle \chi _{\nu }^{2}}:Mβ=χν2(XTWX)−1,χν2=S/ν,{\displaystyle {\begin{aligned}M^{\beta }&=\chi _{\nu }^{2}\left(X^{\textsf {T}}WX\right)^{-1},\\\chi _{\nu }^{2}&=S/\nu ,\end{aligned}}}
whereSis the minimum value of the weightedobjective function:S=rTWr=‖W12(y−Xβ^)‖2.{\displaystyle S=r^{\textsf {T}}Wr=\left\|W^{\frac {1}{2}}\left(\mathbf {y} -X{\hat {\boldsymbol {\beta }}}\right)\right\|^{2}.}
The denominator,ν=n−m{\displaystyle \nu =n-m}, is the number ofdegrees of freedom; seeeffective degrees of freedomfor generalizations for the case of correlated observations.
In all cases, thevarianceof the parameter estimateβ^i{\displaystyle {\hat {\beta }}_{i}}is given byMiiβ{\displaystyle M_{ii}^{\beta }}and thecovariancebetween the parameter estimatesβ^i{\displaystyle {\hat {\beta }}_{i}}andβ^j{\displaystyle {\hat {\beta }}_{j}}is given byMijβ{\displaystyle M_{ij}^{\beta }}. Thestandard deviationis the square root of variance,σi=Miiβ{\displaystyle \sigma _{i}={\sqrt {M_{ii}^{\beta }}}}, and the correlation coefficient is given byρij=Mijβ/(σiσj){\displaystyle \rho _{ij}=M_{ij}^{\beta }/(\sigma _{i}\sigma _{j})}. These error estimates reflect onlyrandom errorsin the measurements. The true uncertainty in the parameters is larger due to the presence ofsystematic errors, which, by definition, cannot be quantified.
Note that even though the observations may be uncorrelated, the parameters are typicallycorrelated.
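The quantities above can be computed directly. The following NumPy sketch (the function name `wls_parameter_stats` is illustrative, not from any particular library) forms the parameter estimates, the covariance matrix Mβ = χ²ν (XᵀWX)⁻¹, the standard deviations, and the correlation matrix:

```python
import numpy as np

def wls_parameter_stats(X, y, W):
    """Weighted least squares: estimates, covariance, standard errors.

    X : (n, m) design matrix, y : (n,) observations,
    W : (n, n) weight matrix (ideally the inverse of the
    observation variance-covariance matrix M).
    """
    XtW = X.T @ W
    unscaled = np.linalg.inv(XtW @ X)        # (X^T W X)^{-1}
    beta = unscaled @ XtW @ y                # parameter estimates
    r = y - X @ beta                         # residuals
    n, m = X.shape
    chi2_nu = (r @ W @ r) / (n - m)          # reduced chi-squared: S / nu
    M_beta = chi2_nu * unscaled              # M^beta = chi2_nu (X^T W X)^{-1}
    sigma = np.sqrt(np.diag(M_beta))         # standard deviations of estimates
    rho = M_beta / np.outer(sigma, sigma)    # correlation coefficients
    return beta, M_beta, sigma, rho
```

For a straight-line fit with positive abscissae, the returned correlation between intercept and slope comes out negative, illustrating that the parameters are correlated even when the observations are not.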
It is often assumed, for want of any concrete evidence but often appealing to the central limit theorem—see Normal distribution#Occurrence and applications—that the error on each observation belongs to a normal distribution with a mean of zero and standard deviation σ{\displaystyle \sigma }. Under that assumption the following probabilities can be derived for a single scalar parameter estimate in terms of its estimated standard error seβ{\displaystyle se_{\beta }}:
The assumption is not unreasonable when n ≫ m. If the experimental errors are normally distributed the parameters will belong to a Student's t-distribution with n − m degrees of freedom. When n ≫ m Student's t-distribution approximates a normal distribution. Note, however, that these confidence limits cannot take systematic error into account. Also, parameter errors should be quoted to one significant figure only, as they are subject to sampling error.[4]
When the number of observations is relatively small,Chebychev's inequalitycan be used for an upper bound on probabilities, regardless of any assumptions about the distribution of experimental errors: the maximum probabilities that a parameter will be more than 1, 2, or 3 standard deviations away from its expectation value are 100%, 25% and 11% respectively.
Theresidualsare related to the observations byr^=y−Xβ^=y−Hy=(I−H)y,{\displaystyle \mathbf {\hat {r}} =\mathbf {y} -X{\hat {\boldsymbol {\beta }}}=\mathbf {y} -H\mathbf {y} =(I-H)\mathbf {y} ,}
whereHis theidempotent matrixknown as thehat matrix:H=X(XTWX)−1XTW,{\displaystyle H=X\left(X^{\textsf {T}}WX\right)^{-1}X^{\textsf {T}}W,}
andIis theidentity matrix. The variance-covariance matrix of the residuals,Mris given byMr=(I−H)M(I−H)T.{\displaystyle M^{\mathbf {r} }=(I-H)M(I-H)^{\textsf {T}}.}
Thus the residuals are correlated, even if the observations are not.
WhenW=M−1{\displaystyle W=M^{-1}},Mr=(I−H)M.{\displaystyle M^{\mathbf {r} }=(I-H)M.}
The sum of weighted residual values is equal to zero whenever the model function contains a constant term. Left-multiply the expression for the residuals by XTW:XTWr^=XTWy−XTWXβ^=XTWy−(XTWX)(XTWX)−1XTWy=0.{\displaystyle X^{\textsf {T}}W{\hat {\mathbf {r} }}=X^{\textsf {T}}W\mathbf {y} -X^{\textsf {T}}WX{\hat {\boldsymbol {\beta }}}=X^{\textsf {T}}W\mathbf {y} -\left(X^{\textsf {T}}WX\right)\left(X^{\textsf {T}}WX\right)^{-1}X^{\textsf {T}}W\mathbf {y} =\mathbf {0} .}
Say, for example, that the first term of the model is a constant, so thatXi1=1{\displaystyle X_{i1}=1}for alli. In that case it follows that∑imXi1Wir^i=∑imWir^i=0.{\displaystyle \sum _{i}^{m}X_{i1}W_{i}{\hat {r}}_{i}=\sum _{i}^{m}W_{i}{\hat {r}}_{i}=0.}
Thus, in the motivational example, above, the fact that the sum of residual values is equal to zero is not accidental, but is a consequence of the presence of the constant term, α, in the model.
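This property is easy to verify numerically. The sketch below (simulated data, diagonal weights drawn at random) checks that XᵀWr̂ vanishes at the minimum, and that its first component is exactly the weighted residual sum ΣᵢWᵢr̂ᵢ when the first column of X is a constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant term + one regressor
y = 3.0 + 2.0 * X[:, 1] + rng.normal(size=n)
W = np.diag(rng.uniform(0.5, 2.0, size=n))             # diagonal weight matrix

beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # WLS estimate
r_hat = y - X @ beta_hat                               # residuals

# X^T W r_hat vanishes at the minimum ...
print(np.abs(X.T @ W @ r_hat).max())
# ... and its first row is the weighted residual sum, sum_i W_i r_hat_i:
print(np.diag(W) @ r_hat)
```

Both printed quantities are zero up to floating-point rounding.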
If experimental error follows anormal distribution, then, because of the linear relationship between residuals and observations, so should residuals,[5]but since the observations are only a sample of the population of all possible observations, the residuals should belong to aStudent's t-distribution.Studentized residualsare useful in making a statistical test for anoutlierwhen a particular residual appears to be excessively large.
|
https://en.wikipedia.org/wiki/Weighted_least_squares
|
Asigmoid functionis anymathematical functionwhosegraphhas a characteristic S-shaped orsigmoid curve.
A common example of a sigmoid function is the logistic function, which is defined by the formula[1]σ(x)=11+e−x.{\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}.}
Other sigmoid functions are given in theExamples section. In some fields, most notably in the context ofartificial neural networks, the term "sigmoid function" is used as a synonym for "logistic function".
Special cases of the sigmoid function include the Gompertz curve (used in modeling systems that saturate at large values of x) and the ogee curve (used in the spillway of some dams). Sigmoid functions have domain of all real numbers, with a return (response) value that is commonly monotonically increasing but can be decreasing. Sigmoid functions most often show a return value (y axis) in the range 0 to 1. Another commonly used range is from −1 to 1.
A wide variety of sigmoid functions including the logistic andhyperbolic tangentfunctions have been used as theactivation functionofartificial neurons. Sigmoid curves are also common in statistics ascumulative distribution functions(which go from 0 to 1), such as the integrals of thelogistic density, thenormal density, andStudent'stprobability density functions. The logistic sigmoid function is invertible, and its inverse is thelogitfunction.
A sigmoid function is a bounded, differentiable, real function that is defined for all real input values, has a positive derivative at each point,[1][2]and has exactly one inflection point.
In general, a sigmoid function ismonotonic, and has a firstderivativewhich isbell shaped. Conversely, theintegralof any continuous, non-negative, bell-shaped function (with one local maximum and no local minimum, unlessdegenerate) will be sigmoidal. Thus thecumulative distribution functionsfor many commonprobability distributionsare sigmoidal. One such example is theerror function, which is related to the cumulative distribution function of anormal distribution; another is thearctanfunction, which is related to the cumulative distribution function of aCauchy distribution.
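The claim that these cumulative distribution functions are sigmoidal can be checked directly. The sketch below compares the logistic function with the normal CDF (via the error function) and the Cauchy CDF (via arctan), verifying that each is monotonically increasing with midpoint 1/2:

```python
import numpy as np
from math import erf, atan, pi

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def normal_cdf(x):   # integral of the standard normal density
    return 0.5 * (1.0 + erf(x / 2 ** 0.5))

def cauchy_cdf(x):   # integral of the standard Cauchy density
    return 0.5 + atan(x) / pi

xs = np.linspace(-5.0, 5.0, 1001)
for name, F in [("logistic", logistic),
                ("normal CDF", np.vectorize(normal_cdf)),
                ("Cauchy CDF", np.vectorize(cauchy_cdf))]:
    ys = F(xs)
    print(name, ys[0], ys[-1])   # left end near 0, right end near 1
```

All three curves rise from near 0 to near 1; the Cauchy CDF approaches its asymptotes much more slowly, reflecting the distribution's heavy tails.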
A sigmoid function is constrained by a pair ofhorizontal asymptotesasx→±∞{\displaystyle x\rightarrow \pm \infty }.
A sigmoid function isconvexfor values less than a particular point, and it isconcavefor values greater than that point: in many of the examples here, that point is 0.
f(x)={21+e−2mx1−x2−1,|x|<1sgn(x)|x|≥1={tanh(mx1−x2),|x|<1sgn(x)|x|≥1{\displaystyle {\begin{aligned}f(x)&={\begin{cases}{\displaystyle {\frac {2}{1+e^{-2m{\frac {x}{1-x^{2}}}}}}-1},&|x|<1\\\\\operatorname {sgn}(x)&|x|\geq 1\\\end{cases}}\\&={\begin{cases}{\displaystyle \tanh \left(m{\frac {x}{1-x^{2}}}\right)},&|x|<1\\\\\operatorname {sgn}(x)&|x|\geq 1\\\end{cases}}\end{aligned}}}using the hyperbolic tangent mentioned above. Here,m{\displaystyle m}is a free parameter encoding the slope atx=0{\displaystyle x=0}, which must be greater than or equal to3{\displaystyle {\sqrt {3}}}because any smaller value will result in a function with multiple inflection points, which is therefore not a true sigmoid. This function is unusual because it actually attains the limiting values of −1 and 1 within a finite range, meaning that its value is constant at −1 for allx≤−1{\displaystyle x\leq -1}and at 1 for allx≥1{\displaystyle x\geq 1}. Nonetheless, it issmooth(infinitely differentiable,C∞{\displaystyle C^{\infty }})everywhere, including atx=±1{\displaystyle x=\pm 1}.
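A minimal NumPy implementation of this finite-support sigmoid (the function name is illustrative) makes its unusual behavior easy to check — it is exactly ±1 for |x| ≥ 1, zero at the origin, and monotone:

```python
import numpy as np

def finite_support_sigmoid(x, m=np.sqrt(3)):
    """Sigmoid that attains its limits -1 and 1 at x = -1 and x = 1.

    m is the slope at x = 0; values below sqrt(3) introduce extra
    inflection points, so the curve is then no longer a true sigmoid.
    Expects an array input.
    """
    x = np.asarray(x, dtype=float)
    out = np.sign(x)                       # constant -1 / +1 for |x| >= 1
    inside = np.abs(x) < 1.0
    xi = x[inside]
    out[inside] = np.tanh(m * xi / (1.0 - xi ** 2))
    return out
```

Because the inner argument x/(1 − x²) diverges as |x| → 1, tanh saturates and the two branches join smoothly at x = ±1.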
Many natural processes, such as those of complex systemlearning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used.[6]
Thevan Genuchten–Gupta modelis based on an inverted S-curve and applied to the response of crop yield tosoil salinity.
Examples of the application of the logistic S-curve to the response of crop yield (wheat) to both the soil salinity and depth towater tablein the soil are shown inmodeling crop response in agriculture.
Inartificial neural networks, sometimes non-smooth functions are used instead for efficiency; these are known ashard sigmoids.
Inaudio signal processing, sigmoid functions are used aswaveshapertransfer functionsto emulate the sound ofanalog circuitryclipping.[7]
Inbiochemistryandpharmacology, theHillandHill–Langmuir equationsare sigmoid functions.
In computer graphics and real-time rendering, some of the sigmoid functions are used to blend colors or geometry between two values, smoothly and without visible seams or discontinuities.
Titration curvesbetween strong acids and strong bases have a sigmoid shape due to the logarithmic nature of thepH scale.
The logistic function can be calculated efficiently by utilizingtype III Unums.[8]
A hierarchy of sigmoid growth models with increasing complexity (number of parameters) was built[9]with the primary goal of re-analyzing kinetic data, the so-called N–t curves, from heterogeneous nucleation experiments[10]in electrochemistry. The hierarchy at present includes three models, with 1, 2 and 3 parameters respectively (not counting the maximal number of nuclei Nmax): a tanh2-based model called α21,[11]originally devised to describe diffusion-limited crystal growth (not aggregation!) in 2D; the Johnson–Mehl–Avrami–Kolmogorov (JMAKn) model;[12]and the Richards model.[13]It was shown that for the concrete purpose even the simplest model works, and thus it was implied that the experiments revisited are an example of two-step nucleation, with the first step being the growth of the metastable phase in which the nuclei of the stable phase form.[9]
|
https://en.wikipedia.org/wiki/Sigmoid_function
|
In statistics, atobit modelis any of a class ofregression modelsin which the observed range of thedependent variableiscensoredin some way.[1]The term was coined byArthur Goldbergerin reference toJames Tobin,[2][a]who developed the model in 1958 to mitigate the problem ofzero-inflateddata for observations of household expenditure ondurable goods.[3][b]Because Tobin's method can be easily extended to handletruncatedand other non-randomly selected samples,[c]some authors adopt a broader definition of the tobit model that includes these cases.[4]
Tobin's idea was to modify thelikelihood functionso that it reflects the unequalsampling probabilityfor each observation depending on whether thelatent dependent variablefell above or below the determined threshold.[5]For a sample that, as in Tobin's original case, was censored from below at zero, the sampling probability for each non-limit observation is simply the height of the appropriatedensity function. For any limit observation, it is the cumulative distribution, i.e. theintegralbelow zero of the appropriate density function. The tobit likelihood function is thus a mixture of densities and cumulative distribution functions.[6]
Below are the likelihood and log likelihood functions for a type I tobit. This is a tobit that is censored from below at yL{\displaystyle y_{L}}when the latent variable yj∗≤yL{\displaystyle y_{j}^{*}\leq y_{L}}. In writing out the likelihood function, we first define an indicator function I{\displaystyle I}:I(yj)={0if yj≤yL,1if yj>yL.{\displaystyle I(y_{j})={\begin{cases}0&{\text{if }}y_{j}\leq y_{L},\\1&{\text{if }}y_{j}>y_{L}.\end{cases}}}
Next, let Φ{\displaystyle \Phi }be the standard normal cumulative distribution function and φ{\displaystyle \varphi }be the standard normal probability density function. For a data set with N observations the likelihood function for a type I tobit isL(β,σ)=∏j=1N(1σφ(yj−Xjβσ))I(yj)(1−Φ(Xjβ−yLσ))1−I(yj){\displaystyle {\mathcal {L}}(\beta ,\sigma )=\prod _{j=1}^{N}\left({\frac {1}{\sigma }}\varphi \left({\frac {y_{j}-X_{j}\beta }{\sigma }}\right)\right)^{I(y_{j})}\left(1-\Phi \left({\frac {X_{j}\beta -y_{L}}{\sigma }}\right)\right)^{1-I(y_{j})}}
and the log likelihood is given bylog⁡L(β,σ)=∑j=1NI(yj)log⁡(1σφ(yj−Xjβσ))+(1−I(yj))log⁡(1−Φ(Xjβ−yLσ)).{\displaystyle \log {\mathcal {L}}(\beta ,\sigma )=\sum _{j=1}^{N}I(y_{j})\log \left({\frac {1}{\sigma }}\varphi \left({\frac {y_{j}-X_{j}\beta }{\sigma }}\right)\right)+{\big (}1-I(y_{j}){\big )}\log \left(1-\Phi \left({\frac {X_{j}\beta -y_{L}}{\sigma }}\right)\right).}
The log-likelihood as stated above is not globally concave, which complicates the maximum likelihood estimation. Olsen suggested the simple reparametrization β=δ/γ{\displaystyle \beta =\delta /\gamma }and σ2=γ−2{\displaystyle \sigma ^{2}=\gamma ^{-2}}, resulting in a transformed log-likelihood,log⁡L(δ,γ)=∑j=1NI(yj)(log⁡γ+log⁡φ(γyj−Xjδ))+(1−I(yj))log⁡(1−Φ(Xjδ−γyL)),{\displaystyle \log {\mathcal {L}}(\delta ,\gamma )=\sum _{j=1}^{N}I(y_{j}){\big (}\log \gamma +\log \varphi (\gamma y_{j}-X_{j}\delta ){\big )}+{\big (}1-I(y_{j}){\big )}\log {\big (}1-\Phi (X_{j}\delta -\gamma y_{L}){\big )},}
which is globally concave in terms of the transformed parameters.[7]
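A minimal sketch of the type I tobit log-likelihood in Olsen's parametrization (δ = β/σ, γ = 1/σ; the function name is hypothetical, and Φ is built from the error function rather than taken from a statistics library):

```python
import numpy as np
from math import erf, log, pi, sqrt

def tobit1_loglik(delta, gamma, X, y, y_L=0.0):
    """Type I tobit log-likelihood, censored from below at y_L,
    in Olsen's parameters delta = beta/sigma, gamma = 1/sigma."""
    z = X @ delta
    ll = 0.0
    for yi, zi in zip(y, z):
        if yi > y_L:
            # non-limit observation: height of the normal density
            u = gamma * yi - zi
            ll += log(gamma) - 0.5 * log(2 * pi) - 0.5 * u * u
        else:
            # limit observation: cumulative probability below y_L
            u = gamma * y_L - zi
            ll += log(0.5 * (1.0 + erf(u / sqrt(2))))
    return ll
```

For a single uncensored observation y = 1 with Xδ = 0 and γ = 1 this reduces to the standard normal log-density at 1; for a limit observation at the threshold it reduces to log Φ(0) = log ½.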
For the truncated (tobit II) model, Orme showed that while the log-likelihood is not globally concave, it is concave at anystationary pointunder the above transformation.[8][9]
If the relationship parameterβ{\displaystyle \beta }is estimated by regressing the observedyi{\displaystyle y_{i}}onxi{\displaystyle x_{i}}, the resulting ordinaryleast squaresregression estimator isinconsistent. It will yield a downwards-biased estimate of the slope coefficient and an upward-biased estimate of the intercept.Takeshi Amemiya(1973) has proven that themaximum likelihood estimatorsuggested by Tobin for this model is consistent.[10]
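The inconsistency of OLS on censored data is easy to demonstrate by simulation. Below, the latent variable has intercept 1 and slope 2; after censoring at zero, the OLS slope is attenuated toward zero and the intercept is biased upward (the specific parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(size=n)
y_star = 1.0 + 2.0 * x + rng.normal(size=n)   # latent variable
y = np.maximum(y_star, 0.0)                   # observed: censored from below at 0

X = np.column_stack([np.ones(n), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
print(intercept, slope)   # intercept above 1, slope well below 2
```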
Theβ{\displaystyle \beta }coefficient should not be interpreted as the effect ofxi{\displaystyle x_{i}}onyi{\displaystyle y_{i}}, as one would with alinear regression model; this is a common error. Instead, it should be interpreted as the combination of
(1) the change inyi{\displaystyle y_{i}}of those above the limit, weighted by the probability of being above the limit; and
(2) the change in the probability of being above the limit, weighted by the expected value ofyi{\displaystyle y_{i}}if above.[11]
Variations of the tobit model can be produced by changing where and whencensoringoccurs.Amemiya (1985, p. 384) classifies these variations into five categories (tobit type I – tobit type V), where tobit type I stands for the first model described above. Schnedler (2005) provides a general formula to obtain consistent likelihood estimators for these and other variations of the tobit model.[12]
The tobit model is a special case of a censored regression model, because the latent variable yi∗{\displaystyle y_{i}^{*}}cannot always be observed while the independent variable xi{\displaystyle x_{i}}is observable. A common variation of the tobit model is censoring at a value yL{\displaystyle y_{L}}different from zero:yi={yi∗if yi∗>yL,yLif yi∗≤yL.{\displaystyle y_{i}={\begin{cases}y_{i}^{*}&{\text{if }}y_{i}^{*}>y_{L},\\y_{L}&{\text{if }}y_{i}^{*}\leq y_{L}.\end{cases}}}
Another example is censoring of values aboveyU{\displaystyle y_{U}}.
Yet another model results whenyi{\displaystyle y_{i}}is censored from above and below at the same time.
The rest of the models will be presented as being bounded from below at 0, though this can be generalized as done for Type I.
Type II tobit models introduce a second latent variable.[13]
In Type I tobit, the latent variable absorbs both the process of participation and the outcome of interest. Type II tobit allows the process of participation (selection) and the outcome of interest to be independent, conditional on observable data.
TheHeckman selection modelfalls into the Type II tobit,[14]which is sometimes called Heckit afterJames Heckman.[15]
Type III introduces a second observed dependent variable.
TheHeckmanmodel falls into this type.
Type IV introduces a third observed dependent variable and a third latent variable.
Similar to Type II, in Type V only the sign ofy1i∗{\displaystyle y_{1i}^{*}}is observed.
If the underlying latent variableyi∗{\displaystyle y_{i}^{*}}is not normally distributed, one must use quantiles instead of moments to analyze the
observable variableyi{\displaystyle y_{i}}.Powell's CLAD estimatoroffers a possible way to achieve this.[16]
Tobit models have, for example, been applied to estimate factors that impact grant receipt, including financial transfers distributed to sub-national governments who may apply for these grants. In these cases, grant recipients cannot receive negative amounts, and the data is thus left-censored. For instance, Dahlberg and Johansson (2002) analyse a sample of 115 municipalities (42 of which received a grant).[17]Dubois and Fattore (2011) use a tobit model to investigate the role of various factors in European Union fund receipt by applying Polish sub-national governments.[18]The data may however be left-censored at a point higher than zero, with the risk of mis-specification. Both studies apply Probit and other models to check for robustness. Tobit models have also been applied in demand analysis to accommodate observations with zero expenditures on some goods. In a related application of tobit models, a system of nonlinear tobit regressions models has been used to jointly estimate a brand demand system with homoscedastic, heteroscedastic and generalized heteroscedastic variants.[19]
|
https://en.wikipedia.org/wiki/Tobit_model
|
Alayerin a deep learning model is a structure ornetwork topologyin the model's architecture, which takes information from the previous layers and then passes it to the next layer.
The first type of layer is the Dense layer, also called the fully-connected layer,[1][2][3]which is used for abstract representations of input data. In this layer, neurons connect to every neuron in the preceding layer. In multilayer perceptron networks, these layers are stacked together.
TheConvolutional layer[4]is typically used for image analysis tasks. In this layer, the network detects edges, textures, and patterns. The outputs from this layer are then fed into a fully-connected layer for further processing. See also:CNNmodel.
ThePooling layer[5]is used to reduce the size of data input.
The Recurrent layer is used for text processing with a memory function. Similar to the Convolutional layer, the outputs of recurrent layers are usually fed into a fully-connected layer for further processing. See also: RNN model.[6][7][8]
TheNormalization layeradjusts the output data from previous layers to achieve a regular distribution. This results in improved scalability and model training.
AHidden layeris any of the layers in aNeural Networkthat aren't the input or output layers.
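A minimal NumPy sketch (not tied to any deep learning framework; all names are illustrative) shows how layers compose: a dense layer is a matrix multiply plus bias followed by an activation, a normalization layer rescales activations, and stacking them yields a small multilayer perceptron whose middle layer is a hidden layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b, activation=np.tanh):
    """Fully-connected (dense) layer: each output neuron sees every input."""
    return activation(x @ W + b)

def layer_norm(x, eps=1e-5):
    """Normalization layer: rescale activations to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Two stacked dense layers (the second one linear) with a
# normalization layer between them: 4 features -> 8 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)) * 0.5, np.zeros(2)

x = rng.normal(size=(3, 4))                      # batch of 3 examples
hidden = layer_norm(dense(x, W1, b1))            # hidden layer output
out = dense(hidden, W2, b2, activation=lambda z: z)
print(out.shape)                                 # batch of 3 two-dimensional outputs
```

Each layer takes the previous layer's output and passes its own output to the next, which is exactly the structural role described above.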
There is an intrinsic difference betweendeep learninglayering andneocortical layering: deep learning layering depends onnetwork topology, while neocortical layering depends on intra-layershomogeneity.
|
https://en.wikipedia.org/wiki/Layer_(deep_learning)
|
Alogistic functionorlogistic curveis a common S-shaped curve (sigmoid curve) with the equation
f(x)=L1+e−k(x−x0){\displaystyle f(x)={\frac {L}{1+e^{-k(x-x_{0})}}}}
where L{\displaystyle L}is the maximum value of the function (the curve's supremum), k{\displaystyle k}is the logistic growth rate or steepness of the curve, and x0{\displaystyle x_{0}}is the x value of the function's midpoint.
The logistic function has domain thereal numbers, the limit asx→−∞{\displaystyle x\to -\infty }is 0, and the limit asx→+∞{\displaystyle x\to +\infty }isL{\displaystyle L}.
Thestandard logistic function, depicted at right, whereL=1,k=1,x0=0{\displaystyle L=1,k=1,x_{0}=0}, has the equationf(x)=11+e−x{\displaystyle f(x)={\frac {1}{1+e^{-x}}}}and is sometimes simply calledthe sigmoid.[2]It is also sometimes called theexpit, being the inverse function of thelogit.[3][4]
The logistic function finds applications in a range of fields, includingbiology(especiallyecology),biomathematics,chemistry,demography,economics,geoscience,mathematical psychology,probability,sociology,political science,linguistics,statistics, andartificial neural networks. There are variousgeneralizations, depending on the field.
The logistic function was introduced in a series of three papers byPierre François Verhulstbetween 1838 and 1847, who devised it as a model ofpopulation growthby adjusting theexponential growthmodel, under the guidance ofAdolphe Quetelet.[5]Verhulst first devised the function in the mid 1830s, publishing a brief note in 1838,[1]then presented an expanded analysis and named the function in 1844 (published 1845);[a][6]the third paper adjusted the correction term in his model of Belgian population growth.[7]
The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth approaches the limit with an exponentially decaying gap, like the initial stage in reverse.
Verhulst did not explain the choice of the term "logistic" (French:logistique), but it is presumably in contrast to thelogarithmiccurve,[8][b]and by analogy with arithmetic and geometric. His growth model is preceded by a discussion ofarithmetic growthandgeometric growth(whose curve he calls alogarithmic curve, instead of the modern termexponential curve), and thus "logistic growth" is presumably named by analogy,logisticbeing fromAncient Greek:λογιστικός,romanized:logistikós, a traditional division ofGreek mathematics.[c]
As a word derived from ancient Greek mathematical terms,[9]the name of this function is unrelated to the military and management termlogistics, which is instead fromFrench:logis"lodgings",[10]though some believe the Greek term also influencedlogistics;[9]seeLogistics § Originfor details.
Thestandard logistic functionis the logistic function with parametersk=1{\displaystyle k=1},x0=0{\displaystyle x_{0}=0},L=1{\displaystyle L=1}, which yields
f(x)=11+e−x=exex+1=ex/2ex/2+e−x/2.{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {e^{x/2}}{e^{x/2}+e^{-x/2}}}.}
In practice, due to the nature of theexponential functione−x{\displaystyle e^{-x}}, it is often sufficient to compute the standard logistic function forx{\displaystyle x}over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1.
The logistic function has the symmetry property that
1−f(x)=f(−x).{\displaystyle 1-f(x)=f(-x).}
This reflects that the growth from 0 whenx{\displaystyle x}is small is symmetric with the decay of the gap to the limit (1) whenx{\displaystyle x}is large.
Further,x↦f(x)−1/2{\displaystyle x\mapsto f(x)-1/2}is anodd function.
The sum of the logistic function and its reflection about the vertical axis,f(−x){\displaystyle f(-x)}, is
11+e−x+11+e−(−x)=exex+1+1ex+1=1.{\displaystyle {\frac {1}{1+e^{-x}}}+{\frac {1}{1+e^{-(-x)}}}={\frac {e^{x}}{e^{x}+1}}+{\frac {1}{e^{x}+1}}=1.}
The logistic function is thus rotationally symmetrical about the point (0, 1/2).[11]
The logistic function is the inverse of the naturallogitfunction
logitp=logp1−pfor0<p<1{\displaystyle \operatorname {logit} p=\log {\frac {p}{1-p}}\quad {\text{ for }}\,0<p<1}
and so converts the logarithm ofoddsinto aprobability. The conversion from thelog-likelihood ratioof two alternatives also takes the form of a logistic curve.
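This inverse relationship, together with the symmetry 1 − f(x) = f(−x), can be verified numerically (the names `expit` and `logit` follow common statistical usage):

```python
import numpy as np

def expit(x):
    """Standard logistic function: log-odds -> probability."""
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    """Its inverse: probability -> log-odds."""
    return np.log(p / (1.0 - p))

x = np.linspace(-6.0, 6.0, 101)
p = expit(x)
# round trip: logit undoes expit ...
print(np.abs(logit(p) - x).max())
# ... and the symmetry 1 - f(x) = f(-x):
print(np.abs((1.0 - p) - expit(-x)).max())
```

Both printed maxima are zero up to floating-point rounding.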
The logistic function is an offset and scaledhyperbolic tangentfunction:f(x)=12+12tanh(x2),{\displaystyle f(x)={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right),}ortanh(x)=2f(2x)−1.{\displaystyle \tanh(x)=2f(2x)-1.}
This follows fromtanh(x)=ex−e−xex+e−x=ex⋅(1−e−2x)ex⋅(1+e−2x)=f(2x)−e−2x1+e−2x=f(2x)−e−2x+1−11+e−2x=2f(2x)−1.{\displaystyle {\begin{aligned}\tanh(x)&={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{x}\cdot \left(1-e^{-2x}\right)}{e^{x}\cdot \left(1+e^{-2x}\right)}}\\&=f(2x)-{\frac {e^{-2x}}{1+e^{-2x}}}=f(2x)-{\frac {e^{-2x}+1-1}{1+e^{-2x}}}=2f(2x)-1.\end{aligned}}}
The hyperbolic-tangent relationship leads to another form for the logistic function's derivative:
ddxf(x)=14sech2(x2),{\displaystyle {\frac {d}{dx}}f(x)={\frac {1}{4}}\operatorname {sech} ^{2}\left({\frac {x}{2}}\right),}
which ties the logistic function into thelogistic distribution.
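The three identities above — logistic as shifted tanh, tanh as rescaled logistic, and the sech² form of the derivative — can all be checked on a grid:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
f = 1.0 / (1.0 + np.exp(-x))               # logistic function

# logistic as a shifted, scaled hyperbolic tangent: f(x) = 1/2 + (1/2) tanh(x/2)
tanh_form = 0.5 + 0.5 * np.tanh(x / 2.0)
# tanh as a rescaled logistic: tanh(x) = 2 f(2x) - 1
f_2x = 1.0 / (1.0 + np.exp(-2.0 * x))
# derivative form: f'(x) = (1/4) sech^2(x/2), which equals f(x)(1 - f(x))
sech2 = 1.0 / np.cosh(x / 2.0) ** 2

print(np.abs(f - tanh_form).max())
print(np.abs(np.tanh(x) - (2.0 * f_2x - 1.0)).max())
print(np.abs(0.25 * sech2 - f * (1.0 - f)).max())
```

All three maxima vanish up to floating-point rounding.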
Geometrically, the hyperbolic tangent function is thehyperbolic angleon theunit hyperbolax2−y2=1{\displaystyle x^{2}-y^{2}=1}, which factors as(x+y)(x−y)=1{\displaystyle (x+y)(x-y)=1}, and thus has asymptotes the lines through the origin with slope−1{\displaystyle -1}and with slope1{\displaystyle 1}, and vertex at(1,0){\displaystyle (1,0)}corresponding to the range and midpoint (1{\displaystyle {1}}) of tanh. Analogously, the logistic function can be viewed as the hyperbolic angle on the hyperbolaxy−y2=1{\displaystyle xy-y^{2}=1}, which factors asy(x−y)=1{\displaystyle y(x-y)=1}, and thus has asymptotes the lines through the origin with slope0{\displaystyle 0}and with slope1{\displaystyle 1}, and vertex at(2,1){\displaystyle (2,1)}, corresponding to the range and midpoint (1/2{\displaystyle 1/2}) of the logistic function.
Parametrically,hyperbolic cosineandhyperbolic sinegive coordinates on the unit hyperbola:[d]((et+e−t)/2,(et−e−t)/2){\displaystyle \left((e^{t}+e^{-t})/2,(e^{t}-e^{-t})/2\right)}, with quotient the hyperbolic tangent. Similarly,(et/2+e−t/2,et/2){\displaystyle {\bigl (}e^{t/2}+e^{-t/2},e^{t/2}{\bigr )}}parametrizes the hyperbolaxy−y2=1{\displaystyle xy-y^{2}=1}, with quotient the logistic function. These correspond tolinear transformations(and rescaling the parametrization) ofthe hyperbolaxy=1{\displaystyle xy=1}, with parametrization(e−t,et){\displaystyle (e^{-t},e^{t})}: the parametrization of the hyperbola for the logistic function corresponds tot/2{\displaystyle t/2}and the linear transformation(1101){\displaystyle {\bigl (}{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}{\bigr )}}, while the parametrization of the unit hyperbola (for the hyperbolic tangent) corresponds to the linear transformation12(11−11){\displaystyle {\tfrac {1}{2}}{\bigl (}{\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}}{\bigr )}}.
The standard logistic function has an easily calculatedderivative. The derivative is known as the density of thelogistic distribution:
f(x)=11+e−x=ex1+ex,{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}},}
ddxf(x)=ex⋅(1+ex)−ex⋅ex(1+ex)2=ex(1+ex)2=(ex1+ex)(11+ex)=(ex1+ex)(1−ex1+ex)=f(x)(1−f(x)){\displaystyle {\begin{aligned}{\frac {d}{dx}}f(x)&={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&={\frac {e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left({\frac {1}{1+e^{x}}}\right)\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left(1-{\frac {e^{x}}{1+e^{x}}}\right)\\[1.2ex]&=f(x)\left(1-f(x)\right)\end{aligned}}}from which all higher derivatives can be derived algebraically. For example,f″=(1−2f)(1−f)f{\displaystyle f''=(1-2f)(1-f)f}.
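The identities f′ = f(1 − f) and f″ = (1 − 2f)(1 − f)f can be confirmed against finite differences:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + np.exp(-x))
x = np.linspace(-5.0, 5.0, 101)

h = 1e-6
numeric_d1 = (f(x + h) - f(x - h)) / (2.0 * h)   # central difference
exact_d1 = f(x) * (1.0 - f(x))                   # f' = f(1 - f)

h2 = 1e-4
numeric_d2 = (f(x + h2) - 2.0 * f(x) + f(x - h2)) / h2 ** 2
exact_d2 = (1.0 - 2.0 * f(x)) * (1.0 - f(x)) * f(x)   # f'' = (1 - 2f)(1 - f)f

print(np.abs(numeric_d1 - exact_d1).max())   # small
print(np.abs(numeric_d2 - exact_d2).max())   # small
```

Note that f″ vanishes where f = 1/2, i.e. at the inflection point x = 0, where f′ attains its maximum value 1/4.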
The logistic distribution is alocation–scale family, which corresponds to parameters of the logistic function. IfL=1{\displaystyle L=1}is fixed, then the midpointx0{\displaystyle x_{0}}is the location and the slopek{\displaystyle k}is the scale.
Conversely, itsantiderivativecan be computed by thesubstitutionu=1+ex{\displaystyle u=1+e^{x}}, since
f(x)=ex1+ex=u′u,{\displaystyle f(x)={\frac {e^{x}}{1+e^{x}}}={\frac {u'}{u}},}
so (dropping theconstant of integration)
∫ex1+exdx=∫1udu=lnu=ln(1+ex).{\displaystyle \int {\frac {e^{x}}{1+e^{x}}}\,dx=\int {\frac {1}{u}}\,du=\ln u=\ln(1+e^{x}).}
Inartificial neural networks, this is known as thesoftplusfunction and (with scaling) is a smooth approximation of theramp function, just as the logistic function (with scaling) is a smooth approximation of theHeaviside step function.
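The antiderivative claim can be checked by comparing a numerical integral of the logistic function against the softplus difference ln(1 + eᵇ) − ln 2:

```python
import numpy as np

logistic = lambda x: 1.0 / (1.0 + np.exp(-x))
softplus = lambda x: np.log1p(np.exp(x))   # ln(1 + e^x), the antiderivative

# trapezoidal rule for the integral of the logistic from 0 to b
b = 3.0
xs = np.linspace(0.0, b, 100_001)
ys = logistic(xs)
integral = float(np.sum((ys[1:] + ys[:-1]) * np.diff(xs)) / 2.0)

print(integral, float(softplus(b) - np.log(2.0)))   # the two agree
```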
The standard logistic function isanalyticon the whole real line sincef:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} },f(x)=11+e−x=h(g(x)){\displaystyle f(x)={\frac {1}{1+e^{-x}}}=h(g(x))}whereg:R→R{\displaystyle g:\mathbb {R} \to \mathbb {R} },g(x)=1+e−x{\displaystyle g(x)=1+e^{-x}}andh:(0,∞)→(0,∞){\displaystyle h:(0,\infty )\to (0,\infty )},h(x)=1x{\displaystyle h(x)={\frac {1}{x}}}are analytic on their domains, and the composition of analytic functions is again analytic.
A formula for thenth derivative of the standard logistic function is
dnfdxn=∑i=1n(∑j=1n(−1)n+j(ij)jn)e−ix(1+e−x)i+1{\displaystyle {\frac {d^{n}f}{dx^{n}}}=\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{n+j}{\binom {i}{j}}j^{n}\right)e^{-ix}}{{\left(1+e^{-x}\right)}^{i+1}}}}
therefore itsTaylor seriesabout the pointa{\displaystyle a}is
therefore its Taylor series about the point a{\displaystyle a}isf(x)=f(a)+∑n=1∞∑i=1n(∑j=1n(−1)n+j(ij)jn)e−ia(1+e−a)i+1(x−a)nn!.{\displaystyle f(x)=f(a)+\sum _{n=1}^{\infty }\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{n+j}{\binom {i}{j}}j^{n}\right)e^{-ia}}{{\left(1+e^{-a}\right)}^{i+1}}}{\frac {{\left(x-a\right)}^{n}}{n!}}.}
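The closed-form nth derivative can be cross-checked against an independent computation that represents f⁽ⁿ⁾ as a polynomial in f and differentiates repeatedly using f′ = f(1 − f). The sketch below uses the sign convention (−1)ⁿ⁺ʲ, which is the one consistent with f′ = f(1 − f) (both function names are illustrative):

```python
from math import comb, exp

def logistic_nth_derivative(x, n):
    """Closed form: sum_i [sum_j (-1)^(n+j) C(i,j) j^n] e^(-ix) / (1+e^(-x))^(i+1)."""
    u = exp(-x)
    total = 0.0
    for i in range(1, n + 1):
        a_i = sum((-1) ** (n + j) * comb(i, j) * j ** n for j in range(1, i + 1))
        total += a_i * u ** i / (1.0 + u) ** (i + 1)
    return total

def logistic_nth_derivative_recursive(x, n):
    """Reference: represent f^(n) as a polynomial in f and apply f' = f(1-f)."""
    coeffs = {1: 1.0}                       # f^(0) = f
    for _ in range(n):
        nxt = {}
        for k, c in coeffs.items():         # d/dx f^k = k f^k - k f^(k+1)
            nxt[k] = nxt.get(k, 0.0) + c * k
            nxt[k + 1] = nxt.get(k + 1, 0.0) - c * k
        coeffs = nxt
    f = 1.0 / (1.0 + exp(-x))
    return sum(c * f ** k for k, c in coeffs.items())
```

At x = 0 the first derivative is 1/4 and the second derivative is 0, matching the inflection point of the standard logistic function.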
The unique standard logistic function is the solution of the simple first-order non-linearordinary differential equation
ddxf(x)=f(x)(1−f(x)){\displaystyle {\frac {d}{dx}}f(x)=f(x){\big (}1-f(x){\big )}}
with boundary condition f(0)=1/2{\displaystyle f(0)=1/2}. This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is a solution to a simple first-order linear ordinary differential equation.[12]
The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 0 or 1; and the derivative is positive for f{\displaystyle f}between 0 and 1, and negative for f{\displaystyle f}above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus for any function value greater than 0 and less than 1, it grows to 1.
The logistic equation is a special case of theBernoulli differential equationand has the following solution:
f(x)=exex+C.{\displaystyle f(x)={\frac {e^{x}}{e^{x}+C}}.}
Choosing the constant of integrationC=1{\displaystyle C=1}gives the other well known form of the definition of the logistic curve:
f(x)=exex+1=11+e−x.{\displaystyle f(x)={\frac {e^{x}}{e^{x}+1}}={\frac {1}{1+e^{-x}}}.}
More quantitatively, as can be seen from the analytical solution, the logistic curve shows earlyexponential growthfor negative argument, which reaches to linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap.
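The differential equation can also be integrated numerically and compared with the closed form. A classical fourth-order Runge–Kutta sketch (step size and endpoint are arbitrary choices):

```python
from math import exp

def rk4_logistic(x_end=4.0, h=0.01):
    """Integrate f' = f(1 - f), f(0) = 1/2, with classical Runge-Kutta."""
    g = lambda f: f * (1.0 - f)      # right-hand side of the logistic equation
    f, x = 0.5, 0.0
    while x < x_end - 1e-12:
        k1 = g(f)
        k2 = g(f + 0.5 * h * k1)
        k3 = g(f + 0.5 * h * k2)
        k4 = g(f + h * k3)
        f += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return f

# numerical solution at x = 4 versus the closed form 1/(1 + e^(-4))
print(rk4_logistic(), 1.0 / (1.0 + exp(-4.0)))
```

The two values agree to high accuracy, confirming that 1/(1 + e⁻ˣ) solves the initial value problem.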
The differential equation derived above is a special case of a general differential equation that only models the sigmoid function forx>0{\displaystyle x>0}. In many modeling applications, the moregeneral form[13]df(x)dx=kLf(x)(L−f(x)),f(0)=L1+ekx0{\displaystyle {\frac {df(x)}{dx}}={\frac {k}{L}}f(x){\big (}L-f(x){\big )},\quad f(0)={\frac {L}{1+e^{kx_{0}}}}}can be desirable. Its solution is the shifted and scaledsigmoid functionLσ(k(x−x0))=L1+e−k(x−x0){\displaystyle L\sigma {\big (}k(x-x_{0}){\big )}={\frac {L}{1+e^{-k(x-x_{0})}}}}.
When the capacityL=1{\displaystyle L=1}, the value of the logistic function is in the range(0,1){\displaystyle (0,1)}and can be interpreted as a probabilityp.[e]In more detail,pcan be interpreted as the probability of one of two alternatives (the parameter of aBernoulli distribution);[f]the two alternatives are complementary, so the probability of the other alternative isq=1−p{\displaystyle q=1-p}andp+q=1{\displaystyle p+q=1}. The two alternatives are coded as 1 and 0, corresponding to the limiting values asx→±∞{\displaystyle x\to \pm \infty }.
In this interpretation the inputxis thelog-oddsfor the first alternative (relative to the other alternative), measured in "logistic units" (orlogits),ex{\displaystyle e^{x}}is theoddsfor the first event (relative to the second), and, recalling that given odds ofO=O:1{\displaystyle O=O:1}for (O{\displaystyle O}against1), the probability is the ratio of for over (for plus against),O/(O+1){\displaystyle O/(O+1)}, we see thatex/(ex+1)=1/(1+e−x)=p{\displaystyle e^{x}/(e^{x}+1)=1/(1+e^{-x})=p}is the probability of the first alternative. Conversely,xis the log-oddsagainstthe second alternative,−x{\displaystyle -x}is the log-oddsforthe second alternative,e−x{\displaystyle e^{-x}}is the odds for the second alternative, ande−x/(e−x+1)=1/(1+ex)=q{\displaystyle e^{-x}/(e^{-x}+1)=1/(1+e^{x})=q}is the probability of the second alternative.
This can be framed more symmetrically in terms of two inputs,x0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, which then generalizes naturally to more than two alternatives. Given two real number inputs,x0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, interpreted as logits, theirdifferencex1−x0{\displaystyle x_{1}-x_{0}}is the log-odds for option 1 (the log-oddsagainstoption 0),ex1−x0{\displaystyle e^{x_{1}-x_{0}}}is the odds,ex1−x0/(ex1−x0+1)=1/(1+e−(x1−x0))=ex1/(ex0+ex1){\displaystyle e^{x_{1}-x_{0}}/(e^{x_{1}-x_{0}}+1)=1/\left(1+e^{-(x_{1}-x_{0})}\right)=e^{x_{1}}/(e^{x_{0}}+e^{x_{1}})}is the probability of option 1, and similarlyex0/(ex0+ex1){\displaystyle e^{x_{0}}/(e^{x_{0}}+e^{x_{1}})}is the probability of option 0.
This form immediately generalizes to more alternatives as thesoftmax function, which is a vector-valued function whosei-th coordinate isexi/∑i=0nexi{\textstyle e^{x_{i}}/\sum _{i=0}^{n}e^{x_{i}}}.
More subtly, the symmetric form emphasizes interpreting the inputxasx1−x0{\displaystyle x_{1}-x_{0}}and thusrelativeto some reference point, implicitly tox0=0{\displaystyle x_{0}=0}. Notably, the softmax function is invariant under adding a constant to all the logitsxi{\displaystyle x_{i}}, which corresponds to the differencexj−xi{\displaystyle x_{j}-x_{i}}being the log-odds for optionjagainst optioni, but the individual logitsxi{\displaystyle x_{i}}not being log-odds on their own. Often one of the options is used as a reference ("pivot"), and its value fixed as0, so the other logits are interpreted as odds versus this reference. This is generally done with the first alternative, hence the choice of numbering:x0=0{\displaystyle x_{0}=0}, and thenxi=xi−x0{\displaystyle x_{i}=x_{i}-x_{0}}is the log-odds for optioniagainst option0. Sincee0=1{\displaystyle e^{0}=1}, this yields the+1{\displaystyle +1}term in many expressions for the logistic function and generalizations.[g]
In growth modeling, numerous generalizations exist, including thegeneralized logistic curve, theGompertz function, thecumulative distribution functionof theshifted Gompertz distribution, and thehyperbolastic function of type I.
In statistics, where the logistic function is interpreted as the probability of one of two alternatives, the generalization to three or more alternatives is thesoftmax function, which is vector-valued, as it gives the probability of each alternative.
A typical application of the logistic equation is a common model ofpopulation growth(see alsopopulation dynamics), originally due toPierre-François Verhulstin 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had readThomas Malthus'An Essay on the Principle of Population, which describes theMalthusian growth modelof simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of abiologicalpopulation. The equation was rediscovered in 1911 byA. G. McKendrickfor the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation.[14]The equation is also sometimes called theVerhulst–Pearl equationfollowing its rediscovery in 1920 byRaymond Pearl(1879–1940) andLowell Reed(1888–1966) of theJohns Hopkins University.[15]Another scientist,Alfred J. Lotka, derived the equation again in 1925, calling it thelaw of population growth.
LettingP{\displaystyle P}represent population size (N{\displaystyle N}is often used in ecology instead) andt{\displaystyle t}represent time, this model is formalized by thedifferential equation:
dPdt=rP(1−PK),{\displaystyle {\frac {dP}{dt}}=rP\left(1-{\frac {P}{K}}\right),}
where the constantr{\displaystyle r}defines thegrowth rateandK{\displaystyle K}is thecarrying capacity.
In the equation, the early, unimpeded growth rate is modeled by the first term+rP{\displaystyle +rP}. The value of the rater{\displaystyle r}represents the proportional increase of the populationP{\displaystyle P}in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is−rP2/K{\displaystyle -rP^{2}/K}) becomes almost as large as the first, as some members of the populationP{\displaystyle P}interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called thebottleneck, and is modeled by the value of the parameterK{\displaystyle K}. The competition diminishes the combined growth rate, until the value ofP{\displaystyle P}ceases to grow (this is calledmaturityof the population).
The solution to the equation (withP0{\displaystyle P_{0}}being the initial population) is
P(t)=KP0ertK+P0(ert−1)=K1+(K−P0P0)e−rt,{\displaystyle P(t)={\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}={\frac {K}{1+\left({\frac {K-P_{0}}{P_{0}}}\right)e^{-rt}}},}
where
limt→∞P(t)=K,{\displaystyle \lim _{t\to \infty }P(t)=K,}
that is,K{\displaystyle K}is the limiting value ofP{\displaystyle P}, the highest value that the population can reach given infinite time (or come close to reaching in finite time). The carrying capacity is asymptotically reached independently of the initial valueP(0)>0{\displaystyle P(0)>0}, and also in the case thatP(0)>K{\displaystyle P(0)>K}.
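The closed-form solution can be checked directly against the differential equation. A small sketch, with illustrative (not sourced) parameter values:

```python
import math

def logistic_growth(t, P0, r, K):
    # Closed-form solution P(t) of dP/dt = r P (1 - P/K) with P(0) = P0.
    return K / (1.0 + ((K - P0) / P0) * math.exp(-r * t))

r, K, P0 = 0.5, 100.0, 5.0

# P(0) recovers the initial population.
assert abs(logistic_growth(0.0, P0, r, K) - P0) < 1e-9

# The derivative, estimated by a central difference, matches r P (1 - P/K).
t, h = 3.0, 1e-6
P = logistic_growth(t, P0, r, K)
dP = (logistic_growth(t + h, P0, r, K) - logistic_growth(t - h, P0, r, K)) / (2 * h)
assert abs(dP - r * P * (1 - P / K)) < 1e-4

# For large t the population approaches the carrying capacity K.
assert abs(logistic_growth(100.0, P0, r, K) - K) < 1e-6
```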
In ecology,speciesare sometimes referred to asr{\displaystyle r}-strategist orK{\displaystyle K}-strategistdepending upon theselectiveprocesses that have shaped theirlife historystrategies.

Choosing the variable dimensions so thatn{\displaystyle n}measures the population in units of carrying capacity, andτ{\displaystyle \tau }measures time in units of1/r{\displaystyle 1/r}, gives the dimensionless differential equation
dndτ=n(1−n).{\displaystyle {\frac {dn}{d\tau }}=n(1-n).}
Theantiderivativeof the ecological form of the logistic function can be computed by thesubstitutionu=K+P0(ert−1){\displaystyle u=K+P_{0}\left(e^{rt}-1\right)}, sincedu=rP0ertdt{\displaystyle du=rP_{0}e^{rt}dt}:
∫KP0ertK+P0(ert−1)dt=∫Kr1udu=Krlnu+C=Krln(K+P0(ert−1))+C{\displaystyle \int {\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}\,dt=\int {\frac {K}{r}}{\frac {1}{u}}\,du={\frac {K}{r}}\ln u+C={\frac {K}{r}}\ln \left(K+P_{0}(e^{rt}-1)\right)+C}
Since the environmental conditions influence the carrying capacity, as a consequence it can be time-varying, withK(t)>0{\displaystyle K(t)>0}, leading to the following mathematical model:
dPdt=rP⋅(1−PK(t)).{\displaystyle {\frac {dP}{dt}}=rP\cdot \left(1-{\frac {P}{K(t)}}\right).}
A particularly important case is that of carrying capacity that varies periodically with periodT{\displaystyle T}:
K(t+T)=K(t).{\displaystyle K(t+T)=K(t).}
It can be shown[16]that in such a case, independently from the initial valueP(0)>0{\displaystyle P(0)>0},P(t){\displaystyle P(t)}will tend to a unique periodic solutionP∗(t){\displaystyle P_{*}(t)}, whose period isT{\displaystyle T}.
A typical value ofT{\displaystyle T}is one year: In such caseK(t){\displaystyle K(t)}may reflect periodical variations of weather conditions.
Another interesting generalization is to consider that the carrying capacityK(t){\displaystyle K(t)}is a function of the population at an earlier time, capturing a delay in the way population modifies its environment. This leads to a logistic delay equation,[17]which has a very rich behavior, with bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death.
Logistic functions are used in several roles in statistics. For example, they are thecumulative distribution functionof thelogistic family of distributions, and, in slightly simplified form, they are used to model the probability that a chess player beats their opponent in theElo rating system. More specific examples now follow.
Logistic functions are used inlogistic regressionto model how the probabilityp{\displaystyle p}of an event may be affected by one or moreexplanatory variables: an example would be to have the model
p=f(a+bx),{\displaystyle p=f(a+bx),}
wherex{\displaystyle x}is the explanatory variable,a{\displaystyle a}andb{\displaystyle b}are model parameters to be fitted, andf{\displaystyle f}is the standard logistic function.
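The probability model above can be sketched in a few lines. The parameter values here are illustrative only (in practice a and b would be fitted to data):

```python
import math

def standard_logistic(t):
    # The standard logistic function f(t) = 1 / (1 + e^{-t}).
    return 1.0 / (1.0 + math.exp(-t))

def event_probability(x, a, b):
    # Probability of the event given explanatory variable x,
    # with model parameters a (intercept) and b (slope): p = f(a + b x).
    return standard_logistic(a + b * x)

# Illustrative (not fitted) parameter values:
a, b = -1.0, 2.0
# At x = 0.5 the linear predictor a + b*x is 0, so the probability is exactly 1/2.
assert abs(event_probability(0.5, a, b) - 0.5) < 1e-12
# The probability is monotone increasing in x when b > 0.
assert event_probability(1.0, a, b) > event_probability(0.0, a, b)
```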
Logistic regression and otherlog-linear modelsare also commonly used inmachine learning. A generalization of the logistic function to multiple inputs is thesoftmax activation function, used inmultinomial logistic regression.
Another application of the logistic function is in theRasch model, used initem response theory. In particular, the Rasch model forms a basis formaximum likelihoodestimation of the locations of objects or persons on acontinuum, based on collections ofcategorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.
Logistic functions are often used inartificial neural networksto introducenonlinearityin the model or to clamp signals to within a specifiedinterval. A popularneural net elementcomputes alinear combinationof its input signals, and applies a bounded logistic function as theactivation functionto the result; this model can be seen as a "smoothed" variant of the classicalthreshold neuron.
A common choice for the activation or "squashing" functions, used to clip large magnitudes to keep the response of the neural network bounded,[18]is
g(h)=11+e−2βh,{\displaystyle g(h)={\frac {1}{1+e^{-2\beta h}}},}
which is a logistic function.
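This squashing function is a rescaled hyperbolic tangent, g(h) = (1 + tanh(βh))/2, which makes the connection to antisymmetric activations explicit. A quick numerical check (the β value is illustrative):

```python
import math

def g(h, beta):
    # Squashing function g(h) = 1 / (1 + e^{-2 beta h}) from the text.
    return 1.0 / (1.0 + math.exp(-2.0 * beta * h))

# g is a shifted and rescaled hyperbolic tangent: g(h) = (1 + tanh(beta h)) / 2.
beta = 0.7
for h in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert abs(g(h, beta) - (1.0 + math.tanh(beta * h)) / 2.0) < 1e-12
```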
These relationships result in simplified implementations ofartificial neural networkswithartificial neurons. Practitioners note that sigmoidal functions which areantisymmetricabout the origin (e.g. thehyperbolic tangent) lead to faster convergence when training networks withbackpropagation.[19]
The logistic function is itself the derivative of another proposed activation function, thesoftplus.
Another application of logistic curve is in medicine, where the logistic differential equation can be used to model the growth oftumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also theGeneralized logistic curve, allowing for more parameters). Denoting withX(t){\displaystyle X(t)}the size of the tumor at timet{\displaystyle t}, its dynamics are governed by
X′=r(1−XK)X,{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X,}
which is of the type
X′=F(X)X,F′(X)≤0,{\displaystyle X'=F(X)X,\quad F'(X)\leq 0,}
whereF(X){\displaystyle F(X)}is the proliferation rate of the tumor.
If a course ofchemotherapyis started with a log-kill effect, the equation may be revised to be
X′=r(1−XK)X−c(t)X,{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X-c(t)X,}
wherec(t){\displaystyle c(t)}is the therapy-induced death rate. In the idealized case of very long therapy,c(t){\displaystyle c(t)}can be modeled as a periodic function (of periodT{\displaystyle T}) or (in case of continuous infusion therapy) as a constant function, and one has that
1T∫0Tc(t)dt>r⇒limt→+∞X(t)=0,{\displaystyle {\frac {1}{T}}\int _{0}^{T}c(t)\,dt>r\quad \Rightarrow \quad \lim _{t\to +\infty }X(t)=0,}
i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy. For example, it does not take into account the evolution of clonal resistance, or the side-effects of the therapy on the patient. These factors can result in the eventual failure of chemotherapy, or its discontinuation.[citation needed]
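The eradication threshold can be illustrated by integrating the therapy model numerically. A hedged sketch using forward Euler and a constant infusion rate c (all parameter values are illustrative, not from the source):

```python
def simulate(r, K, c, X0, dt=0.001, steps=200000):
    # Forward-Euler integration of X' = r (1 - X/K) X - c X
    # (constant-infusion therapy, c(t) = c). Illustrative parameters only.
    X = X0
    for _ in range(steps):
        X += dt * (r * (1.0 - X / K) * X - c * X)
    return X

r, K, X0 = 0.3, 50.0, 10.0
# Death rate above the proliferation rate: the tumor is eradicated.
assert simulate(r, K, c=0.5, X0=X0) < 1e-6
# Death rate below r: the tumor settles at the reduced equilibrium K (1 - c/r).
X_end = simulate(r, K, c=0.1, X0=X0)
assert abs(X_end - K * (1.0 - 0.1 / r)) < 0.1
```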
A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful. The SARS-CoV-2 virus that causesCOVID-19exhibited exponential growth early in the course of infection in several countries in early 2020.[20]Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold forherd immunity) or reduction in the accessibility of potential hosts through physical distancing measures, may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted byPierre-François Verhulst, as noted above) and then reaching a maximal limit.[21]
A logistic function, or related functions (e.g. theGompertz function) are usually used in a descriptive or phenomenological manner because they fit well not only to the early exponential rise, but to the eventual levelling off of the pandemic as the population develops a herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution.[22][23][24]
Ageneralized logistic function, also called the Richards growth curve, has been applied to model the early phase of theCOVID-19outbreak.[25]The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to asinfection trajectory. There are different parameterizations of thegeneralized logistic functionin the literature. One frequently used form is
f(t;θ1,θ2,θ3,ξ)=θ1[1+ξexp(−θ2⋅(t−θ3))]1/ξ{\displaystyle f(t;\theta _{1},\theta _{2},\theta _{3},\xi )={\frac {\theta _{1}}{{\left[1+\xi \exp \left(-\theta _{2}\cdot (t-\theta _{3})\right)\right]}^{1/\xi }}}}
whereθ1,θ2,θ3{\displaystyle \theta _{1},\theta _{2},\theta _{3}}are real numbers, andξ{\displaystyle \xi }is a positive real number. The flexibility of the curvef{\displaystyle f}is due to the parameterξ{\displaystyle \xi }: (i) ifξ=1{\displaystyle \xi =1}then the curve reduces to the logistic function, and (ii) asξ{\displaystyle \xi }approaches zero, the curve converges to theGompertz function. In epidemiological modeling,θ1{\displaystyle \theta _{1}},θ2{\displaystyle \theta _{2}}, andθ3{\displaystyle \theta _{3}}represent the final epidemic size, infection rate, and lag phase, respectively. As an example, setting(θ1,θ2,θ3){\displaystyle (\theta _{1},\theta _{2},\theta _{3})}to(10000,0.2,40){\displaystyle (10000,0.2,40)}produces a representative infection trajectory.
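The two limiting cases of the parameter ξ can be verified numerically, using the example parameter values (10000, 0.2, 40) mentioned in the text:

```python
import math

def richards(t, th1, th2, th3, xi):
    # Generalized logistic (Richards) curve in the parameterization above:
    # f(t) = th1 / (1 + xi * exp(-th2 (t - th3)))^(1/xi).
    return th1 / (1.0 + xi * math.exp(-th2 * (t - th3))) ** (1.0 / xi)

th1, th2, th3 = 10000.0, 0.2, 40.0
t = 50.0

# (i) xi = 1 recovers the ordinary logistic function.
logistic_val = th1 / (1.0 + math.exp(-th2 * (t - th3)))
assert abs(richards(t, th1, th2, th3, 1.0) - logistic_val) < 1e-9

# (ii) small xi approaches the Gompertz curve th1 * exp(-exp(-th2 (t - th3))).
gompertz_val = th1 * math.exp(-math.exp(-th2 * (t - th3)))
assert abs(richards(t, th1, th2, th3, 1e-6) - gompertz_val) < 0.1
```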
One of the benefits of using a growth function such as thegeneralized logistic functionin epidemiological modeling is its relatively easy application to themultilevel modelframework, where information from different geographic regions can be pooled together.
The concentration of reactants and products inautocatalytic reactionsfollow the logistic function.
The degradation ofplatinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalystsin fuel cell cathodes follows the logistic decay function,[26]suggesting an autocatalytic degradation mechanism.
The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according toFermi–Dirac statistics.
The logistic function also finds applications in optics, particularly in modelling phenomena such asmirages. Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion and balancing with gravity, logistic curve behaviours can emerge.[27][28]
A mirage, resulting from a temperature gradient that modifies the refractive index related to the density/concentration of the material over distance, can be modelled using a fluid with a refractive index gradient due to the concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower concentration region, while seeking equilibrium with gravity, thus yielding a logistic function curve.[27]
SeeDiffusion bonding.
In linguistics, the logistic function can be used to modellanguage change:[29]an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted.
The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions:positiveandnegativegrowth curves. For example, the crop yield mayincreasewith increasing value of the growth factor up to a certain level (positive function), or it maydecreasewith increasing growth factor values (negative function owing to a negative growth factor), which situation requires aninvertedS-curve.
The logistic function can be used to illustrate the progress of thediffusion of an innovationthrough its life cycle.
InThe Laws of Imitation(1890),Gabriel Tardedescribes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first one corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second one corresponds to the properly exponential take-off of the idea, withf(x)=2x{\displaystyle f(x)=2^{x}}; finally, the third stage is logarithmic, withf(x)=log(x){\displaystyle f(x)=\log(x)}, and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously, new opposing ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote.
In asovereign state, the subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economyscarcityconstraints, especially the resources the banks can lend (due to theirequityorBasellimits). These restrictions, which represent a saturation level, along with an exponential rush in aneconomic competitionfor money, create apublic financediffusion of credit pleas and the aggregate national response is asigmoid curve.[32]
Historically, when new products are introduced there is an intense amount ofresearch and developmentwhich leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are: railroads, incandescent light bulbs,electrification, cars and air travel. Eventually, dramatic improvement and cost reduction opportunities are exhausted, the product or process are in widespread use with few remaining potential new customers, and markets become saturated.
Logistic analysis was used in papers by several researchers at the International Institute of Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions and the role of work in the economy as well as with the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989).[33]Cesare Marchetti published onlong economic cyclesand on diffusion of innovations.[34][35]Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic shaped curves.[36]
Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era asirruption, the ascent asfrenzy, the rapid build out assynergyand the completion asmaturity.[37]
Logistic growth regressions carry significant uncertainty when data is available only up to around the inflection point of the growth process. Under these conditions, estimating the height at which the inflection point will occur may have uncertainties comparable to the carrying capacity (K) of the system.
A method to mitigate this uncertainty involves using the carrying capacity from a surrogate logistic growth process as a reference point.[38]By incorporating this constraint, even if K is only an estimate within a factor of two, the regression is stabilized, which improves accuracy and reduces uncertainty in the prediction parameters. This approach can be applied in fields such as economics and biology, where analogous surrogate systems or populations are available to inform the analysis.
Link[39]created an extension ofWald's theoryof sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link[40]derives the probability of first equaling or exceeding the positive boundary as1/(1+e−θA){\displaystyle 1/(1+e^{-\theta A})}, the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link[41]provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries.
|
https://en.wikipedia.org/wiki/Logistic_function
|
Stability, also known asalgorithmic stability, is a notion incomputational learning theoryof how amachine learning algorithmoutput is changed with small perturbations to its inputs. A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained torecognize handwritten lettersof the alphabet, using 1000 examples of handwritten letters and their labels ("A" to "Z") as a training set. One way to modify this training set is to leave out an example, so that only 999 examples of handwritten letters and their labels are available. A stable learning algorithm would produce a similarclassifierwith both the 1000-element and 999-element training sets.
Stability can be studied for many types of learning problems, fromlanguage learningtoinverse problemsin physics and engineering, as it is a property of the learning process rather than the type of information being learned. The study of stability gained importance incomputational learning theoryin the 2000s when it was shown to have a connection withgeneralization.[1]It was shown that for large classes of learning algorithms, notablyempirical risk minimizationalgorithms, certain types of stability ensure good generalization.
A central goal in designing amachine learning systemis to guarantee that the learning algorithm willgeneralize, or perform accurately on new examples after being trained on a finite number of them. In the 1990s, milestones were reached in obtaining generalization bounds forsupervised learning algorithms. The technique historically used to prove generalization was to show that an algorithm wasconsistent, using theuniform convergenceproperties of empirical quantities to their means. This technique was used to obtain generalization bounds for the large class ofempirical risk minimization(ERM) algorithms. An ERM algorithm is one that selects a solution from a hypothesis spaceH{\displaystyle H}in such a way to minimize the empirical error on a training setS{\displaystyle S}.
A general result, proved byVladimir Vapnikfor ERM binary classification algorithms, is that for any target function and input distribution, any hypothesis spaceH{\displaystyle H}withVC-dimensiond{\displaystyle d}, andn{\displaystyle n}training examples, the algorithm is consistent and will produce a training error that is at mostO(dn){\displaystyle O\left({\sqrt {\frac {d}{n}}}\right)}(plus logarithmic factors) from the true error. The result was later extended to almost-ERM algorithms with function classes that do not have unique minimizers.
Vapnik's work, using what became known asVC theory, established a relationship between generalization of a learning algorithm and properties of the hypothesis spaceH{\displaystyle H}of functions being learned. However, these results could not be applied to algorithms with hypothesis spaces of unbounded VC-dimension. Put another way, these results could not be applied when the information being learned had a complexity that was too large to measure. Some of the simplest machine learning algorithms—for instance, for regression—have hypothesis spaces with unbounded VC-dimension. Another example is language learning algorithms that can produce sentences of arbitrary length.
Stability analysis was developed in the 2000s forcomputational learning theoryand is an alternative method for obtaining generalization bounds. The stability of an algorithm is a property of the learning process, rather than a direct property of the hypothesis spaceH{\displaystyle H}, and it can be assessed in algorithms that have hypothesis spaces with unbounded or undefined VC-dimension, such as nearest neighbor. A stable learning algorithm is one for which the learned function does not change much when the training set is slightly modified, for instance by leaving out an example. A measure of leave-one-out error is used in a cross-validation leave-one-out (CVloo) algorithm to evaluate a learning algorithm's stability with respect to the loss function. As such, stability analysis is the application ofsensitivity analysisto machine learning.
We define several terms related to learning algorithms training sets, so that we can then define stability in multiple ways and present theorems from the field.
A machine learning algorithm, also known as a learning mapL{\displaystyle L}, maps a training data set, which is a set of labeled examples(x,y){\displaystyle (x,y)}, onto a functionf{\displaystyle f}fromX{\displaystyle X}toY{\displaystyle Y}, whereX{\displaystyle X}andY{\displaystyle Y}are in the same space of the training examples. The functionsf{\displaystyle f}are selected from a hypothesis space of functions calledH{\displaystyle H}.
The training set from which an algorithm learns is defined as
S={z1=(x1,y1),..,zm=(xm,ym)}{\displaystyle S=\{z_{1}=(x_{1},\ y_{1})\ ,..,\ z_{m}=(x_{m},\ y_{m})\}}
and is of sizem{\displaystyle m}, with each examplezi{\displaystyle z_{i}}inZ=X×Y{\displaystyle Z=X\times Y}drawn i.i.d. from an unknown distributionD{\displaystyle D}.
Thus, the learning mapL{\displaystyle L}is defined as a mapping fromZm{\displaystyle Z_{m}}intoH{\displaystyle H}, mapping a training setS{\displaystyle S}onto a functionfS{\displaystyle f_{S}}fromX{\displaystyle X}toY{\displaystyle Y}. Here, we consider only deterministic algorithms whereL{\displaystyle L}is symmetric with respect toS{\displaystyle S}, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable.
The lossV{\displaystyle V}of a hypothesisf{\displaystyle f}with respect to an examplez=(x,y){\displaystyle z=(x,y)}is then defined asV(f,z)=V(f(x),y){\displaystyle V(f,z)=V(f(x),y)}.
The empirical error off{\displaystyle f}isIS[f]=1m∑i=1mV(f,zi){\displaystyle I_{S}[f]={\frac {1}{m}}\sum _{i=1}^{m}V(f,z_{i})}.
The true error off{\displaystyle f}isI[f]=EzV(f,z){\displaystyle I[f]=\mathbb {E} _{z}V(f,z)}
Given a training set S of size m, we will build, for alli=1,...,m{\displaystyle i=1,...,m}, two kinds of modified training sets: one obtained by removing thei{\displaystyle i}-th element, and one obtained by replacing thei{\displaystyle i}-th element with a new examplezi′{\displaystyle z_{i}'}:
S|i={z1,...,zi−1,zi+1,...,zm}{\displaystyle S^{|i}=\{z_{1},...,\ z_{i-1},\ z_{i+1},...,\ z_{m}\}}
Si={z1,...,zi−1,zi′,zi+1,...,zm}{\displaystyle S^{i}=\{z_{1},...,\ z_{i-1},\ z_{i}',\ z_{i+1},...,\ z_{m}\}}
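Building the two modified training sets is mechanical. A minimal sketch (0-indexed here, whereas the text indexes from 1):

```python
def leave_one_out(S, i):
    # S^{|i}: the training set with the i-th example removed.
    return S[:i] + S[i + 1:]

def replace_one(S, i, z_new):
    # S^{i}: the training set with the i-th example replaced by z_new.
    return S[:i] + [z_new] + S[i + 1:]

S = [(0.0, 0), (1.0, 1), (2.0, 1), (3.0, 0)]
assert leave_one_out(S, 1) == [(0.0, 0), (2.0, 1), (3.0, 0)]
assert replace_one(S, 1, (9.0, 1)) == [(0.0, 0), (9.0, 1), (2.0, 1), (3.0, 0)]
assert len(leave_one_out(S, 1)) == len(S) - 1
```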
An algorithmL{\displaystyle L}has hypothesis stability β with respect to the loss function V if the following holds:
∀i∈{1,...,m},ES,z[|V(fS,z)−V(fS|i,z)|]≤β.{\displaystyle \forall i\in \{1,...,m\},\mathbb {E} _{S,z}[|V(f_{S},z)-V(f_{S^{|i}},z)|]\leq \beta .}
An algorithmL{\displaystyle L}has point-wise hypothesis stability β with respect to the loss function V if the following holds:
∀i∈{1,...,m},ES[|V(fS,zi)−V(fS|i,zi)|]≤β.{\displaystyle \forall i\in \ \{1,...,m\},\mathbb {E} _{S}[|V(f_{S},z_{i})-V(f_{S^{|i}},z_{i})|]\leq \beta .}
An algorithmL{\displaystyle L}has error stability β with respect to the loss function V if the following holds:
∀S∈Zm,∀i∈{1,...,m},|Ez[V(fS,z)]−Ez[V(fS|i,z)]|≤β{\displaystyle \forall S\in Z^{m},\forall i\in \{1,...,m\},|\mathbb {E} _{z}[V(f_{S},z)]-\mathbb {E} _{z}[V(f_{S^{|i}},z)]|\leq \beta }
An algorithmL{\displaystyle L}has uniform stability β with respect to the loss function V if the following holds:
∀S∈Zm,∀i∈{1,...,m},supz∈Z|V(fS,z)−V(fS|i,z)|≤β{\displaystyle \forall S\in Z^{m},\forall i\in \{1,...,m\},\sup _{z\in Z}|V(f_{S},z)-V(f_{S^{|i}},z)|\leq \beta }
A probabilistic version of uniform stability β is:
∀S∈Zm,∀i∈{1,...,m},PS{supz∈Z|V(fS,z)−V(fS|i,z)|≤β}≥1−δ{\displaystyle \forall S\in Z^{m},\forall i\in \{1,...,m\},\mathbb {P} _{S}\{\sup _{z\in Z}|V(f_{S},z)-V(f_{S^{|i}},z)|\leq \beta \}\geq 1-\delta }
An algorithm is said to bestable, when the value ofβ{\displaystyle \beta }decreases asO(1m){\displaystyle O({\frac {1}{m}})}.
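As a toy illustration of this O(1/m) behaviour, consider the algorithm that ignores the inputs and predicts the mean of the training labels, with absolute loss and labels in [0, 1]; removing one example changes the prediction, and hence the loss at any test point, by at most 1/m. This sketch and its data are illustrative, not from the source:

```python
def mean_predictor(S):
    # A toy learning algorithm: ignore x and predict the mean label.
    return sum(y for _, y in S) / len(S)

def loss(pred, z):
    # Absolute loss V(f, z) = |f(x) - y| for a constant predictor.
    _, y = z
    return abs(pred - y)

# With labels in [0, 1], the leave-one-out change in the prediction is at
# most 1/m, so sup_z |V(f_S, z) - V(f_{S^{|i}}, z)| <= 1/m as well:
# an empirical illustration of uniform stability with beta = O(1/m).
S = [(float(i), (i * 37 % 100) / 100.0) for i in range(50)]
m = len(S)
f_S = mean_predictor(S)
for i in range(m):
    f_loo = mean_predictor(S[:i] + S[i + 1:])
    assert abs(f_S - f_loo) <= 1.0 / m + 1e-12
    z = (0.0, 0.5)  # any test example; loss change is bounded by prediction change
    assert abs(loss(f_S, z) - loss(f_loo, z)) <= 1.0 / m + 1e-12
```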
An algorithmL{\displaystyle L}has CVloo stability β with respect to the loss function V if the following holds:
∀i∈{1,...,m},PS{|V(fS,zi)−V(fS|i,zi)|≤βCV}≥1−δCV{\displaystyle \forall i\in \{1,...,m\},\mathbb {P} _{S}\{|V(f_{S},z_{i})-V(f_{S^{|i}},z_{i})|\leq \beta _{CV}\}\geq 1-\delta _{CV}}
The definition of (CVloo) Stability isequivalentto Pointwise-hypothesis stability seen earlier.
An algorithmL{\displaystyle L}hasElooerr{\displaystyle Eloo_{err}}stability if for eachm{\displaystyle m}there exists aβELm{\displaystyle \beta _{EL}^{m}}and aδELm{\displaystyle \delta _{EL}^{m}}such that:
∀i∈{1,...,m},PS{|I[fS]−1m∑i=1mV(fS|i,zi)|≤βELm}≥1−δELm{\displaystyle \forall i\in \{1,...,m\},\mathbb {P} _{S}\{|I[f_{S}]-{\frac {1}{m}}\sum _{i=1}^{m}V(f_{S^{|i}},z_{i})|\leq \beta _{EL}^{m}\}\geq 1-\delta _{EL}^{m}}, withβELm{\displaystyle \beta _{EL}^{m}}andδELm{\displaystyle \delta _{EL}^{m}}going to zero asm→∞{\displaystyle m\rightarrow \infty }
From Bousquet and Elisseeff (02):
For symmetric learning algorithms with bounded loss, if the algorithm has Uniform Stability with the probabilistic definition above, then the algorithm generalizes.
Uniform Stability is a strong condition which is not met by all algorithms but is, surprisingly, met by the large and important class of Regularization algorithms.
The generalization bound is given in the article.
From Mukherjee et al. (06):
For ERM algorithms with bounded loss functions, CVloo stability is both necessary and sufficient for consistency and generalization. This is an important result for the foundations of learning theory, because it shows that two previously unrelated properties of an algorithm, stability and consistency, are equivalent for ERM (and certain loss functions).
The generalization bound is given in the article.
This is a list of algorithms that have been shown to be stable, and the article where the associated generalization bounds are provided.
|
https://en.wikipedia.org/wiki/Stability_(learning_theory)
|
Least absolute deviations(LAD), also known asleast absolute errors(LAE),least absolute residuals(LAR), orleast absolute values(LAV), is a statisticaloptimality criterionand a statisticaloptimizationtechnique based onminimizingthesum of absolute deviations(alsosum of absolute residualsorsum of absolute errors) or theL1normof such values. It is analogous to theleast squarestechnique, except that it is based onabsolute valuesinstead ofsquared values. It attempts to find afunctionwhich closely approximates a set of data by minimizingresidualsbetween points generated by the function and corresponding data points. The LAD estimate also arises as themaximum likelihoodestimate if the errors have aLaplace distribution. It was introduced in 1757 byRoger Joseph Boscovich.[1]
Suppose that thedata setconsists of the points (xi,yi) withi= 1, 2, ...,n. We want to find a functionfsuch thatf(xi)≈yi.{\displaystyle f(x_{i})\approx y_{i}.}
To attain this goal, we suppose that the functionfis of a particular form containing some parameters that need to be determined. For instance, the simplest form would be linear:f(x) =bx+c, wherebandcare parameters whose values are not known but which we would like to estimate. Less simply, suppose thatf(x) isquadratic, meaning thatf(x) =ax2+bx+c, wherea,bandcare not yet known. (More generally, there could be not just one explanatorx, but rather multiple explanators, all appearing as arguments of the functionf.)
We now seek estimated values of the unknown parameters that minimize the sum of the absolute values of the residuals:S=∑i=1n|yi−f(xi)|.{\displaystyle S=\sum _{i=1}^{n}|y_{i}-f(x_{i})|.}
Though the idea of least absolute deviations regression is just as straightforward as that of least squares regression, the least absolute deviations line is not as simple to compute efficiently. Unlike least squares regression, least absolute deviations regression does not have an analytical solving method. Therefore, an iterative approach is required. The following is an enumeration of some least absolute deviations solving methods.
Simplex-based methods are the "preferred" way to solve the least absolute deviations problem.[7]A simplex method is a method for solving a problem in linear programming. The most popular algorithm is the Barrodale–Roberts modified simplex algorithm. The algorithms for IRLS, Wesolowsky's method, and Li's method can be found in Appendix A of[7]among other methods. Checking all combinations of lines traversing any two (x,y) data points is another method of finding the least absolute deviations line. Since it is known that at least one least absolute deviations line traverses at least two data points, this method will find a line by comparing the SAE (sum of absolute errors over the data points) of each candidate line, and choosing the line with the smallest SAE. In addition, if multiple lines have the same, smallest SAE, then the lines outline the region of multiple solutions. Though simple, this final method is inefficient for large sets of data.
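The check-all-pairs method described above can be sketched in a few lines; it is O(n³) and intended only as an illustration, not a practical solver:

```python
def lad_line_by_pairs(points):
    # Brute-force least absolute deviations fit, using the fact that at least
    # one optimal line passes through two of the data points: try every pair,
    # compute its sum of absolute errors (SAE), and keep the best line.
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:
                continue  # skip vertical candidate lines
            b = (y2 - y1) / (x2 - x1)      # slope
            c = y1 - b * x1                # intercept
            sae = sum(abs(y - (b * x + c)) for x, y in points)
            if best is None or sae < best[0]:
                best = (sae, b, c)
    return best  # (SAE, slope, intercept)

# Four points on y = 2x + 1 plus one outlier: the LAD line ignores the outlier,
# illustrating the robustness property discussed later.
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 100)]
sae, b, c = lad_line_by_pairs(pts)
assert abs(b - 2.0) < 1e-9 and abs(c - 1.0) < 1e-9
```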
The problem can be solved using any linear programming technique on the following problem specification. We wish to minimize∑i=1n|yi−a0−a1xi1−a2xi2−⋯−akxik|{\displaystyle \sum _{i=1}^{n}|y_{i}-a_{0}-a_{1}x_{i1}-a_{2}x_{i2}-\cdots -a_{k}x_{ik}|}
with respect to the choice of the values of the parametersa0,…,ak{\displaystyle a_{0},\ldots ,a_{k}}, whereyiis the value of theith observation of the dependent variable, andxijis the value of theith observation of thejth independent variable (j= 1,...,k). We rewrite this problem in terms of artificial variablesuias: minimize∑i=1nui{\displaystyle \sum _{i=1}^{n}u_{i}}subject toui≥yi−a0−a1xi1−⋯−akxik{\displaystyle u_{i}\geq y_{i}-a_{0}-a_{1}x_{i1}-\cdots -a_{k}x_{ik}}andui≥−(yi−a0−a1xi1−⋯−akxik){\displaystyle u_{i}\geq -(y_{i}-a_{0}-a_{1}x_{i1}-\cdots -a_{k}x_{ik})}for eachi{\displaystyle i}.
These constraints have the effect of forcing eachui{\displaystyle u_{i}}to equal|yi−a0−a1xi1−a2xi2−⋯−akxik|{\displaystyle |y_{i}-a_{0}-a_{1}x_{i1}-a_{2}x_{i2}-\cdots -a_{k}x_{ik}|}upon being minimized, so the objective function is equivalent to the original objective function. Since this version of the problem statement does not contain the absolute value operator, it is in a format that can be solved with any linear programming package.
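As a sketch of how the artificial-variable formulation maps onto a standard LP solver's inputs (the variable ordering [a0, ..., ak, u1, ..., un] and the function name are my own choices, not from the source):

```python
# Build (c, A_ub, b_ub) for "minimize c.x subject to A_ub.x <= b_ub",
# the format accepted by most LP packages.
def lad_lp_matrices(X, y):
    """LAD regression as an LP with k regressors.

    X is an n-by-k list of observation rows; y has length n.
    The LP minimizes sum(u_i) subject to
        u_i >=  y_i - a_0 - sum_j a_j x_ij
        u_i >= -(y_i - a_0 - sum_j a_j x_ij),
    written here in <= form.
    """
    n, k = len(X), len(X[0])
    c = [0.0] * (k + 1) + [1.0] * n        # objective: sum of the u_i
    A_ub, b_ub = [], []
    for i in range(n):
        u = [0.0] * n
        u[i] = -1.0
        # -(a_0 + sum_j a_j x_ij) - u_i <= -y_i
        A_ub.append([-1.0] + [-xv for xv in X[i]] + u)
        b_ub.append(-y[i])
        #  (a_0 + sum_j a_j x_ij) - u_i <=  y_i
        A_ub.append([1.0] + list(X[i]) + u)
        b_ub.append(y[i])
    return c, A_ub, b_ub

c, A_ub, b_ub = lad_lp_matrices([[1.0], [2.0], [3.0]], [1.0, 2.0, 2.0])
# these arrays could be handed to any LP routine, e.g.
# scipy.optimize.linprog(c, A_ub=A_ub, b_ub=b_ub,
#                        bounds=[(None, None)] * 2 + [(0, None)] * 3)
```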
There exist other unique properties of the least absolute deviations line. In the case of a set of (x,y) data, the least absolute deviations line will always pass through at least two of the data points, unless there are multiple solutions. If multiple solutions exist, then the region of valid least absolute deviations solutions will be bounded by at least two lines, each of which passes through at least two data points. More generally, if there arekregressors(including the constant), then at least one optimal regression surface will pass throughkof the data points.[8]: p.936
This "latching" of the line to the data points can help to understand the "instability" property: if the line always latches to at least two points, then the line will jump between different sets of points as the data points are altered. The "latching" also helps to understand the "robustness" property: if there exists an outlier, and a least absolute deviations line must latch onto two data points, the outlier will most likely not be one of those two points because that will not minimize the sum of absolute deviations in most cases.
One known case in which multiple solutions exist is a set of points symmetric about a horizontal line, as shown in Figure A below.
To understand why there are multiple solutions in the case shown in Figure A, consider the pink line in the green region. Its sum of absolute errors is some value S. If one were to tilt the line upward slightly, while still keeping it within the green region, the sum of errors would still be S. It would not change because the distance from each point to the line grows on one side of the line, while the distance to each point on the opposite side of the line diminishes by exactly the same amount. Thus the sum of absolute errors remains the same. Also, since one can tilt the line in infinitely small increments, this also shows that if there is more than one solution, there are infinitely many solutions.
The following is a table contrasting some properties of the method of least absolute deviations with those of the method of least squares (for non-singular problems).[9][10]
*Provided that the number of data points is greater than or equal to the number of features.
The method of least absolute deviations finds applications in many areas, due to its robustness compared to the least squares method. Least absolute deviations is robust in that it is resistant to outliers in the data. LAD gives equal emphasis to all observations, in contrast to ordinary least squares (OLS) which, by squaring the residuals, gives more weight to large residuals, that is, outliers in which predicted values are far from actual observations. This may be helpful in studies where outliers do not need to be given greater weight than other observations. If it is important to give greater weight to outliers, the method of least squares is a better choice.
If in the sum of the absolute values of the residuals one generalises the absolute value function to a tilted absolute value function, which on the left half-line has slopeτ−1{\displaystyle \tau -1}and on the right half-line has slopeτ{\displaystyle \tau }, where0<τ<1{\displaystyle 0<\tau <1}, one obtainsquantile regression. The case ofτ=1/2{\displaystyle \tau =1/2}gives the standard regression by least absolute deviations and is also known asmedian regression.
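The tilted absolute value described above is commonly called the pinball loss; a minimal sketch (the function name is my own):

```python
def pinball(residual, tau):
    """Tilted absolute value: slope tau for positive residuals, tau - 1 for negative."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

# tau = 1/2 gives half the ordinary absolute value, so minimizing it is
# equivalent to least absolute deviations (median regression)
```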
The least absolute deviation problem may be extended to include multiple explanators, constraints andregularization, e.g., a linear model with linear constraints:[11]
whereβ{\displaystyle \mathbf {\beta } }is a column vector of coefficients to be estimated,bis an intercept to be estimated,xiis a column vector of theithobservations on the various explanators,yiis theithobservation on the dependent variable, andkis a known constant.
RegularizationwithLASSO(least absolute shrinkage and selection operator) may also be combined with LAD.[12]
https://en.wikipedia.org/wiki/Least_absolute_deviations
Taxicab geometryorManhattan geometryisgeometrywhere the familiarEuclidean distanceis ignored, and thedistancebetween twopointsis instead defined to be the sum of theabsolute differencesof their respectiveCartesian coordinates, a distance function (ormetric) called thetaxicab distance,Manhattan distance, orcity block distance. The name refers to the island ofManhattan, or generically any planned city with arectangular gridof streets, in which a taxicab can only travel along grid directions. In taxicab geometry, the distance between any two points equals the length of their shortest grid path. This different definition of distance also leads to a different definition of the length of a curve, for which aline segmentbetween any two points has the same length as a grid path between those points rather than its Euclidean length.
The taxicab distance is also sometimes known asrectilinear distanceorL1distance (seeLpspace).[1]This geometry has been used inregression analysissince the 18th century, and is often referred to asLASSO. Its geometric interpretation dates tonon-Euclidean geometryof the 19th century and is due toHermann Minkowski.
In the two-dimensionalreal coordinate spaceR2{\displaystyle \mathbb {R} ^{2}}, the taxicab distance between two points(x1,y1){\displaystyle (x_{1},y_{1})}and(x2,y2){\displaystyle (x_{2},y_{2})}is|x1−x2|+|y1−y2|{\displaystyle \left|x_{1}-x_{2}\right|+\left|y_{1}-y_{2}\right|}. That is, it is the sum of theabsolute valuesof the differences in both coordinates.
The taxicab distance,dT{\displaystyle d_{\text{T}}}, between two pointsp=(p1,p2,…,pn)andq=(q1,q2,…,qn){\displaystyle \mathbf {p} =(p_{1},p_{2},\dots ,p_{n}){\text{ and }}\mathbf {q} =(q_{1},q_{2},\dots ,q_{n})}in ann-dimensionalreal coordinate spacewith fixedCartesian coordinate system, is the sum of the lengths of the projections of theline segmentbetween the points onto thecoordinate axes. More formally,dT(p,q)=‖p−q‖T=∑i=1n|pi−qi|{\displaystyle d_{\text{T}}(\mathbf {p} ,\mathbf {q} )=\left\|\mathbf {p} -\mathbf {q} \right\|_{\text{T}}=\sum _{i=1}^{n}\left|p_{i}-q_{i}\right|}For example, inR2{\displaystyle \mathbb {R} ^{2}}, the taxicab distance betweenp=(p1,p2){\displaystyle \mathbf {p} =(p_{1},p_{2})}andq=(q1,q2){\displaystyle \mathbf {q} =(q_{1},q_{2})}is|p1−q1|+|p2−q2|.{\displaystyle \left|p_{1}-q_{1}\right|+\left|p_{2}-q_{2}\right|.}
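A direct transcription of the n-dimensional formula (illustrative; the function name is my own):

```python
def taxicab(p, q):
    """Taxicab (L1) distance: sum of absolute coordinate differences."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

taxicab((1, 2), (4, 6))    # |1 - 4| + |2 - 6| = 3 + 4 = 7
```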
TheL1metric was used inregression analysis, as a measure ofgoodness of fit, in 1757 byRoger Joseph Boscovich.[2]The interpretation of it as a distance between points in a geometric space dates to the late 19th century and the development ofnon-Euclidean geometries. Notably it appeared in 1910 in the works of bothFrigyes RieszandHermann Minkowski. The formalization ofLpspaces, which include taxicab geometry as a special case, is credited to Riesz.[3]In developing thegeometry of numbers,Hermann Minkowskiestablished hisMinkowski inequality, stating that these spaces definenormed vector spaces.[4]
The nametaxicab geometrywas introduced byKarl Mengerin a 1952 bookletYou Will Like Geometry, accompanying a geometry exhibit intended for the general public at theMuseum of Science and Industryin Chicago.[5]
Thought of as an additional structure layered onEuclidean space, taxicab distance depends on theorientationof the coordinate system and is changed by Euclideanrotationof the space, but is unaffected bytranslationor axis-alignedreflections. Taxicab geometry satisfies all ofHilbert's axioms(a formalization ofEuclidean geometry) except that the congruence of angles cannot be defined to precisely match the Euclidean concept, and under plausible definitions of congruent taxicab angles, theside-angle-side axiomis not satisfied as in general triangles with two taxicab-congruent sides and a taxicab-congruent angle between them are notcongruent triangles.
In anymetric space, asphereis a set of points at a fixed distance, theradius, from a specificcenterpoint. Whereas a Euclidean sphere is round and rotationally symmetric, under the taxicab distance, the shape of a sphere is across-polytope, then-dimensional generalization of aregular octahedron, whose pointsp{\displaystyle \mathbf {p} }satisfy the equation:
wherec{\displaystyle \mathbf {c} }is the center andris the radius. Pointsp{\displaystyle \mathbf {p} }on theunit sphere, a sphere of radius 1 centered at theorigin, satisfy the equationdT(p,0)=∑i=1n|pi|=1.{\textstyle d_{\text{T}}(\mathbf {p} ,\mathbf {0} )=\sum _{i=1}^{n}|p_{i}|=1.}
In two dimensional taxicab geometry, the sphere (called acircle) is asquareoriented diagonally to the coordinate axes. The image to the right shows in red the set of all points on a square grid with a fixed distance from the blue center. As the grid is made finer, the red points become more numerous, and in the limit tend to a continuous tilted square. Each side has taxicab length 2r, so thecircumferenceis 8r. Thus, in taxicab geometry, the value of the analog of the circle constantπ, the ratio of circumference todiameter, is equal to 4.
A closedball(or closeddiskin the 2-dimensional case) is a filled-in sphere, the set of points at distance less than or equal to the radius from a specific center. Forcellular automataon a square grid, a taxicabdiskis thevon Neumann neighborhoodof rangerof its center.
A circle of radiusrfor theChebyshev distance(L∞metric) on a plane is also a square with side length 2rparallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1and L∞metrics does not generalize to higher dimensions.
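The planar equivalence just noted can be checked numerically. A standard rotation-and-scaling map is (x, y) -> (x + y, x - y), since max(|dx+dy|, |dx-dy|) = |dx| + |dy|; the sample points below are arbitrary:

```python
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def rotate(p):
    # rotate by 45 degrees and scale by sqrt(2)
    return (p[0] + p[1], p[0] - p[1])

p, q = (2, -1), (-3, 4)
assert chebyshev(rotate(p), rotate(q)) == taxicab(p, q)
```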
Whenever each pair in a collection of these circles has a nonempty intersection, there exists an intersection point for the whole collection; therefore, the Manhattan distance forms aninjective metric space.
Lety=f(x){\displaystyle y=f(x)}be acontinuously differentiablefunction. Lets{\displaystyle s}be the taxicabarc lengthof thegraphoff{\displaystyle f}on some interval[a,b]{\displaystyle [a,b]}. Take apartitionof the interval into equal infinitesimal subintervals, and letΔsi{\displaystyle \Delta s_{i}}be the taxicab length of theith{\displaystyle i^{\text{th}}}subarc. Then[6]
Δsi=Δxi+Δyi=Δxi+|f(xi)−f(xi−1)|.{\displaystyle \Delta s_{i}=\Delta x_{i}+\Delta y_{i}=\Delta x_{i}+|f(x_{i})-f(x_{i-1})|.}
By themean value theorem, there exists some pointxi∗{\displaystyle x_{i}^{*}}betweenxi−1{\displaystyle x_{i-1}}andxi{\displaystyle x_{i}}such thatf(xi)−f(xi−1)=f′(xi∗)Δxi{\displaystyle f(x_{i})-f(x_{i-1})=f'(x_{i}^{*})\Delta x_{i}}.[7]Then the previous equation can be written
Δsi=Δxi+|f′(xi∗)|Δxi=Δxi(1+|f′(xi∗)|).{\displaystyle \Delta s_{i}=\Delta x_{i}+|f'(x_{i}^{*})|\Delta x_{i}=\Delta x_{i}(1+|f'(x_{i}^{*})|).}
Thens{\displaystyle s}is given as the sum of every partition ofs{\displaystyle s}on[a,b]{\displaystyle [a,b]}as they getarbitrarily small.
s=limn→∞∑i=1nΔxi(1+|f′(xi∗)|)=∫ab1+|f′(x)|dx{\displaystyle {\begin{aligned}s&=\lim _{n\to \infty }\sum _{i=1}^{n}\Delta x_{i}(1+|f'(x_{i}^{*})|)\\&=\int _{a}^{b}1+|f'(x)|\,dx\end{aligned}}}To test this, take the taxicab circle ofradiusr{\displaystyle r}centered at the origin. Its curve in the firstquadrantis given byf(x)=−x+r{\displaystyle f(x)=-x+r}whose length is
s=∫0r1+|−1|dx=2r{\displaystyle s=\int _{0}^{r}1+|-1|dx=2r}
Multiplying this value by4{\displaystyle 4}to account for the remaining quadrants gives8r{\displaystyle 8r}, which agrees with thecircumferenceof a taxicab circle.[8]Now take theEuclideancircle of radiusr{\displaystyle r}centered at the origin, which is given byf(x)=r2−x2{\displaystyle f(x)={\sqrt {r^{2}-x^{2}}}}. Its arc length in the first quadrant is given by
s=∫0r1+|−xr2−x2|dx=x+r2−x2|0r=r−(−r)=2r{\displaystyle {\begin{aligned}s&=\int _{0}^{r}1+\left|{\frac {-x}{\sqrt {r^{2}-x^{2}}}}\right|dx\\&=\left.x+{\sqrt {r^{2}-x^{2}}}\right|_{0}^{r}\\&=r-(-r)\\&=2r\end{aligned}}}
Accounting for the remaining quadrants gives4×2r=8r{\displaystyle 4\times 2r=8r}again. Therefore, thecircumferenceof the taxicab circle and theEuclideancircle in the taxicabmetricare equal.[9]In fact, for any functionf{\displaystyle f}that is monotonic anddifferentiablewith a continuousderivativeover an interval[a,b]{\displaystyle [a,b]}, the arc length off{\displaystyle f}over[a,b]{\displaystyle [a,b]}is(b−a)+∣f(b)−f(a)∣{\displaystyle (b-a)+\mid f(b)-f(a)\mid }.[10]
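The closed form just stated can be checked numerically with a fine partition (a sketch, not from the source; the step count and example curve are arbitrary choices):

```python
import math

def taxicab_arclength(f, a, b, n=10000):
    """Approximate the taxicab arc length of f over [a, b] with n subintervals."""
    step = (b - a) / n
    xs = [a + i * step for i in range(n + 1)]
    return sum(abs(xs[i + 1] - xs[i]) + abs(f(xs[i + 1]) - f(xs[i]))
               for i in range(n))

# quarter of a Euclidean unit circle, monotonic on [0, 1]
r = 1.0
quarter = taxicab_arclength(lambda x: math.sqrt(max(r * r - x * x, 0.0)), 0.0, r)
# closed form: (r - 0) + |f(r) - f(0)| = r + r = 2r
```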
Two triangles are congruent if and only if three corresponding sides are equal in distance and three corresponding angles are equal in measure. There are several theorems that guaranteetriangle congruencein Euclidean geometry, namely Angle-Angle-Side (AAS), Angle-Side-Angle (ASA), Side-Angle-Side (SAS), and Side-Side-Side (SSS). In taxicab geometry, however, only SASAS guarantees triangle congruence.[11]
Take, for example, two right isosceles taxicab triangles whose angles measure 45-90-45. The two legs of both triangles have a taxicab length 2, but thehypotenusesare not congruent. This counterexample eliminates AAS, ASA, and SAS. It also eliminates AASS, AAAS, and even ASASA. Having three congruent angles and two sides does not guarantee triangle congruence in taxicab geometry. Therefore, the only triangle congruence theorem in taxicab geometry is SASAS, where all three corresponding sides must be congruent and at least two corresponding angles must be congruent.[12]This result is mainly due to the fact that the length of a line segment depends on its orientation in taxicab geometry.
In solving anunderdetermined systemof linear equations, theregularizationterm for the parameter vector is expressed in terms of theℓ1{\displaystyle \ell _{1}}norm (taxicab geometry) of the vector.[13]This approach appears in the signal recovery framework calledcompressed sensing.
Taxicab geometry can be used to assess the differences in discrete frequency distributions. For example, inRNA splicingpositional distributions ofhexamers, which plot the probability of each hexamer appearing at each givennucleotidenear a splice site, can be compared with L1-distance. Each position distribution can be represented as a vector where each entry represents the likelihood of the hexamer starting at a certain nucleotide. A large L1-distance between the two vectors indicates a significant difference in the nature of the distributions while a small distance denotes similarly shaped distributions. This is equivalent to measuring the area between the two distribution curves because the area of each segment is the absolute difference between the two curves' likelihoods at that point. When summed together for all segments, it provides the same measure as L1-distance.[14]
https://en.wikipedia.org/wiki/Taxicab_geometry
Themean absolute percentage error(MAPE), also known asmean absolute percentage deviation(MAPD), is a measure of prediction accuracy of a forecasting method instatistics. It usually expresses the accuracy as a ratio defined by the formula:

{\displaystyle {\mbox{MAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}\left|{\frac {A_{t}-F_{t}}{A_{t}}}\right|}
whereAtis the actual value andFtis the forecast value. Their difference is divided by the actual valueAt. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted pointsn.
Mean absolute percentage error is commonly used as a loss function forregression problemsand in model evaluation, because of its very intuitive interpretation in terms of relative error.
Consider a standard regression setting in which the data are fully described by a random pairZ=(X,Y){\displaystyle Z=(X,Y)}with values inRd×R{\displaystyle \mathbb {R} ^{d}\times \mathbb {R} }, andni.i.d. copies(X1,Y1),...,(Xn,Yn){\displaystyle (X_{1},Y_{1}),...,(X_{n},Y_{n})}of(X,Y){\displaystyle (X,Y)}. Regression models aim at finding a good model for the pair, that is ameasurable functiongfromRd{\displaystyle \mathbb {R} ^{d}}toR{\displaystyle \mathbb {R} }such thatg(X){\displaystyle g(X)}is close toY.
In the classical regression setting, the closeness ofg(X){\displaystyle g(X)}toYis measured via theL2risk, also called themean squared error(MSE). In the MAPE regression context,[1]the closeness ofg(X){\displaystyle g(X)}toYis measured via the MAPE, and the aim of MAPE regressions is to find a modelgMAPE{\displaystyle g_{\text{MAPE}}}such that:
gMAPE(x)=argming∈GE[|g(X)−YY||X=x]{\displaystyle g_{\mathrm {MAPE} }(x)=\arg \min _{g\in {\mathcal {G}}}\mathbb {E} {\Biggl [}\left|{\frac {g(X)-Y}{Y}}\right||X=x{\Biggr ]}}
whereG{\displaystyle {\mathcal {G}}}is the class of models considered (e.g. linear models).
In practicegMAPE(x){\displaystyle g_{\text{MAPE}}(x)}can be estimated by theempirical risk minimizationstrategy, leading to
g^MAPE(x)=argming∈G∑i=1n|g(Xi)−YiYi|{\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\left|{\frac {g(X_{i})-Y_{i}}{Y_{i}}}\right|}
From a practical point of view, the use of the MAPE as a quality function for regression model is equivalent to doing weightedmean absolute error(MAE) regression, also known asquantile regression. This property is trivial since
g^MAPE(x)=argming∈G∑i=1nω(Yi)|g(Xi)−Yi|withω(Yi)=|1Yi|{\displaystyle {\widehat {g}}_{\text{MAPE}}(x)=\arg \min _{g\in {\mathcal {G}}}\sum _{i=1}^{n}\omega (Y_{i})\left|g(X_{i})-Y_{i}\right|{\mbox{ with }}\omega (Y_{i})=\left|{\frac {1}{Y_{i}}}\right|}
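A quick numerical confirmation (illustrative; targets and predictions are made up) that the MAPE objective equals the MAE objective weighted by w(y) = 1/|y|, as the identity above states:

```python
def mape_sum(preds, ys):
    return sum(abs((p - y) / y) for p, y in zip(preds, ys))

def weighted_mae_sum(preds, ys):
    return sum((1 / abs(y)) * abs(p - y) for p, y in zip(preds, ys))

ys = [2.0, -4.0, 5.0]       # arbitrary nonzero targets
preds = [2.5, -3.0, 7.0]    # arbitrary predictions
assert abs(mape_sum(preds, ys) - weighted_mae_sum(preds, ys)) < 1e-12
```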
As a consequence, the use of the MAPE is very easy in practice, for example using existing libraries for quantile regression allowing weights.
The use of the MAPE as a loss function for regression analysis is feasible both from a practical point of view and from a theoretical one, since the existence of an optimal model and theconsistencyof the empirical risk minimization can be proved.[1]
WMAPE(sometimes spelledwMAPE) stands for weighted mean absolute percentage error.[2]It is a measure used to evaluate the performance of regression or forecasting models. It is a variant of MAPE in which the mean absolute percent errors is treated as a weighted arithmetic mean. Most commonly the absolute percent errors are weighted by the actuals (e.g. in case of sales forecasting, errors are weighted by sales volume).[3]Effectively, this overcomes the 'infinite error' issue.[4]Its formula is:[4]wMAPE=∑i=1n(wi⋅|Ai−Fi||Ai|)∑i=1nwi=∑i=1n(|Ai|⋅|Ai−Fi||Ai|)∑i=1n|Ai|{\displaystyle {\mbox{wMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}\left(w_{i}\cdot {\tfrac {\left|A_{i}-F_{i}\right|}{|A_{i}|}}\right)}{\displaystyle \sum _{i=1}^{n}w_{i}}}={\frac {\displaystyle \sum _{i=1}^{n}\left(|A_{i}|\cdot {\tfrac {\left|A_{i}-F_{i}\right|}{|A_{i}|}}\right)}{\displaystyle \sum _{i=1}^{n}\left|A_{i}\right|}}}
Wherewi{\displaystyle w_{i}}is the weight,A{\displaystyle A}is a vector of the actual data andF{\displaystyle F}is the forecast or prediction.
However, this effectively simplifies to a much simpler formula:wMAPE=∑i=1n|Ai−Fi|∑i=1n|Ai|{\displaystyle {\mbox{wMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}\left|A_{i}-F_{i}\right|}{\displaystyle \sum _{i=1}^{n}\left|A_{i}\right|}}}
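An illustrative check (sample values are made up) that weighting the absolute percent errors by |A_i| cancels to the simpler ratio above:

```python
def wmape_weighted(actuals, forecasts):
    # weights w_i = |A_i|, as in the wMAPE definition
    num = sum(abs(a) * (abs(a - f) / abs(a)) for a, f in zip(actuals, forecasts))
    den = sum(abs(a) for a in actuals)
    return num / den

def wmape_simple(actuals, forecasts):
    # the simplified form: sum|A - F| / sum|A|
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(abs(a) for a in actuals)

A = [100.0, 50.0, 25.0]    # arbitrary actuals
F = [110.0, 40.0, 25.0]    # arbitrary forecasts
assert abs(wmape_weighted(A, F) - wmape_simple(A, F)) < 1e-12
```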
Confusingly, sometimes when people refer to wMAPE they are talking about a different model in which the numerator and denominator of the wMAPE formula above are weighted again by another set of custom weightswi{\displaystyle w_{i}}. Perhaps it would be more accurate to call this the double weighted MAPE (wwMAPE). Its formula is:wwMAPE=∑i=1nwi|Ai−Fi|∑i=1nwi|Ai|{\displaystyle {\mbox{wwMAPE}}={\frac {\displaystyle \sum _{i=1}^{n}w_{i}\left|A_{i}-F_{i}\right|}{\displaystyle \sum _{i=1}^{n}w_{i}\left|A_{i}\right|}}}
Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application,[5]and there are many studies on shortcomings and misleading results from MAPE.[6][7]
To overcome these issues with MAPE, some alternative measures have been proposed in the literature:
https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
Thesymmetric mean absolute percentage error(SMAPEorsMAPE) is an accuracy measure based on percentage (or relative) errors. It is usually defined[citation needed]as follows:

{\displaystyle {\text{SMAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}{\frac {\left|F_{t}-A_{t}\right|}{(|A_{t}|+|F_{t}|)/2}}}
whereAtis the actual value andFtis the forecast value.
Theabsolute differencebetweenAtandFtis divided by half the sum of absolute values of the actual valueAtand the forecast valueFt. The value of this calculation is summed for every fitted pointtand divided again by the number of fitted pointsn.
The earliest reference to a similar formula appears to be Armstrong (1985, p. 348), where it is called "adjustedMAPE" and is defined without the absolute values in the denominator. It was later discussed, modified, and re-proposed by Flores (1986).
Armstrong's original definition is as follows:

{\displaystyle {\text{SMAPE}}={\frac {1}{n}}\sum _{t=1}^{n}{\frac {\left|F_{t}-A_{t}\right|}{(A_{t}+F_{t})/2}}}
The problem is that it can be negative (ifAt+Ft<0{\displaystyle A_{t}+F_{t}<0}) or even undefined (ifAt+Ft=0{\displaystyle A_{t}+F_{t}=0}). Therefore, the currently accepted version of SMAPE assumes the absolute values in the denominator.
In contrast to themean absolute percentage error, SMAPE has both a lower and an upper bound. Indeed, the formula above provides a result between 0% and 200%. However, a percentage error between 0% and 100% is much easier to interpret. That is the reason why the formula below is often used in practice (i.e., no factor 0.5 in the denominator):

{\displaystyle {\text{SMAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}{\frac {\left|F_{t}-A_{t}\right|}{|A_{t}|+|F_{t}|}}}
In the above formula, ifAt=Ft=0{\displaystyle A_{t}=F_{t}=0}, the t-th term in the summation is conventionally taken to be 0, since the forecast is exactly right, even though the expression|0−0||0|+|0|{\displaystyle {\frac {|0-0|}{|0|+|0|}}}itself is undefined.
One supposed problem withSMAPEis that it is not symmetric with respect to the sign of the error term since over- and under-forecasts are not treated equally. The following example illustrates this by applying the secondSMAPEformula:
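A small worked illustration of this asymmetry (the specific numbers are my own choice): an over-forecast and an under-forecast of equal absolute size receive different penalties under the second SMAPE formula.

```python
def smape_term(a, f):
    # second SMAPE formula: no factor 0.5 in the denominator
    return abs(f - a) / (abs(a) + abs(f))

over = smape_term(100, 110)    # over-forecast by 10: 10/210, about 4.8%
under = smape_term(100, 90)    # under-forecast by 10: 10/190, about 5.3%
assert over < under            # equal-sized errors are penalized unequally
```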
However, one should only expect this type of symmetry for measures which are entirely difference-based and not relative (such as mean squared error and mean absolute deviation).
There is a third version of SMAPE, which allows measuring the direction of the bias in the data by generating a positive and a negative error on line item level. Furthermore, it is better protected against outliers and the bias effect mentioned in the previous paragraph than the two other formulas.
The formula is:
A limitation of SMAPE is that if the actual value or forecast value is 0, the error value blows up to its upper limit (200% for the first formula and 100% for the second).
Provided the data are strictly positive, a better measure of relative accuracy can be obtained based on the log of the accuracy ratio: log(Ft/At)
This measure is easier to analyze statistically and has valuable symmetry and unbiasedness properties. When used in constructing forecasting models, the resulting prediction corresponds to thegeometric mean(Tofallis, 2015).
https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error
Data dredging(also known asdata snoopingorp-hacking)[1][a]is the misuse ofdata analysisto find patterns in data that can be presented asstatistically significant, thus dramatically increasing the risk offalse positiveswhile understating it. This is done by performing manystatistical testson the data and only reporting those that come back with significant results.[2]Thus data dredging is also often a misused or misapplied form ofdata mining.
The process of data dredging involves testing multiple hypotheses using a singledata setbyexhaustively searching—perhaps for combinations of variables that might show acorrelation, and perhaps for groups of cases or observations that show differences in their mean or in their breakdown by some other variable.
Conventional tests ofstatistical significanceare based on the probability that a particular result would arise if chance alone were at work, and necessarily accept some risk ofmistaken conclusions of a certain type(mistaken rejections of thenull hypothesis). This level of risk is called thesignificance. When large numbers of tests are performed, some produce false results of this type; hence 5% of randomly chosen hypotheses might be (erroneously) reported to be statistically significant at the 5% significance level, 1% might be (erroneously) reported to be statistically significant at the 1% significance level, and so on, by chance alone. When enough hypotheses are tested, it is virtually certain that some will be reported to be statistically significant (even though this is misleading), since almost every data set with any degree of randomness is likely to contain (for example) somespurious correlations. If they are not cautious, researchers using data mining techniques can be easily misled by these results. The termp-hacking(in reference top-values) was coined in a 2014 paper by the three researchers behind the blogData Colada, which has been focusing on uncovering such problems in social sciences research.[3][4][5]
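A small simulation (illustrative, not from the source) of the point above: testing many true null hypotheses at the 5% level yields roughly 5% "significant" results by chance alone. Here each "test" checks whether the mean of pure noise is more than about 1.96 standard errors from zero:

```python
import random
import statistics

random.seed(0)
n_tests, n_obs = 2000, 50
false_positives = 0
for _ in range(n_tests):
    sample = [random.gauss(0, 1) for _ in range(n_obs)]
    # z-statistic for "mean differs from zero"; every null is true here
    z = statistics.mean(sample) / (statistics.stdev(sample) / n_obs ** 0.5)
    if abs(z) > 1.96:
        false_positives += 1
rate = false_positives / n_tests
# rate lands near 0.05 even though no real effect exists anywhere
```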
Data dredging is an example of disregarding themultiple comparisons problem. One form is when subgroups are compared without alerting the reader to the total number of subgroup comparisons examined.[6]When misused it is aquestionable research practicethat can undermine scientific integrity.
The conventionalstatistical hypothesis testingprocedure usingfrequentist probabilityis to formulate a research hypothesis, such as "people in higher social classes live longer", then collect relevant data. Lastly, a statisticalsignificance testis carried out to see how likely the results are by chance alone (also called testing against the null hypothesis).
A key point in proper statistical analysis is to test a hypothesis with evidence (data) that was not used in constructing the hypothesis. This is critical because everydata setcontains some patterns due entirely to chance. If the hypothesis is not tested on a different data set from the samestatistical population, it is impossible to assess the likelihood that chance alone would produce such patterns.
For example,flipping a coinfive times with a result of 2 heads and 3 tails might lead one to hypothesize that the coin favors tails by 3/5 to 2/5. If this hypothesis is then tested on the existing data set, it is confirmed, but the confirmation is meaningless. The proper procedure would have been to form in advance a hypothesis of what the tails probability is, and then throw the coin various times to see if the hypothesis is rejected or not. If three tails and two heads are observed, another hypothesis, that the tails probability is 3/5, could be formed, but it could only be tested by a new set of coin tosses. The statistical significance under the incorrect procedure is completely spurious—significance tests do not protect against data dredging.
Optional stopping is a practice where one collects data until some stopping criterion is reached. While it is a valid procedure, it is easily misused. The problem is that the p-value of an optionally stopped statistical test is larger than it seems. Intuitively, this is because the p-value is supposed to be the sum of the probabilities of all events at least as rare as what is observed. With optional stopping, there are even rarer events that are difficult to account for, i.e. not triggering the optional stopping rule and collecting even more data before stopping. Neglecting these events leads to a reported p-value that is too low. In fact, if the null hypothesis is true, thenanysignificance level can be reached if one is allowed to keep collecting data and stop when the desired p-value (calculated as if one had always been planning to collect exactly this much data) is obtained.[7]For a concrete example of testing for a fair coin, seep-value § Optional stopping.
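An illustrative simulation (my own construction, with arbitrary run counts and burn-in) of the optional-stopping problem: flipping a fair coin, checking a normal-approximation p-value after every flip, and stopping as soon as p < 0.05 makes "significance" far more likely than the nominal 5%.

```python
import math
import random

def p_value(heads, n):
    """Two-sided p-value from the normal approximation to Binomial(n, 1/2)."""
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(z / math.sqrt(2))

random.seed(1)
runs, max_flips, hits = 1000, 400, 0
for _ in range(runs):
    heads = 0
    for n in range(1, max_flips + 1):
        heads += random.random() < 0.5
        # peek at the p-value after every flip past a burn-in of 20,
        # stopping as soon as it dips below 0.05
        if n >= 20 and p_value(heads, n) < 0.05:
            hits += 1
            break
rate = hits / runs
# rate is far above the nominal 0.05 even though the coin is fair
```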
Or, more succinctly, the proper calculation of p-value requires accounting for counterfactuals, that is, what the experimentercouldhave done in reaction to data thatmighthave been. Accounting for what might have been is hard, even for honest researchers.[7]One benefit of preregistration is to account for all counterfactuals, allowing the p-value to be calculated correctly.[8]
The problem of early stopping is not just limited to researcher misconduct. There is often pressure to stop early if the cost of collecting data is high. Some animal ethics boards even mandate early stopping if the study obtains a significant result midway.[9]
If data is removedaftersome data analysis is already done on it, such as on the pretext of "removing outliers", then it would increase the false positive rate. Replacing "outliers" by replacement data increases the false positive rate further.[10]
If a dataset contains multiple features, then one or more of the features can be used as grouping, and potentially create a statistically significant result. For example, if a dataset of patients records their age and sex, then a researcher can consider grouping them by age and check if the illness recovery rate is correlated with age. If it does not work, then the researcher might check if it correlates with sex. If not, then perhaps it correlates with age after controlling for sex, etc. The number of possible groupings grows exponentially with the number of features.[10]
Suppose that a study of arandom sampleof people includes exactly two people with a birthday of August 7: Mary and John. Someone engaged in data dredging might try to find additional similarities between Mary and John. By going through hundreds or thousands of potential similarities between the two, each having a low probability of being true, an unusual similarity can almost certainly be found. Perhaps John and Mary are the only two people in the study who switched minors three times in college. A hypothesis, biased by data dredging, could then be "people born on August 7 have a much higher chance of switching minors more than twice in college."
The data itself taken out of context might be seen as strongly supporting that correlation, since no one with a different birthday had switched minors three times in college. However, if (as is likely) this is a spurious hypothesis, this result will most likely not bereproducible; any attempt to check if others with an August 7 birthday have a similar rate of changing minors will most likely get contradictory results almost immediately.
Bias is a systematic error in the analysis. For example, doctors directedHIVpatients at high cardiovascular risk to a particular HIV treatment,abacavir, and lower-risk patients to other drugs, preventing a simple assessment of abacavir compared to other treatments. An analysis that did not correct for this bias unfairly penalized abacavir, since its patients were more high-risk so more of them had heart attacks.[6]This problem can be very severe, for example, in theobservational study.[6][2]
Missing factors, unmeasuredconfounders, and loss to follow-up can also lead to bias.[6]By selecting papers with significantp-values, negative studies are selected against, which ispublication bias. This is also known asfile drawer bias, because less significantp-value results are left in the file drawer and never published.
Another aspect of the conditioning ofstatistical testsby knowledge of the data can be seen when machine analysis andlinear regressionare used to observe the frequency of data. A crucial step in the process is to decide whichcovariatesto include in a relationship explaining one or more other variables. There are both statistical (seestepwise regression) and substantive considerations that lead the authors to favor some of their models over others, and there is a liberal use of statistical tests. However, to discard one or more variables from an explanatory relation on the basis of the data means one cannot validly apply standard statistical procedures to the retained variables in the relation as though nothing had happened. In the nature of the case, the retained variables have had to pass some kind of preliminary test (possibly an imprecise intuitive one) that the discarded variables failed. In 1966, Selvin and Stuart compared variables retained in the model to the fish that don't fall through the net—in the sense that their effects are bound to be bigger than those that do fall through the net. Not only does this alter the performance of all subsequent tests on the retained explanatory model, but it may also introduce bias and altermean square errorin estimation.[11][12]
In meteorology, hypotheses are often formulated using weather data up to the present and tested against future weather data, which ensures that, even subconsciously, future data could not influence the formulation of the hypothesis. Of course, such a discipline necessitates waiting for new data to come in, to show the formulated theory's predictive power versus the null hypothesis. This process ensures that no one can accuse the researcher of hand-tailoring the predictive model to the data on hand, since the upcoming weather is not yet available.
As another example, suppose that observers note that a particular town appears to have a cancer cluster, but lack a firm hypothesis of why this is so. However, they have access to a large amount of demographic data about the town and surrounding area, containing measurements for the area of hundreds or thousands of different variables, mostly uncorrelated. Even if all these variables are independent of the cancer incidence rate, it is highly likely that at least one variable correlates significantly with the cancer rate across the area. While this may suggest a hypothesis, further testing using the same variables but with data from a different location is needed to confirm it. Note that a p-value of 0.01 suggests that 1% of the time a result at least that extreme would be obtained by chance; if hundreds or thousands of hypotheses (with mutually relatively uncorrelated independent variables) are tested, then one is likely to obtain a p-value less than 0.01 for many null hypotheses.
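The multiple-comparisons effect described above is easy to reproduce in simulation. The following Python sketch (with hypothetical sample sizes) tests 1,000 variables that are all independent of a pure-noise "cancer rate" and still finds roughly 1% of them significant at p < 0.01:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_areas, n_vars = 50, 1000  # hypothetical: 50 areas, 1000 unrelated variables

cancer_rate = rng.normal(size=n_areas)           # target: pure noise
covariates = rng.normal(size=(n_areas, n_vars))  # all independent of the target

# Test every variable for correlation with the cancer rate
pvals = np.array([stats.pearsonr(covariates[:, j], cancer_rate)[1]
                  for j in range(n_vars)])
print((pvals < 0.01).sum(), "of", n_vars, '"significant" correlations by chance')
```

With 1,000 independent tests at the 0.01 level, around ten spurious hits are expected even though every null hypothesis is true.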
One example is the chocolate weight loss hoax study conducted by journalist John Bohannon, who explained publicly in a Gizmodo article that the study was deliberately conducted fraudulently as a social experiment.[13] This study was widespread in many media outlets around 2015, with many people believing the claim that eating a chocolate bar every day would cause them to lose weight, against their better judgement. The study was published in the Institute of Diet and Health. According to Bohannon, the key to driving the p-value below 0.05 was taking 18 different variables into consideration when testing.
While looking for patterns in data is legitimate, applying a statistical test of significance or hypothesis test to the same data until a pattern emerges is prone to abuse. One way to construct hypotheses while avoiding data dredging is to conduct randomized out-of-sample tests. The researcher collects a data set, then randomly partitions it into two subsets, A and B. Only one subset—say, subset A—is examined for creating hypotheses. Once a hypothesis is formulated, it must be tested on subset B, which was not used to construct the hypothesis. Only where B also supports such a hypothesis is it reasonable to believe the hypothesis might be valid. (This is a simple type of cross-validation and is often termed training-test or split-half validation.)
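The split-half procedure above can be sketched as follows (hypothetical data and variable names): subset A is mined freely for the most promising variable, and the resulting hypothesis then faces a single test on subset B.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 20))   # hypothetical data set: 200 rows, 20 variables
outcome = rng.normal(size=200)

# Randomly partition into an exploration half (A) and a confirmation half (B)
idx = rng.permutation(200)
A, B = idx[:100], idx[100:]

# Explore subset A freely: pick the variable most correlated with the outcome
r_A = [abs(stats.pearsonr(data[A, j], outcome[A])[0]) for j in range(20)]
best = int(np.argmax(r_A))

# The hypothesis "variable `best` predicts the outcome" is tested once, on B only
r, p = stats.pearsonr(data[B, best], outcome[B])
print(f"variable {best}: p-value on the held-out half = {p:.3f}")
```

Because the data here are pure noise, the confirmation p-value on B is typically unremarkable, no matter how striking the pattern found in A looked.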
Another remedy for data dredging is to record the number of all significance tests conducted during the study and simply divide one's criterion for significance (alpha) by this number; this is the Bonferroni correction. However, this is a very conservative metric. A family-wise alpha of 0.05, divided in this way by 1,000 to account for 1,000 significance tests, yields a very stringent per-hypothesis alpha of 0.00005. Methods particularly useful in analysis of variance, and in constructing simultaneous confidence bands for regressions involving basis functions, are Scheffé's method and, if the researcher has in mind only pairwise comparisons, the Tukey method. To avoid the extreme conservativeness of the Bonferroni correction, more sophisticated selective inference methods are available.[14] The most common selective inference method is the use of Benjamini and Hochberg's false discovery rate controlling procedure: it is a less conservative approach that has become a popular method for control of multiple hypothesis tests.
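A minimal sketch of the two corrections discussed above, assuming a vector of p-values from independent tests (the example p-values are illustrative):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0_i when p_i < alpha / m (family-wise error control)."""
    m = len(pvals)
    return pvals < alpha / m

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure (false discovery rate control)."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # find the largest k with p_(k) <= (k/m) * q; reject the k smallest p-values
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
print(bonferroni(pvals).sum(), "rejections under Bonferroni,",
      benjamini_hochberg(pvals).sum(), "under Benjamini-Hochberg")
```

On these eight p-values Bonferroni rejects only one hypothesis while the FDR procedure rejects two, illustrating its lesser conservativeness.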
When neither approach is practical, one can make a clear distinction between data analyses that are confirmatory and analyses that are exploratory. Statistical inference is appropriate only for the former.[12]
Ultimately, the statistical significance of a test and the statistical confidence of a finding are joint properties of the data and of the method used to examine them. Thus, if someone says that a certain event has probability of 20% ± 2%, 19 times out of 20, this means that if the probability of the event is estimated by the same method used to obtain the 20% estimate, the result is between 18% and 22% with probability 0.95. No claim of statistical significance can be made by looking only at the data, without due regard to the method used to assess them.
Academic journals are increasingly shifting to the registered report format, which aims to counteract very serious issues such as data dredging and HARKing, which have made theory-testing research very unreliable. For example, Nature Human Behaviour has adopted the registered report format, as it "shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them".[15] The European Journal of Personality defines this format as follows: "In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes."[16]
Methods and results can also be made publicly available, as in the open science approach, making it yet more difficult for data dredging to take place.[17]
|
https://en.wikipedia.org/wiki/Data_dredging
|
In machine learning, feature selection is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for several reasons:
The central premise when using feature selection is that data sometimes contains features that are redundant or irrelevant, and can thus be removed without incurring much loss of information.[9] Redundancy and irrelevance are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.[10]
Feature extraction creates new features from functions of the original features, whereas feature selection finds a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (data points).
A feature selection algorithm can be seen as the combination of a search technique for proposing new feature subsets, along with an evaluation measure which scores the different feature subsets. The simplest algorithm is to test each possible subset of features, finding the one which minimizes the error rate. This is an exhaustive search of the space, and is computationally intractable for all but the smallest of feature sets. The choice of evaluation metric heavily influences the algorithm, and it is these evaluation metrics which distinguish between the three main categories of feature selection algorithms: wrappers, filters, and embedded methods.[10]
In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique. It is a greedy algorithm that adds the best feature (or deletes the worst feature) at each round. The main control issue is deciding when to stop the algorithm. In machine learning, this is typically done by cross-validation. In statistics, some criteria are optimized. This leads to the inherent problem of nesting. More robust methods have been explored, such as branch and bound and piecewise linear networks.
Subset selection evaluates a subset of features as a group for suitability. Subset selection algorithms can be broken up into wrappers, filters, and embedded methods. Wrappers use a search algorithm to search through the space of possible features and evaluate each subset by running a model on the subset. Wrappers can be computationally expensive and have a risk of overfitting to the model. Filters are similar to wrappers in the search approach, but instead of evaluating against a model, a simpler filter is evaluated. Embedded techniques are embedded in, and specific to, a model.
Many popular search approaches use greedy hill climbing, which iteratively evaluates a candidate subset of features, then modifies the subset and evaluates whether the new subset is an improvement over the old. Evaluation of the subsets requires a scoring metric that grades a subset of features. Exhaustive search is generally impractical, so at some implementor- (or operator-) defined stopping point, the subset of features with the highest score discovered up to that point is selected as the satisfactory feature subset. The stopping criterion varies by algorithm; possible criteria include: a subset score exceeds a threshold, a program's maximum allowed run time has been surpassed, etc.
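The greedy hill-climbing wrapper described above can be sketched as a forward-selection loop. The scoring metric here (negative training-set residual variance of a least-squares fit) is a placeholder assumption; in practice a cross-validated score would be used.

```python
import numpy as np

def forward_select(X, y, score, max_features=5):
    """Greedy hill climbing: each round adds the feature that most improves
    the score of the current subset; stop when no feature helps or the
    maximum subset size is reached."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        trial = [(score(X[:, selected + [j]], y), j) for j in remaining]
        s, j = max(trial)
        if s <= best_score:          # stopping criterion: no improvement
            break
        best_score = s
        selected.append(j)
        remaining.remove(j)
    return selected

# Placeholder scoring metric: negative residual variance of a least-squares fit
def ls_score(Xs, y):
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.mean((y - Xs @ beta) ** 2)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(size=100)
print(forward_select(X, y, ls_score))  # the informative features 2 and 7 come first
```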
Alternative search-based techniques are based on targeted projection pursuit, which finds low-dimensional projections of the data that score highly: the features that have the largest projections in the lower-dimensional space are then selected.
Search approaches include:
Two popular filter metrics for classification problems are correlation and mutual information, although neither are true metrics or 'distance measures' in the mathematical sense, since they fail to obey the triangle inequality and thus do not compute any actual 'distance' – they should rather be regarded as 'scores'. These scores are computed between a candidate feature (or set of features) and the desired output category. There are, however, true metrics that are a simple function of the mutual information.[30]
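As an illustration of a filter score, the mutual information between a discrete feature and the class label can be estimated directly from counts (a minimal sketch, not a production estimator):

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in nats for two discrete variables, estimated from counts."""
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
    n = len(x)
    px = {v: np.mean(np.asarray(x) == v) for v in set(x)}
    py = {v: np.mean(np.asarray(y) == v) for v in set(y)}
    return sum((c / n) * np.log((c / n) / (px[xi] * py[yi]))
               for (xi, yi), c in joint.items())

label  = [0, 0, 1, 1, 0, 0, 1, 1]
feat_a = [0, 0, 1, 1, 0, 0, 1, 1]   # identical to the label: maximal score
feat_b = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label: zero score
print(mutual_information(feat_a, label), mutual_information(feat_b, label))
```

A perfectly informative binary feature scores ln 2 ≈ 0.693 nats, while an independent one scores 0; ranking features by this score is the simplest mutual-information filter.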
Other available filter metrics include:
The choice of optimality criteria is difficult as there are multiple objectives in a feature selection task. Many common criteria incorporate a measure of accuracy, penalised by the number of features selected. Examples include the Akaike information criterion (AIC) and Mallows's Cp, which have a penalty of 2 for each added feature. AIC is based on information theory, and is effectively derived via the maximum entropy principle.[31][32]
Other criteria are the Bayesian information criterion (BIC), which uses a penalty of {\displaystyle {\sqrt {\log {n}}}} for each added feature, minimum description length (MDL) which asymptotically uses {\displaystyle {\sqrt {\log {n}}}}, Bonferroni / RIC which use {\displaystyle {\sqrt {2\log {p}}}}, maximum dependency feature selection, and a variety of new criteria that are motivated by false discovery rate (FDR), which use something close to {\displaystyle {\sqrt {2\log {\frac {p}{q}}}}}. A maximum entropy rate criterion may also be used to select the most relevant subset of features.[33]
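In their familiar likelihood-scale forms, AIC charges 2 per parameter and BIC charges log n per parameter (the text above states the penalties as equivalent thresholds on test statistics instead). A sketch for an ordinary least-squares fit with Gaussian errors, where the parameter count and the treatment of the error variance are simplifying assumptions:

```python
import numpy as np

def gaussian_ic(y, X):
    """AIC and BIC for an ordinary least-squares fit with Gaussian errors.

    The parameter count k is taken as the number of columns of X; the error
    variance is profiled out and not counted (a common simplification)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    loglik = -n / 2 * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, np.log(n) * k - 2 * loglik

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=100)])  # intercept + 1 feature
y = 2 * X[:, 1] + rng.normal(size=100)
aic, bic = gaussian_ic(y, X)
print(f"AIC = {aic:.1f}, BIC = {bic:.1f}")  # BIC penalizes harder once n > e^2
```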
Filter feature selection is a specific case of a more general paradigm called structure learning. Feature selection finds the relevant feature set for a specific target variable whereas structure learning finds the relationships between all the variables, usually by expressing these relationships as a graph. The most common structure learning algorithms assume the data is generated by a Bayesian network, and so the structure is a directed graphical model. The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian network, there is a unique Markov blanket for each node.[34]
There are different feature selection mechanisms around that utilize mutual information for scoring the different features. They usually all use the same algorithm:
The simplest approach uses the mutual information as the "derived" score.[35]
However, there are different approaches that try to reduce the redundancy between features.
Peng et al.[36] proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features. The aim is to penalise a feature's relevancy by its redundancy in the presence of the other selected features. The relevance of a feature set S for the class c is defined by the average value of all mutual information values between the individual feature fi and the class c as follows:
The redundancy of all features in the set S is the average value of all mutual information values between the feature fi and the feature fj:
The mRMR criterion is a combination of two measures given above and is defined as follows:
Suppose that there are n full-set features. Let xi be the set membership indicator function for feature fi, so that xi = 1 indicates presence and xi = 0 indicates absence of the feature fi in the globally optimal feature set. Let {\displaystyle c_{i}=I(f_{i};c)} and {\displaystyle a_{ij}=I(f_{i};f_{j})}. The above may then be written as an optimization problem:
The mRMR algorithm is an approximation of the theoretically optimal maximum-dependency feature selection algorithm that maximizes the mutual information between the joint distribution of the selected features and the classification variable. As mRMR approximates the combinatorial estimation problem with a series of much smaller problems, each of which only involves two variables, it thus uses pairwise joint probabilities which are more robust. In certain situations the algorithm may underestimate the usefulness of features as it has no way to measure interactions between features which can increase relevancy. This can lead to poor performance[35] when the features are individually useless, but are useful when combined (a pathological case is found when the class is a parity function of the features). Overall the algorithm is more efficient (in terms of the amount of data required) than the theoretically optimal max-dependency selection, yet produces a feature set with little pairwise redundancy.
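The incremental mRMR selection rule (relevance minus mean redundancy with the already-selected features) can be sketched as follows, with hypothetical mutual-information scores supplied as inputs:

```python
import numpy as np

def mrmr(relevance, redundancy, k):
    """Greedy mRMR: relevance[i] = I(f_i; c), redundancy[i][j] = I(f_i; f_j).
    Each step picks the feature maximising relevance minus the mean
    redundancy with the features already selected."""
    n = len(relevance)
    selected = [int(np.argmax(relevance))]   # start with the most relevant feature
    while len(selected) < k:
        candidates = [i for i in range(n) if i not in selected]
        def criterion(i):
            return relevance[i] - np.mean([redundancy[i][j] for j in selected])
        selected.append(max(candidates, key=criterion))
    return selected

# Toy scores (hypothetical mutual-information values)
rel = np.array([0.9, 0.8, 0.1])
red = np.array([[0.0, 0.85, 0.0],    # features 0 and 1 are nearly duplicates
                [0.85, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
print(mrmr(rel, red, 2))  # picks 0, then prefers 2 over the redundant 1
```

Feature 1 is almost as relevant as feature 0 but is heavily penalised for duplicating it, so the weakly relevant but non-redundant feature 2 is chosen second.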
mRMR is an instance of a large class of filter methods which trade off between relevancy and redundancy in different ways.[35][37]
mRMR is a typical example of an incremental greedy strategy for feature selection: once a feature has been selected, it cannot be deselected at a later stage. While mRMR could be optimized using floating search to reduce some features, it might also be reformulated as a global quadratic programming optimization problem as follows:[38]
where {\displaystyle F_{n\times 1}=[I(f_{1};c),\ldots ,I(f_{n};c)]^{T}} is the vector of feature relevancy assuming there are n features in total, {\displaystyle H_{n\times n}=[I(f_{i};f_{j})]_{i,j=1\ldots n}} is the matrix of feature pairwise redundancy, and {\displaystyle \mathbf {x} _{n\times 1}} represents relative feature weights. QPFS is solved via quadratic programming. It has recently been shown that QPFS is biased towards features with smaller entropy,[39] due to its placement of the feature self-redundancy term {\displaystyle I(f_{i};f_{i})} on the diagonal of H.
Another score derived for the mutual information is based on the conditional relevancy:[39]
where {\displaystyle Q_{ii}=I(f_{i};c)} and {\displaystyle Q_{ij}=(I(f_{i};c|f_{j})+I(f_{j};c|f_{i}))/2,\;i\neq j}.
An advantage of SPECCMI is that it can be solved simply via finding the dominant eigenvector of Q; thus, it is very scalable. SPECCMI also handles second-order feature interaction.
In a study of different scores, Brown et al.[35] recommended the joint mutual information[40] as a good score for feature selection. The score tries to find the feature that adds the most new information to the already selected features, in order to avoid redundancy. The score is formulated as follows:
The score uses the conditional mutual information and the mutual information to estimate the redundancy between the already selected features ({\displaystyle f_{j}\in S}) and the feature under investigation ({\displaystyle f_{i}}).
For high-dimensional and small-sample data (e.g., dimensionality > 10^5 and the number of samples < 10^3), the Hilbert-Schmidt Independence Criterion Lasso (HSIC Lasso) is useful.[41] The HSIC Lasso optimization problem is given as
where {\displaystyle {\mbox{HSIC}}(f_{k},c)={\mbox{tr}}({\bar {\mathbf {K} }}^{(k)}{\bar {\mathbf {L} }})} is a kernel-based independence measure called the (empirical) Hilbert-Schmidt independence criterion (HSIC), {\displaystyle {\mbox{tr}}(\cdot )} denotes the trace, {\displaystyle \lambda } is the regularization parameter, {\displaystyle {\bar {\mathbf {K} }}^{(k)}=\mathbf {\Gamma } \mathbf {K} ^{(k)}\mathbf {\Gamma } } and {\displaystyle {\bar {\mathbf {L} }}=\mathbf {\Gamma } \mathbf {L} \mathbf {\Gamma } } are input and output centered Gram matrices, {\displaystyle K_{i,j}^{(k)}=K(u_{k,i},u_{k,j})} and {\displaystyle L_{i,j}=L(c_{i},c_{j})} are Gram matrices, {\displaystyle K(u,u')} and {\displaystyle L(c,c')} are kernel functions, {\displaystyle \mathbf {\Gamma } =\mathbf {I} _{m}-{\frac {1}{m}}\mathbf {1} _{m}\mathbf {1} _{m}^{T}} is the centering matrix, {\displaystyle \mathbf {I} _{m}} is the m-dimensional identity matrix (m: the number of samples), {\displaystyle \mathbf {1} _{m}} is the m-dimensional vector with all ones, and {\displaystyle \|\cdot \|_{1}} is the {\displaystyle \ell _{1}}-norm. HSIC always takes a non-negative value, and is zero if and only if two random variables are statistically independent when a universal reproducing kernel such as the Gaussian kernel is used.
The HSIC Lasso can be written as
where {\displaystyle \|\cdot \|_{F}} is the Frobenius norm. The optimization problem is a Lasso problem, and thus it can be efficiently solved with a state-of-the-art Lasso solver such as the dual augmented Lagrangian method.
The correlation feature selection (CFS) measure evaluates subsets of features on the basis of the following hypothesis: "Good feature subsets contain features highly correlated with the classification, yet uncorrelated to each other".[42][43] The following equation gives the merit of a feature subset S consisting of k features:
Here, {\displaystyle {\overline {r_{cf}}}} is the average value of all feature-classification correlations, and {\displaystyle {\overline {r_{ff}}}} is the average value of all feature-feature correlations. The CFS criterion is defined as follows:
The {\displaystyle r_{cf_{i}}} and {\displaystyle r_{f_{i}f_{j}}} variables are referred to as correlations, but are not necessarily Pearson's correlation coefficient or Spearman's ρ. Hall's dissertation uses neither of these, but uses three different measures of relatedness: minimum description length (MDL), symmetrical uncertainty, and relief.
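The CFS merit of a k-feature subset reduces to a one-line function of the two average correlations, Merit_S = k·r̄_cf / sqrt(k + k(k−1)·r̄_ff); the numbers below are illustrative:

```python
import numpy as np

def cfs_merit(r_cf, r_ff, k):
    """CFS merit of a k-feature subset, from the mean feature-class
    correlation r_cf and the mean feature-feature correlation r_ff."""
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# Subsets whose features are relevant but mutually uncorrelated score highest
print(cfs_merit(0.6, 0.1, 5))   # relevant, nearly independent features
print(cfs_merit(0.6, 0.9, 5))   # same relevance, highly redundant features
```

Holding relevance fixed, increasing the average inter-feature correlation lowers the merit, which is exactly the hypothesis the CFS measure encodes.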
Let xi be the set membership indicator function for feature fi; then the above can be rewritten as an optimization problem:
The combinatorial problems above are, in fact, mixed 0–1 linear programming problems that can be solved by using branch-and-bound algorithms.[44]
The features from a decision tree or a tree ensemble are shown to be redundant. A recent method called regularized trees[45] can be used for feature subset selection. Regularized trees penalize using a variable similar to the variables selected at previous tree nodes for splitting the current node. Regularized trees only need to build one tree model (or one tree ensemble model) and thus are computationally efficient.
Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus require little data preprocessing such as normalization. Regularized random forest (RRF)[46] is one type of regularized trees. The guided RRF is an enhanced RRF which is guided by the importance scores from an ordinary random forest.
A metaheuristic is a general description of an algorithm dedicated to solving difficult (typically NP-hard) optimization problems for which there are no classical solving methods. Generally, a metaheuristic is a stochastic algorithm tending to reach a global optimum. There are many metaheuristics, from a simple local search to a complex global search algorithm.
The feature selection methods are typically presented in three classes based on how they combine the selection algorithm and the model building.
Filter type methods select variables regardless of the model. They are based only on general features like the correlation with the variable to predict. Filter methods suppress the least interesting variables. The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly effective in computation time and robust to overfitting.[47]
Filter methods tend to select redundant variables when they do not consider the relationships between variables. However, more elaborate filter methods try to minimize this problem by removing variables highly correlated to each other, such as the Fast Correlation Based Filter (FCBF) algorithm.[48]
Wrapper methods evaluate subsets of variables, which allows them, unlike filter approaches, to detect possible interactions amongst variables.[49] The two main disadvantages of these methods are:
Embedded methods have recently been proposed that try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm.[50]
This is a survey of the application of feature selection metaheuristics recently used in the literature. This survey was realized by J. Hammon in her 2013 thesis.[47]
Some learning algorithms perform feature selection as part of their overall operation. These include:
|
https://en.wikipedia.org/wiki/Feature_selection
|
In statistical analysis, Freedman's paradox,[1][2] named after David Freedman, is a problem in model selection whereby predictor variables with no relationship to the dependent variable can pass tests of significance – both individually via a t-test, and jointly via an F-test for the significance of the regression. Freedman demonstrated (through simulation and asymptotic calculation) that this is a common occurrence when the number of variables is similar to the number of data points.
Specifically, if the dependent variable and k regressors are independent normal variables, and there are n observations, then as k and n jointly go to infinity in the ratio k/n = ρ,
More recently, new information-theoretic estimators have been developed in an attempt to reduce this problem,[3] in addition to the accompanying issue of model selection bias,[4] whereby estimators of predictor variables that have a weak relationship with the response variable are biased.
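Freedman's two-stage simulation is straightforward to reproduce: screen pure-noise regressors at a lenient threshold, refit on the survivors, and the overall F-test of the refitted regression is typically highly significant. A sketch (ordinary least squares on centered data; the degrees-of-freedom conventions follow the with-intercept model):

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Two-sided t-test p-values for each coefficient of a least-squares fit."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), df=n - k)

rng = np.random.default_rng(4)
n, k = 100, 50                       # number of variables similar to data points
y = rng.normal(size=n)               # dependent variable: pure noise
X = rng.normal(size=(n, k))          # regressors: independent of y
y, X = y - y.mean(), X - X.mean(axis=0)

# Stage 1: screen, keeping regressors significant at a lenient 0.25 level
keep = ols_pvalues(X, y) < 0.25
Xs = X[:, keep]
ks = Xs.shape[1]

# Stage 2: refit on the survivors and F-test the regression as a whole
beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
rss = np.sum((y - Xs @ beta) ** 2)
tss = y @ y
F = ((tss - rss) / ks) / (rss / (n - ks - 1))
print(f"kept {ks} noise regressors; overall F-test p = {stats.f.sf(F, ks, n - ks - 1):.2g}")
```

Because the screening step is ignored by the second-stage test, the refitted regression looks far more significant than it has any right to be.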
|
https://en.wikipedia.org/wiki/Freedman%27s_paradox
|
The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.
In assessing whether a given distribution is suited to a data-set, the following tests and their underlying measures of fit can be used:
In regression analysis, more specifically regression validation, the following topics relate to goodness of fit:
The following are examples that arise in the context of categorical data.
Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:
{\displaystyle \chi ^{2}=\sum _{i=1}^{n}{\frac {(O_{i}-E_{i})^{2}}{E_{i}}}} where:
The expected frequency is calculated by: {\displaystyle E_{i}=(F(Y_{u})-F(Y_{l}))\,N} where:
The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (k − c) degrees of freedom, where k is the number of non-empty bins and c is the number of estimated parameters (including location, scale, and shape parameters) for the distribution plus one. For example, for a 3-parameter Weibull distribution, c = 4.
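As a concrete illustration, the test can be applied to hypothetical counts from 120 rolls of a die, with SciPy doing the tail-probability lookup:

```python
from scipy import stats

# Observed counts from 120 hypothetical die rolls; a fair die expects 20 per face
observed = [18, 21, 24, 16, 22, 19]
expected = [20] * 6

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
# k = 6 non-empty bins and no estimated parameters, so k - 1 = 5 degrees of freedom
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

Here chi-square is 42/20 = 2.1 on 5 degrees of freedom, giving a large p-value, so these counts are entirely consistent with a fair die.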
A binomial experiment is a sequence of independent trials in which the trials can result in one of two outcomes, success or failure. There are n trials each with probability of success, denoted by p. Provided that npi ≫ 1 for every i (where i = 1, 2, ..., k), then
{\displaystyle \chi ^{2}=\sum _{i=1}^{k}{\frac {(N_{i}-np_{i})^{2}}{np_{i}}}=\sum _{\mathrm {all\ bins} }{\frac {(\mathrm {O} -\mathrm {E} )^{2}}{\mathrm {E} }}.}
This has approximately a chi-square distribution with k − 1 degrees of freedom. The fact that there are k − 1 degrees of freedom is a consequence of the restriction {\textstyle \sum N_{i}=n}. We know there are k observed bin counts; however, once any k − 1 are known, the remaining one is uniquely determined. Basically, one can say there are only k − 1 freely determined bin counts, thus k − 1 degrees of freedom.
G-tests are likelihood-ratio tests of statistical significance that are increasingly being used in situations where Pearson's chi-square tests were previously recommended.[7]
The general formula for G is
where {\textstyle O_{i}} and {\textstyle E_{i}} are the same as for the chi-square test, {\textstyle \ln } denotes the natural logarithm, and the sum is taken over all non-empty bins. Furthermore, the total observed count should be equal to the total expected count: {\displaystyle \sum _{i}O_{i}=\sum _{i}E_{i}=N} where {\textstyle N} is the total number of observations.
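The formula for G can be computed directly and checked against SciPy's power-divergence family, here on hypothetical counts from 120 die rolls:

```python
import numpy as np
from scipy import stats

observed = np.array([18, 21, 24, 16, 22, 19])
expected = np.full(6, 20.0)

# Direct computation: G = 2 * sum(O_i * ln(O_i / E_i))
G = 2 * np.sum(observed * np.log(observed / expected))

# Equivalent computation via SciPy's power-divergence family
G2, p = stats.power_divergence(observed, expected, lambda_="log-likelihood")
print(f"G = {G:.3f}, p = {p:.3f}")
```

For counts this close to their expectations, G is numerically close to the corresponding Pearson chi-square statistic, as the likelihood-ratio and Pearson statistics agree asymptotically.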
G-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.[8]
|
https://en.wikipedia.org/wiki/Goodness_of_fit
|
In probability theory and related fields, the life-time of correlation measures the timespan over which there is appreciable autocorrelation or cross-correlation in stochastic processes.
The correlation coefficient ρ, expressed as an autocorrelation function or cross-correlation function, depends on the lag-time between the times being considered. Typically such functions, ρ(t), decay to zero with increasing lag-time, but they can assume values across all levels of correlations: strong and weak, and positive and negative as in the table.
The life-time of a correlation is defined as the length of time during which the correlation coefficient remains at the strong level.[1] The durability of a correlation is determined by the signal (the strong level of correlation is separated from the weak and negative levels). The mean life-time of correlation can measure how the durability of a correlation depends on the window width (the window is the length of the time series used to calculate the correlation).
|
https://en.wikipedia.org/wiki/Life-time_of_correlation
|
Researcher degrees of freedom is a concept referring to the inherent flexibility involved in the process of designing and conducting a scientific experiment, and in analyzing its results. The term reflects the fact that researchers can choose between multiple ways of collecting and analyzing data, and these decisions can be made either arbitrarily or because they, unlike other possible choices, produce a positive and statistically significant result.[1] Researcher degrees of freedom have positives such as affording the ability to look at nature from different angles, allowing new discoveries and hypotheses to be generated.[2][3][4] However, researcher degrees of freedom can lead to data dredging and other questionable research practices where the different interpretations and analyses are taken for granted.[5][6] Their widespread use represents an inherent methodological limitation in scientific research, and contributes to an inflated rate of false-positive findings.[1] They can also lead to overestimated effect sizes.[7]
Though the concept of researcher degrees of freedom has mainly been discussed in the context of psychology, it can affect any scientific discipline.[1][8] Like publication bias, the existence of researcher degrees of freedom has the potential to lead to an inflated degree of funnel plot asymmetry.[9] It is also a potential explanation for p-hacking, as researchers have so many degrees of freedom to draw on, especially in the social and behavioral sciences. Multiverse analysis is a method that helps bring these degrees of freedom to light. Studies with smaller sample sizes are more susceptible to the biasing influence of researcher degrees of freedom.[10]
Steegen et al. (2016) showed how, starting from a single raw data set, applying different reasonable data processing decisions can give rise to a multitude of processed data sets (called the data multiverse), often leading to different statistical results.[11] Wicherts et al. (2016) provided a list of 34 degrees of freedom (DFs) researchers have when conducting psychological research. The DFs listed span every stage of the research process, from formulating a hypothesis to the reporting of results. They include conducting exploratory, hypothesis-free research, which the authors note "...pervades many of the researcher DFs that we describe below in the later phases of the study." Other DFs listed in this paper include the creation of multiple manipulated independent variables and the measurement of additional variables that may be selected for analysis later on.[7]
|
https://en.wikipedia.org/wiki/Researcher_degrees_of_freedom
|
In philosophy, Occam's razor (also spelled Ockham's razor or Ocham's razor; Latin: novacula Occami) is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony (Latin: lex parsimoniae). Attributed to William of Ockham, a 14th-century English philosopher and theologian, it is frequently cited as Entia non sunt multiplicanda praeter necessitatem, which translates as "Entities must not be multiplied beyond necessity",[1][2] although Occam never used these exact words. Popularly, the principle is sometimes paraphrased as "of two competing theories, the simpler explanation of an entity is to be preferred."[3]
This philosophical razor advocates that, when presented with competing hypotheses about the same prediction and both hypotheses have equal explanatory power, one should prefer the hypothesis that requires the fewest assumptions;[4] it is not meant to be a way of choosing between hypotheses that make different predictions. Similarly, in science, Occam's razor is used as an abductive heuristic in the development of theoretical models rather than as a rigorous arbiter between candidate models.[5][6]
The phrase Occam's razor did not appear until a few centuries after William of Ockham's death in 1347. Libert Froidmont, in his 1649 Philosophia Christiana de Anima (On Christian Philosophy of the Soul), gives him credit for the phrase, speaking of "novacula occami".[7] Ockham did not invent this principle, but its fame—and its association with him—may be due to the frequency and effectiveness with which he used it.[8] Ockham stated the principle in various ways, but the most popular version, "Entities are not to be multiplied without necessity" (Non sunt multiplicanda entia sine necessitate), was formulated by the Irish Franciscan philosopher John Punch in his 1639 commentary on the works of Duns Scotus.[9]
The origins of what has come to be known as Occam's razor are traceable to the works of earlier philosophers such asJohn Duns Scotus(1265–1308),Robert Grosseteste(1175–1253),Maimonides(Moses ben-Maimon, 1138–1204), and evenAristotle(384–322 BC).[10][11]Aristotle writes in hisPosterior Analytics, "We may assume the superiorityceteris paribus[other things being equal] of the demonstration which derives from fewer postulates or hypotheses."Ptolemy(c.AD 90– c.168) stated, "We consider it a good principle to explain the phenomena by the simplest hypothesis possible."[12]
Phrases such as "It is vain to do with more what can be done with fewer" and "A plurality is not to be posited without necessity" were commonplace in 13th-centuryscholasticwriting.[12]Robert Grosseteste, inCommentary on[Aristotle's]the Posterior Analytics Books(Commentarius in Posteriorum Analyticorum Libros) (c.1217–1220), declares: "That is better and more valuable which requires fewer, other circumstances being equal... For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal."[13]
TheSumma TheologicaofThomas Aquinas(1225–1274) states that "it is superfluous to suppose that what can be accounted for by a few principles has been produced by many." Aquinas uses this principle to construct an objection toGod's existence, an objection that he in turn answers and refutes generally (cf.quinque viae), and specifically, through an argument based oncausality.[14]Hence, Aquinas acknowledges the principle that today is known as Occam's razor, but prefers causal explanations to other simple explanations (cf. alsoCorrelation does not imply causation).
William of Ockham(circa1287–1347) was an English Franciscan friar andtheologian, an influential medieval philosopher and anominalist. His popular fame as a great logician rests chiefly on the maxim attributed to him and known as Occam's razor. The termrazorrefers to distinguishing between two hypotheses either by "shaving away" unnecessary assumptions or cutting apart two similar conclusions.
While it has been claimed that Occam's razor is not found in any of William's writings,[15]one can cite statements such asNumquam ponenda est pluralitas sine necessitate("Plurality must never be posited without necessity"), which occurs in his theological work on theSentences of Peter Lombard(Quaestiones et decisiones in quattuor libros Sententiarum Petri Lombardi; ed. Lugd., 1495, i, dist. 27, qu. 2, K).
Nevertheless, the precise words sometimes attributed to William of Ockham,Entia non sunt multiplicanda praeter necessitatem(Entities must not be multiplied beyond necessity),[16]are absent in his extant works;[17]this particular phrasing comes fromJohn Punch,[18]who described the principle as a "common axiom" (axioma vulgare) of the Scholastics.[9]William of Ockham himself seems to restrict the operation of this principle in matters pertaining to miracles and God's power, considering a plurality of miracles possible in theEucharist[further explanation needed]simply because it pleases God.[12]
This principle is sometimes phrased asPluralitas non est ponenda sine necessitate("Plurality should not be posited without necessity").[19]In hisSumma Totius Logicae, i. 12, William of Ockham cites the principle of economy,Frustra fit per plura quod potest fieri per pauciora("It is futile to do with more things that which can be done with fewer"; Thorburn, 1918, pp. 352–53;Knealeand Kneale, 1962, p. 243.)
To quoteIsaac Newton, "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes."[20][21]In the sentencehypotheses non fingo, Newton affirms the success of this approach.
Bertrand Russelloffers a particular version of Occam's razor: "Whenever possible, substitute constructions out of known entities for inferences to unknown entities."[22]
Around 1960,Ray Solomonofffounded thetheory of universal inductive inference, the theory of prediction based on observations – for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. This theory is a mathematical formalization of Occam's razor.[23][24][25]
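Solomonoff's actual universal prior sums over all programs and is uncomputable, but its spirit can be sketched in a toy predictor: restrict the hypothesis class to repeating binary patterns and weight each hypothesis by 2 to the minus its length, so shorter (simpler) patterns dominate. Everything below (the hypothesis class, the `predict_next` helper, the cap `max_period`) is an invented illustration, not Solomonoff's construction:

```python
from itertools import product

def predict_next(observed, max_period=4):
    """Toy Occam-weighted predictor: hypotheses are repeating binary
    patterns, each weighted 2**(-length); hypotheses inconsistent with
    the observed prefix are discarded."""
    weights = {"0": 0.0, "1": 0.0}
    for period in range(1, max_period + 1):
        for pattern in product("01", repeat=period):
            hyp = "".join(pattern) * (len(observed) // period + 2)
            if hyp.startswith(observed):          # consistent with the data
                weights[hyp[len(observed)]] += 2.0 ** (-period)
    total = sum(weights.values())
    return {symbol: w / total for symbol, w in weights.items()}

print(predict_next("010101"))  # {'0': 1.0, '1': 0.0}
```

Here the period-2 pattern "01" and the period-4 pattern "0101" both fit the data and both predict "0", with the shorter one contributing four times the weight, mirroring the razor's preference for the simpler consistent hypothesis.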
Another technical approach to Occam's razor is ontological parsimony.[26] Parsimony means spareness and is also referred to as the Rule of Simplicity. This is considered a strong version of Occam's razor.[27][28] A variation used in medicine is called the "Zebra": a physician should reject an exotic medical diagnosis when a more commonplace explanation is more likely, derived from Theodore Woodward's dictum "When you hear hoofbeats, think of horses, not zebras".[29]
Ernst Mach formulated the stronger version of Occam's razor in physics, which he called the Principle of Economy, stating: "Scientists must use the simplest means of arriving at their results and exclude everything not perceived by the senses."[30]
This principle goes back at least as far as Aristotle, who wrote "Nature operates in the shortest way possible."[27]The idea of parsimony or simplicity in deciding between theories, though not the intent of the original expression of Occam's razor, has been assimilated into common culture as the widespread layman's formulation that "the simplest explanation is usually the correct one."[27]
Prior to the 20th century, it was a commonly held belief that nature itself was simple and that simpler hypotheses about nature were thus more likely to be true. This notion was deeply rooted in the aesthetic value that simplicity holds for human thought, and the justifications presented for it often drew from theology.[clarification needed] Thomas Aquinas made this argument in the 13th century, writing, "If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices."[31]
Beginning in the 20th century, epistemological justifications based on induction, logic, pragmatism, and especially probability theory have become more popular among philosophers.[7]
Occam's razor has gained strong empirical support in helping to converge on better theories (see the Uses section below for some examples).
In the related concept of overfitting, excessively complex models are affected by statistical noise (a problem also known as the bias–variance tradeoff), whereas simpler models may capture the underlying structure better and may thus have better predictive performance. It is, however, often difficult to deduce which part of the data is noise (cf. model selection, test set, minimum description length, Bayesian inference, etc.).
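The overfitting point can be made concrete with a small invented dataset drawn from a roughly linear rule: a one-parameter line generalizes to held-out points, while an over-complex "model" that simply memorizes the training set achieves zero training error but large test error. The data, the memorizing model, and the 95%/noise figures are all hypothetical, chosen only to illustrate the tradeoff:

```python
# train/test pairs generated from y = 2x plus fixed "noise"
train = [(1, 2.1), (2, 3.8), (3, 6.3), (4, 7.9)]
test  = [(5, 10.1), (6, 11.8)]

# simple model: one parameter, least-squares slope through the origin
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def simple(x):
    return slope * x

# over-complex model: memorizes training data (zero training error),
# predicting the y of the nearest training x
def complex_model(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(simple, train), mse(complex_model, train))  # complex wins on train
print(mse(simple, test), mse(complex_model, test))    # simple wins on test
```

The memorizing model fits every training point exactly yet extrapolates badly, which is exactly the pattern the bias–variance tradeoff describes.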
The razor's statement that "other things being equal, simpler explanations are generally better than more complex ones" is amenable to empirical testing. Another interpretation of the razor's statement would be that "simpler hypotheses are generally better than the complex ones". The procedure to test the former interpretation would compare the track records of simple and comparatively complex explanations. If one accepts the first interpretation, the validity of Occam's razor as a tool would then have to be rejected if the more complex explanations were more often correct than the less complex ones (while the converse would lend support to its use). If the latter interpretation is accepted, the validity of Occam's razor as a tool could possibly be accepted if the simpler hypotheses led to correct conclusions more often than not.
Even if some increases in complexity are sometimes necessary, there still remains a justified general bias toward the simpler of two competing explanations. To understand why, consider that for each accepted explanation of a phenomenon, there is always an infinite number of possible, more complex, and ultimately incorrect, alternatives. This is so because one can always burden a failing explanation with an ad hoc hypothesis. Ad hoc hypotheses are justifications that prevent theories from being falsified.
For example, if a man accused of breaking a vase makes supernatural claims that leprechauns were responsible for the breakage, a simple explanation might be that the man did it, but ongoing ad hoc justifications (e.g., "... and that's not me breaking it on the film; they tampered with that, too") could successfully prevent complete disproof. This endless supply of elaborate competing explanations, called saving hypotheses, cannot be technically ruled out – except by using Occam's razor.[32][33][34]
Any more complex theory might still possibly be true. A study of the predictive validity of Occam's razor found 32 published papers that included 97 comparisons of economic forecasts from simple and complex forecasting methods. None of the papers provided a balance of evidence that complexity of method improved forecast accuracy. In the 25 papers with quantitative comparisons, complexity increased forecast errors by an average of 27 percent.[35]
One justification of Occam's razor is a direct result of basic probability theory. By definition, all assumptions introduce possibilities for error; if an assumption does not improve the accuracy of a theory, its only effect is to increase the probability that the overall theory is wrong.
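The arithmetic behind this point is simple: if each independent assumption holds with probability p < 1, a theory resting on k such assumptions is correct with probability at most p^k, which decays quickly as assumptions accumulate. A quick sketch (the 95% figure is arbitrary):

```python
def prior_correct(k, p=0.95):
    """Probability that a theory is right if it rests on k independent
    assumptions, each of which is true with probability p."""
    return p ** k

# even fairly safe assumptions erode confidence multiplicatively
for k in (1, 3, 10, 20):
    print(k, round(prior_correct(k), 3))  # 0.95, 0.857, 0.599, 0.358
```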
There have also been other attempts to derive Occam's razor from probability theory, including notable attempts made by Harold Jeffreys and E. T. Jaynes. The probabilistic (Bayesian) basis for Occam's razor is elaborated by David J. C. MacKay in chapter 28 of his book Information Theory, Inference, and Learning Algorithms,[36] where he emphasizes that a prior bias in favor of simpler models is not required.
William H. Jefferys and James O. Berger (1991) generalize and quantify the original formulation's "assumptions" concept as the degree to which a proposition is unnecessarily accommodating to possible observable data.[37] They state, "A hypothesis with fewer adjustable parameters will automatically have an enhanced posterior probability, due to the fact that the predictions it makes are sharp."[37] The use of "sharp" here is not only a tongue-in-cheek reference to the idea of a razor, but also indicates that such predictions are more accurate than competing predictions. The model they propose balances the precision of a theory's predictions against their sharpness, preferring theories that sharply make correct predictions over theories that accommodate a wide range of other possible results. This, again, reflects the mathematical relationship between key concepts in Bayesian inference (namely marginal probability, conditional probability, and posterior probability).
The bias–variance tradeoff is a framework that incorporates the Occam's razor principle in its balance between overfitting (associated with lower bias but higher variance) and underfitting (associated with lower variance but higher bias).[38]
Karl Popper argues that a preference for simple theories need not appeal to practical or aesthetic considerations. Our preference for simplicity may instead be justified by his falsifiability criterion: we prefer simpler theories to more complex ones "because their empirical content is greater; and because they are better testable".[39] The idea here is that a simple theory applies to more cases than a more complex one, and is thus more easily falsifiable. This is again comparing a simple theory to a more complex theory where both explain the data equally well.
The philosopher of science Elliott Sober once argued along the same lines as Popper, tying simplicity with "informativeness": the simplest theory is the more informative, in the sense that it requires less information to answer a question.[40] He has since rejected this account of simplicity, purportedly because it fails to provide an epistemic justification for simplicity. He now believes that simplicity considerations (and considerations of parsimony in particular) do not count unless they reflect something more fundamental. Philosophers, he suggests, may have made the error of hypostatizing simplicity (i.e., endowing it with a sui generis existence), when it has meaning only when embedded in a specific context (Sober 1992). If we fail to justify simplicity considerations on the basis of the context in which we use them, we may have no non-circular justification: "Just as the question 'why be rational?' may have no non-circular answer, the same may be true of the question 'why should simplicity be considered in evaluating the plausibility of hypotheses?'"[41]
Richard Swinburne argues for simplicity on logical grounds:
... the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth.
According to Swinburne, since our choice of theory cannot be determined by data (see Underdetermination and the Duhem–Quine thesis), we must rely on some criterion to determine which theory to use. Since it is absurd to have no logical method for settling on one hypothesis amongst an infinite number of equally data-compliant hypotheses, we should choose the simplest theory: "Either science is irrational [in the way it judges theories and predictions probable] or the principle of simplicity is a fundamental synthetic a priori truth."[42]
Ludwig Wittgenstein also invokes the razor, and the related concept of simplicity, at several points in the Tractatus Logico-Philosophicus.
In science, Occam's razor is used as a heuristic to guide scientists in developing theoretical models rather than as an arbiter between published models.[5][6] In physics, parsimony was an important heuristic in the development and application of the principle of least action by Pierre Louis Maupertuis and Leonhard Euler,[43] in Albert Einstein's formulation of special relativity,[44][45] and in the development of quantum mechanics by Max Planck, Werner Heisenberg and Louis de Broglie.[6][46]
In chemistry, Occam's razor is often an important heuristic when developing a model of a reaction mechanism.[47][48] Although it is useful as a heuristic in developing models of reaction mechanisms, it has been shown to fail as a criterion for selecting among some selected published models.[6] In this context, Einstein himself expressed caution when he formulated Einstein's Constraint: "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience."[49][50][51] An often-quoted version of this constraint (which cannot be verified as posited by Einstein himself)[52] reduces this to "Everything should be kept as simple as possible, but not simpler."
In the scientific method, Occam's razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives. Since failing explanations can always be burdened with ad hoc hypotheses to prevent them from being falsified, simpler theories are preferable to more complex ones because they tend to be more testable.[53][54][55] As a logical principle, Occam's razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often support more complex theories than do existing data. Science prefers the simplest explanation that is consistent with the data available at a given time, but the simplest explanation may be ruled out as new data become available.[5][54] That is, science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than in favoring one theory over another based merely on philosophical principles.[53][54][55]
When scientists use the idea of parsimony, it has meaning only in a very specific context of inquiry. Several background assumptions are required for parsimony to connect with plausibility in a particular research problem.[clarification needed]The reasonableness of parsimony in one research context may have nothing to do with its reasonableness in another. It is a mistake to think that there is a single global principle that spans diverse subject matter.[55]
It has been suggested that Occam's razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. Most of the time, however, Occam's razor is a conservative tool, cutting out "crazy, complicated constructions" and assuring "that hypotheses are grounded in the science of the day", thus yielding "normal" science: models of explanation and prediction.[6] There are, however, notable exceptions where Occam's razor turns a conservative scientist into a reluctant revolutionary. For example, Max Planck interpolated between the Wien and Jeans radiation laws and used Occam's razor logic to formulate the quantum hypothesis, even resisting that hypothesis as it became more obvious that it was correct.[6]
Appeals to simplicity were used to argue against the phenomena of meteorites, ball lightning, continental drift, and reverse transcriptase.[56] One can argue for atomic building blocks for matter, because this provides a simpler explanation for the observed reversibility of both mixing[clarification needed] and chemical reactions as simple separation and rearrangement of atomic building blocks. At the time, however, the atomic theory was considered more complex because it implied the existence of invisible particles that had not been directly detected. Ernst Mach and the logical positivists rejected John Dalton's atomic theory until the reality of atoms was more evident in Brownian motion, as shown by Albert Einstein.[57]
In the same way, postulating the aether is more complex than transmission of light through a vacuum. At the time, however, all known waves propagated through a physical medium, and it seemed simpler to postulate the existence of a medium than to theorize about wave propagation without a medium. Likewise, Isaac Newton's idea of light particles seemed simpler than Christiaan Huygens's idea of waves, so many favored it. In this case, as it turned out, neither the wave explanation nor the particle explanation alone suffices, as light behaves like waves and like particles.
Three axioms presupposed by the scientific method are realism (the existence of objective reality), the existence of natural laws, and the constancy of natural law. Rather than depend on provability of these axioms, science depends on the fact that they have not been objectively falsified. Occam's razor and parsimony support, but do not prove, these axioms of science. The general principle of science is that theories (or models) of natural law must be consistent with repeatable experimental observations. This ultimate arbiter (selection criterion) rests upon the axioms mentioned above.[54]
If multiple models of natural law make exactly the same testable predictions, they are equivalent and there is no need for parsimony to choose a preferred one. For example, Newtonian, Hamiltonian and Lagrangian classical mechanics are equivalent. Physicists have no interest in using Occam's razor to say the other two are wrong. Likewise, there is no demand for simplicity principles to arbitrate between wave and matrix formulations of quantum mechanics. Science often does not demand arbitration or selection criteria between models that make the same testable predictions.[54]
Biologists or philosophers of biology use Occam's razor in either of two contexts, both in evolutionary biology: the units of selection controversy and systematics. George C. Williams, in his book Adaptation and Natural Selection (1966), argues that the best way to explain altruism among animals is based on low-level (i.e., individual) selection as opposed to high-level group selection. Altruism is defined by some evolutionary biologists (e.g., R. Alexander, 1987; W. D. Hamilton, 1964) as behavior that is beneficial to others (or to the group) at a cost to the individual, and many posit individual selection as the mechanism that explains altruism solely in terms of the behaviors of individual organisms acting in their own self-interest (or in the interest of their genes, via kin selection). Williams was arguing against the perspective of others who propose selection at the level of the group as an evolutionary mechanism that selects for altruistic traits (e.g., D. S. Wilson & E. O. Wilson, 2007). The basis for Williams's contention is that of the two, individual selection is the more parsimonious theory. In doing so he is invoking a variant of Occam's razor known as Morgan's Canon: "In no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development." (Morgan 1903).
However, more recent biological analyses, such as Richard Dawkins's The Selfish Gene, have contended that Morgan's Canon is not the simplest and most basic explanation. Dawkins argues that the way evolution works is that the genes propagated in most copies end up determining the development of that particular species, i.e., natural selection turns out to select specific genes, and this is really the fundamental underlying principle that automatically gives individual and group selection as emergent features of evolution.
Zoology provides an example. Muskoxen, when threatened by wolves, form a circle with the males on the outside and the females and young on the inside. This is an example of a behavior by the males that seems to be altruistic. The behavior is disadvantageous to them individually but beneficial to the group as a whole; thus, it was seen by some to support the group selection theory. Another interpretation is kin selection: if the males are protecting their offspring, they are protecting copies of their own alleles. Engaging in this behavior would be favored by individual selection if the cost to the male musk ox is less than half of the benefit received by his calf – which could easily be the case if wolves have an easier time killing calves than adult males. It could also be the case that male musk oxen would be individually less likely to be killed by wolves if they stood in a circle with their horns pointing out, regardless of whether they were protecting the females and offspring. That would be an example of regular natural selection – a phenomenon called "the selfish herd".
Systematics is the branch of biology that attempts to establish patterns of relationship among biological taxa, today generally thought to reflect evolutionary history. It is also concerned with their classification. There are three primary camps in systematics: cladists, pheneticists, and evolutionary taxonomists. Cladists hold that classification should be based on synapomorphies (shared, derived character states), pheneticists contend that overall similarity (synapomorphies and complementary symplesiomorphies) is the determining criterion, while evolutionary taxonomists say that both genealogy and similarity count in classification (in a manner determined by the evolutionary taxonomist).[58][59]
It is among the cladists that Occam's razor is applied, through the method of cladistic parsimony. Cladistic parsimony (or maximum parsimony) is a method of phylogenetic inference that yields phylogenetic trees (more specifically, cladograms). Cladograms are branching diagrams used to represent hypotheses of relative degree of relationship, based on synapomorphies. Cladistic parsimony is used to select as the preferred hypothesis of relationships the cladogram that requires the fewest implied character state transformations (or smallest weight, if characters are differentially weighted). Critics of the cladistic approach often observe that for some types of data, parsimony could produce the wrong results, regardless of how much data is collected (this is called statistical inconsistency, or long branch attraction). However, this criticism is also potentially true for any type of phylogenetic inference, unless the model used to estimate the tree reflects the way that evolution actually happened. Because this information is not empirically accessible, the criticism of statistical inconsistency against parsimony holds no force.[60] For a book-length treatment of cladistic parsimony, see Elliott Sober's Reconstructing the Past: Parsimony, Evolution, and Inference (1988). For a discussion of both uses of Occam's razor in biology, see Sober's article "Let's Razor Ockham's Razor" (1990).
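The core computation behind cladistic parsimony, counting the minimum number of character-state changes a given tree requires for one character, can be sketched with Fitch's small-parsimony algorithm. The four-taxon tree and the character states below are invented for illustration:

```python
def fitch(tree, states):
    """Minimum number of character-state changes on a rooted binary tree
    (Fitch's small-parsimony algorithm). A tree node is either a leaf
    name (str) or a (left, right) pair; `states` maps leaves to states."""
    changes = 0

    def state_sets(node):
        nonlocal changes
        if isinstance(node, str):
            return {states[node]}
        left, right = map(state_sets, node)
        if left & right:              # children agree on a possible state
            return left & right
        changes += 1                  # a change must occur on some branch
        return left | right

    state_sets(tree)
    return changes

# cladogram ((A,B),(C,D)); only taxon C carries the derived state "1",
# so a single transformation suffices
tree = (("A", "B"), ("C", "D"))
print(fitch(tree, {"A": "0", "B": "0", "C": "1", "D": "0"}))  # 1
```

Maximum parsimony then amounts to running this count over candidate trees (and over all characters) and preferring the tree with the smallest total, which is where the razor enters.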
Other methods for inferring evolutionary relationships use parsimony in a more general way. Likelihood methods for phylogeny use parsimony as they do for all likelihood tests, with hypotheses requiring fewer differing parameters (i.e., numbers of different rates of character change or different frequencies of character state transitions) being treated as null hypotheses relative to hypotheses requiring more differing parameters. Thus, complex hypotheses must predict data much better than do simple hypotheses before researchers reject the simple hypotheses. Recent advances employ information theory, a close cousin of likelihood, which uses Occam's razor in the same way. The choice of the "shortest tree" relative to a not-so-short tree under any optimality criterion (smallest distance, fewest steps, or maximum likelihood) is always based on parsimony.[61]
Francis Crick has commented on potential limitations of Occam's razor in biology. He advances the argument that because biological systems are the products of (an ongoing) natural selection, the mechanisms are not necessarily optimal in an obvious sense. He cautions: "While Ockham's razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research."[62] This is an ontological critique of parsimony.
In biogeography, parsimony is used to infer ancient vicariant events or migrations of species or populations by observing the geographic distribution and relationships of existing organisms. Given the phylogenetic tree, ancestral population subdivisions are inferred to be those that require the minimum amount of change.[citation needed]
In the philosophy of religion, Occam's razor is sometimes applied to the existence of God. William of Ockham himself was a Christian. He believed in God, and in the authority of Christian scripture; he writes that "nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture."[63] Ockham believed that an explanation has no sufficient basis in reality when it does not harmonize with reason, experience, or the Bible. Unlike many theologians of his time, though, Ockham did not believe God could be logically proven with arguments. To Ockham, science was a matter of discovery; theology was a matter of revelation and faith. He states: "Only faith gives us access to theological truths. The ways of God are not open to reason, for God has freely chosen to create a world and establish a way of salvation within it apart from any necessary laws that human logic or rationality can uncover."[64]
Thomas Aquinas, in the Summa Theologica, uses a formulation of Occam's razor to construct an objection to the idea that God exists, which he refutes directly with a counterargument:[65]
Further, it is superfluous to suppose that what can be accounted for by a few principles has been produced by many. But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist. For all natural things can be reduced to one principle which is nature; and all voluntary things can be reduced to one principle which is human reason, or will. Therefore there is no need to suppose God's existence.
In turn, Aquinas answers this with the quinque viae, and addresses the particular objection above with the following answer:
Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article.
Rather than argue for the necessity of a god, sometheistsbase their belief upon grounds independent of, or prior to, reason, making Occam's razor irrelevant. This was the stance ofSøren Kierkegaard, who viewed belief in God as aleap of faiththat sometimes directly opposed reason.[66]This is also the doctrine ofGordon Clark'spresuppositional apologetics, with the exception that Clark never thought the leap of faith was contrary to reason (see alsoFideism).
Variousarguments in favor of Godestablish God as a useful or even necessary assumption. Contrastingly some anti-theists hold firmly to the belief that assuming the existence of God introduces unnecessary complexity (e.g., theUltimate Boeing 747 gambitfrom Dawkins'sThe God Delusion[67]).[68]
Another application of the principle is to be found in the work ofGeorge Berkeley(1685–1753). Berkeley was an idealist who believed that all of reality could be explained in terms of the mind alone. He invoked Occam's razor againstmaterialism, stating that matter was not required by his metaphysics and was thus eliminable. One potential problem with this belief[for whom?]is that it's possible, given Berkeley's position, to findsolipsismitself more in line with the razor than a God-mediated world beyond a single thinker.
Occam's razor may also be recognized in the apocryphal story about an exchange betweenPierre-Simon LaplaceandNapoleon. It is said that in praising Laplace for one of his recent publications, the emperor asked how it was that the name of God, which featured so frequently in the writings ofLagrange, appeared nowhere in Laplace's. At that, he is said to have replied, "It's because I had no need of that hypothesis."[69]Though some points of this story illustrate Laplace'satheism, more careful consideration suggests that he may instead have intended merely to illustrate the power ofmethodological naturalism, or even simply that the fewerlogical premisesone assumes, thestrongeris one's conclusion.
In his article "Sensations and Brain Processes" (1959),J. J. C. Smartinvoked Occam's razor with the aim to justify his preference of themind-brain identity theoryoverspirit-body dualism. Dualists state that there are two kinds of substances in the universe: physical (including the body) and spiritual, which is non-physical. In contrast, identity theorists state that everything is physical, including consciousness, and that there is nothing nonphysical. Though it is impossible to appreciate the spiritual when limiting oneself to the physical,[citation needed]Smart maintained that identity theory explains all phenomena by assuming only a physical reality. Subsequently, Smart has been severely criticized for his use (or misuse) of Occam's razor and ultimately retracted his advocacy of it in this context.Paul Churchland(1984) states that by itself Occam's razor is inconclusive regarding duality. In a similar way, Dale Jacquette (1994) stated that Occam's razor has been used in attempts to justify eliminativism and reductionism in the philosophy of mind. Eliminativism is the thesis that the ontology offolk psychologyincluding such entities as "pain", "joy", "desire", "fear", etc., are eliminable in favor of an ontology of a completed neuroscience.
In penal theory and the philosophy of punishment, parsimony refers specifically to taking care in the distribution of punishment in order to avoid excessive punishment. In the utilitarian approach to the philosophy of punishment, Jeremy Bentham's "parsimony principle" states that any punishment greater than is required to achieve its end is unjust. The concept is related but not identical to the legal concept of proportionality. Parsimony is a key consideration of modern restorative justice, and is a component of utilitarian approaches to punishment, as well as the prison abolition movement. Bentham believed that true parsimony would require punishment to be individualised to take account of the sensibility of the individual: an individual more sensitive to punishment should be given a proportionately lesser one, since otherwise needless pain would be inflicted. Later utilitarian writers have tended to abandon this idea, in large part due to the impracticality of determining each alleged criminal's relative sensitivity to specific punishments.[70]
Marcus Hutter's universal artificial intelligence builds upon Solomonoff's mathematical formalization of the razor to calculate the expected value of an action.
There are various papers in scholarly journals deriving formal versions of Occam's razor from probability theory, applying it in statistical inference, and using it to come up with criteria for penalizing complexity in statistical inference. Papers[71][72] have suggested a connection between Occam's razor and Kolmogorov complexity.[73]
One of the problems with the original formulation of the razor is that it only applies to models with the same explanatory power (i.e., it only tells us to prefer the simplest of equally good models). A more general form of the razor can be derived from Bayesian model comparison, which is based on Bayes factors and can be used to compare models that do not fit the observations equally well. These methods can sometimes optimally balance the complexity and power of a model. Generally, the exact Occam factor is intractable, but approximations such as the Akaike information criterion, the Bayesian information criterion, variational Bayesian methods, the false discovery rate, and Laplace's method are used. Many artificial intelligence researchers are now employing such techniques, for instance through work on Occam learning or more generally on the free energy principle.
Statistical versions of Occam's razor have a more rigorous formulation than what philosophical discussions produce. In particular, they must have a specific definition of the term simplicity, and that definition can vary. For example, in the Kolmogorov–Chaitin minimum description length approach, the subject must pick a Turing machine whose operations describe the basic operations believed to represent "simplicity" by the subject. However, one could always choose a Turing machine with a simple operation that happened to construct one's entire theory and would hence score highly under the razor. This has led to two opposing camps: one that believes Occam's razor is objective, and one that believes it is subjective.
The minimum instruction set of a universal Turing machine requires approximately the same length description across different formulations, and is small compared to the Kolmogorov complexity of most practical theories. Marcus Hutter has used this consistency to define a "natural" Turing machine of small size as the proper basis for excluding arbitrarily complex instruction sets in the formulation of razors.[74] Describing the program for the universal program as the "hypothesis", and the representation of the evidence as program data, it has been formally proven under Zermelo–Fraenkel set theory that "the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."[75] Interpreting this as minimising the total length of a two-part message encoding model followed by data given model gives us the minimum message length (MML) principle.[71][72]
One possible conclusion from mixing the concepts of Kolmogorov complexity and Occam's razor is that an ideal data compressor would also be a scientific explanation/formulation generator. Some attempts have been made to re-derive known laws from considerations of simplicity or compressibility.[24][76]
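This link between compression and explanation can be made concrete in a toy experiment. The sketch below is a rough illustration, not a real MML procedure: it uses zlib's compressed size as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable). Data generated by a simple rule compresses far better than patternless data, mirroring the idea that a good compressor has "found" the law behind the data.

```python
import random
import zlib

def description_length(data: bytes) -> int:
    # Compressed size in bytes: a crude, computable proxy for
    # Kolmogorov complexity, which is itself uncomputable.
    return len(zlib.compress(data, 9))

# Data produced by a simple "law" (a repeating rule)...
regular = b"0123456789" * 100
# ...versus patternless data of the same length.
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))

print(description_length(regular))  # small: the rule is found and exploited
print(description_length(noisy))    # near 1000 bytes: nothing to exploit
```

The same comparison underlies two-part MML codes: a short model that explains the data yields a short total message.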
According to Jürgen Schmidhuber, the appropriate mathematical theory of Occam's razor already exists, namely, Solomonoff's theory of optimal inductive inference[77] and its extensions.[78] See discussions in David L. Dowe's "Foreword re C. S. Wallace"[79] for the subtle distinctions between the algorithmic probability work of Solomonoff and the MML work of Chris Wallace, and see Dowe's "MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness"[80] both for such discussions and for (in section 4) discussions of MML and Occam's razor. For a specific example of MML as Occam's razor in the problem of decision tree induction, see Dowe and Needham's "Message Length as an Effective Ockham's Razor in Decision Tree Induction".[81]
The no free lunch (NFL) theorems for inductive inference prove that Occam's razor must rely on ultimately arbitrary assumptions concerning the prior probability distribution found in our world.[82] Specifically, suppose one is given two inductive inference algorithms, A and B, where A is a Bayesian procedure based on the choice of some prior distribution motivated by Occam's razor (e.g., the prior might favor hypotheses with smaller Kolmogorov complexity). Suppose that B is the anti-Bayes procedure, which calculates what the Bayesian algorithm A based on Occam's razor will predict – and then predicts the exact opposite. Then there are just as many actual priors (including those different from the Occam's razor prior assumed by A) in which algorithm B outperforms A as priors in which the procedure A based on Occam's razor comes out on top. In particular, the NFL theorems show that the "Occam factors" Bayesian argument for Occam's razor must make ultimately arbitrary modeling assumptions.[83]
In software development, the rule of least power argues that the correct programming language to use is the one that is simplest while also solving the targeted software problem. In that form the rule is often credited to Tim Berners-Lee, since it appeared in his design guidelines for the original Hypertext Transfer Protocol.[84] Complexity in this context is measured either by placing a language into the Chomsky hierarchy or by listing its idiomatic features and comparing them according to some agreed scale of difficulty. Many languages once thought to be of lower complexity have evolved or later been discovered to be more complex than originally intended; so, in practice this rule is applied to the relative ease with which a programmer can obtain the power of the language, rather than to the precise theoretical limits of the language.
Scientists have discovered that deep neural networks (DNNs) prefer simpler mathematical functions while learning. This simplicity bias helps DNNs overcome overfitting, a scenario in which a model with too many parameters fits noise in the training data.[85]
Occam's razor is not an embargo against the positing of any kind of entity, or a recommendation of the simplest theory come what may.[a] Occam's razor is used to adjudicate between theories that have already passed "theoretical scrutiny" tests and are equally well-supported by evidence.[b] Furthermore, it may be used to prioritize empirical testing between two equally plausible but unequally testable hypotheses, thereby minimizing costs and waste while increasing the chance of falsifying the simpler-to-test hypothesis.
Another contentious aspect of the razor is that a theory can become more complex in terms of its structure (or syntax), while its ontology (or semantics) becomes simpler, or vice versa.[c] Quine, in a discussion on definition, referred to these two perspectives as "economy of practical expression" and "economy in grammar and vocabulary", respectively.[87]
Galileo Galilei lampooned the misuse of Occam's razor in his Dialogue. The principle is represented in the dialogue by Simplicio. The telling point that Galileo presented ironically was that if one really wanted to start from a small number of entities, one could always consider the letters of the alphabet as the fundamental entities, since one could construct the whole of human knowledge out of them.
Instances of using Occam's razor to justify belief in simpler theories have been criticized as inappropriate uses of the razor. For instance, Francis Crick stated that "While Occam's razor is a useful tool in the physical sciences, it can be a very dangerous implement in biology. It is thus very rash to use simplicity and elegance as a guide in biological research."[88]
Occam's razor has met some opposition from people who consider it too extreme or rash. Walter Chatton (c. 1290–1343) was a contemporary of William of Ockham who took exception to Occam's razor and Ockham's use of it. In response he devised his own anti-razor: "If three things are not enough to verify an affirmative proposition about things, a fourth must be added and so on." Although several philosophers have formulated similar anti-razors since Chatton's time, none has persisted as notably as Chatton's, though a possible exception is the late Renaissance Italian motto of unknown attribution Se non è vero, è ben trovato ("Even if it is not true, it is well conceived") when applied to a particularly artful explanation.
Anti-razors have also been created by Gottfried Wilhelm Leibniz (1646–1716), Immanuel Kant (1724–1804), and Karl Menger (1902–1985). Leibniz's version took the form of a principle of plenitude, as Arthur Lovejoy has called it: the idea being that God created the most varied and populous of possible worlds. Kant felt a need to moderate the effects of Occam's razor and thus created his own counter-razor: "The variety of beings should not rashly be diminished."[89]
Karl Menger found mathematicians to be too parsimonious with regard to variables, so he formulated his Law Against Miserliness, which took one of two forms: "Entities must not be reduced to the point of inadequacy" and "It is vain to do with fewer what requires more." A less serious but even more extreme anti-razor is 'Pataphysics, the "science of imaginary solutions" developed by Alfred Jarry (1873–1907). Perhaps the ultimate in anti-reductionism, "'Pataphysics seeks no less than to view each event in the universe as completely unique, subject to no laws but its own." Variations on this theme were subsequently explored by the Argentine writer Jorge Luis Borges in his story/mock-essay "Tlön, Uqbar, Orbis Tertius". Physicist R. V. Jones contrived Crabtree's Bludgeon, which states that "[n]o set of mutually inconsistent observations can exist for which some human intellect cannot conceive a coherent explanation, however complicated."[90]
Recently, American physicist Igor Mazin argued that because high-profile physics journals prefer publications offering exotic and unusual interpretations, the Occam's razor principle is being replaced by an "Inverse Occam's razor", implying that the simplest possible explanation is usually rejected.[91]
Since 2012, The Skeptic magazine annually awards the Ockham Awards, or simply the Ockhams, named after Occam's razor, at QED.[92] The Ockhams were introduced by editor-in-chief Deborah Hyde to "recognise the effort and time that have gone into the community's favourite skeptical blogs, skeptical podcasts, skeptical campaigns and outstanding contributors to the skeptical cause."[93] The trophies, designed by Neil Davies and Karl Derrick, carry the upper text "Ockham's" and the lower text "The Skeptic. Shaving away unnecessary assumptions since 1285." Between the texts, there is an image of a double-edged safety razor blade, and both lower corners feature an image of William of Ockham's face.[93]
https://en.wikipedia.org/wiki/Occam%27s_razor
Helmut Norpoth (born 1943) is an American political scientist and professor of political science at Stony Brook University. Norpoth is best known for developing the Primary Model to predict United States presidential elections. Norpoth's model has successfully matched the results of 25 out of 29 United States presidential elections since 1912, with the exceptions being those in 1960, 2000, 2020, and 2024.
Norpoth was born in Essen, Germany, in 1943. He received his undergraduate degree from the Free University of Berlin in West Berlin in 1966. He then attended the University of Michigan, where he received his M.A. and Ph.D. in 1967 and 1974, respectively. Before joining Stony Brook University as an assistant professor in 1979, he taught at the University of Arizona (he had been a visiting lecturer in its political science department in 1978), the University of Cologne, and the University of Texas at Austin. In 1980, Norpoth was promoted to associate professor at Stony Brook University and became a tenured full professor there in 1985.[1]
Norpoth's research focuses on multiple subjects in political science, including public opinion and electoral behavior, and predicting the results of elections in the United States, Great Britain, and Germany.[1] Alongside fellow political scientist Michael Lewis-Beck, he is the co-author of The American Voter Revisited, a 2008 book published by the University of Michigan Press covering the images of presidential candidates, party identification, and why Americans turn out to vote.[2][3] He also wrote Confidence Regained: Economics, Mrs. Thatcher, and the British Voter, a 1992 book published by the University of Michigan Press about public reactions to Margaret Thatcher, especially her economic and foreign policies.[4][5] Other articles written by Norpoth include "Fighting to Win: Wartime Morale in the American Public" with Andrew H. Sidman (2012), "Yes, Prime Minister: The Key to Forecasting British Elections" with Matthew Lebo (2011), "The New Deal Realignment in Real Time" with Andrew H. Sidman and Clara Suong, "History and Primary: The Obama Re-Election" with Michael Bednarczuk, and "Guns 'N Jobs: The FDR Legacy" with Alexa Bankert.[1]
Norpoth developed the Primary Model, a statistical model of United States presidential elections based on data going back to 1912. Instead of opinion polling, Norpoth relies on statistics from a candidate's performance in the primaries and patterns in the electoral cycle to forecast results through the Primary Model.[6][7] The Primary Model is based on two factors: whether the party that has been in power for a long time seems to be about to lose it, and whether a given candidate did better in the primaries than his or her opponent. The Primary Model was first used in the 1996 election,[8] and correctly predicted Barack Obama's re-election as early as February 2012 and the election of Donald Trump in 2016.[5]
Norpoth's election model has correctly predicted 25 of the past 29 elections, with 1960, 2000, 2020, and 2024 as misses.[9]
In February 2015, Norpoth projected that Republicans had a 65 percent chance of winning the presidential election the following year.[10] In February 2016, he predicted a Trump victory with 97 percent certainty,[11] and by October 2016, citing Trump's performance in the primaries, his election model projected a win for Trump with a certainty of 87 to 99 percent, in contrast to all major election forecasts.[7] As a result, Norpoth's election model gained significant media attention because it predicted that Trump would win the election.[12] Despite the attention for predicting Trump would win in 2016, Norpoth's election model only said that Trump would win the two-party popular vote 52.5% to 47.5%; Trump actually lost the popular vote to Clinton, 48.2% to 46.1%, and the Primary Model for subsequent elections was modified to predict only the Electoral College votes as a result. In response to critics who cited polls in which Hillary Clinton led Trump by a significant margin,[13] Norpoth said that these polls were not taking into account who would actually vote in November 2016, writing that "nearly all of us say, oh yes, I'll vote, and then many will not follow through."[7]
On March 2, 2020, Norpoth stated that his model gave Trump a 91 percent chance of winning re-election.[14][15] His model also predicted that Trump would win with up to 362 electoral votes. This would have required Trump to flip several states Clinton had won in 2016; the prediction proved inaccurate. Trump did not flip any states Clinton won in 2016 and ended up losing five states, plus one electoral vote in Nebraska, that he had won in 2016, ultimately losing the election with 232 electoral votes to Biden's 306. Norpoth cited a "perfect storm" of surprise events following his prediction that were not taken into account, notably the COVID-19 pandemic in the United States, which led to lockdowns beginning only a few weeks after his prediction, and an economic downturn, which was not improved due to a perceived inadequate response by Trump. The pandemic also led to an increase in mail-in and absentee ballots, which leaned toward the Democratic candidate. The George Floyd protests were also cited as a factor.[16]
The Primary Model for 2024 predicted a victory for Kamala Harris at 75 percent. Before the withdrawal of Joe Biden from the presidential election, the Primary Model had also given Biden a 75 percent chance to defeat Trump;[17] this was because Biden was the incumbent and had won the Democratic primaries in New Hampshire and South Carolina by larger margins than Trump had in the Republican primaries. Norpoth thus predicted an election win for Biden on the basis of primary results similar to Trump's in the 2020 Republican primaries, results which had led the Primary Model to incorrectly predict a Trump victory that year.[18] The model projected that Biden would secure 315 electoral votes to Trump's 223.[5] Harris ultimately lost to Trump, winning only 226 electoral votes to Trump's 312.
https://en.wikipedia.org/wiki/Helmut_Norpoth#"Primary_Model"_for_US_presidential_elections
In Vapnik–Chervonenkis theory, the Vapnik–Chervonenkis (VC) dimension is a measure of the size (capacity, complexity, expressive power, richness, or flexibility) of a class of sets. The notion can be extended to classes of binary functions. It is defined as the cardinality of the largest set of points that the algorithm can shatter, meaning there is at least one configuration of those data points for which, under every possible labeling, the algorithm can learn a perfect classifier. It was originally defined by Vladimir Vapnik and Alexey Chervonenkis.[1]
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider the thresholding of a high-degree polynomial: if the polynomial evaluates above zero, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well. But one can expect the classifier to make errors on other points, because it is too wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set well, because it has a low capacity. This notion of capacity is made rigorous below.
Let H be a set family (a set of sets) and C a set. Their intersection is defined as the following set family: H ∩ C := {h ∩ C | h ∈ H}.
We say that a set C is shattered by H if H ∩ C contains all the subsets of C, i.e. H ∩ C = P(C), where P(C) denotes the power set of C.
The VC dimension D of H is the cardinality of the largest set that is shattered by H. If arbitrarily large sets can be shattered, the VC dimension is ∞.
A binary classification model f with some parameter vector θ is said to shatter a set of generally positioned data points (x1, x2, …, xn) if, for every assignment of labels to those points, there exists a θ such that the model f makes no errors when evaluating that set of data points.
The VC dimension of a model f is the maximum number of points that can be arranged so that f shatters them. More formally, it is the maximum cardinal D such that there exists a generally positioned data point set of cardinality D that can be shattered by f.
The VC dimension can predict a probabilistic upper bound on the test error of a classification model. Vapnik[3] proved that, with probability 1 − η, the test error (i.e., risk with the 0–1 loss function) on data drawn i.i.d. from the same distribution as the training set satisfies

test error ≤ training error + √[ (D(log(2N/D) + 1) − log(η/4)) / N ],

where D is the VC dimension of the classification model, 0 < η ≤ 1, and N is the size of the training set. (This formula is valid only when D ≪ N; when D is larger, the test error may be much higher than the training error, due to overfitting.)
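As a numeric illustration, the sketch below evaluates the width of the confidence interval in the bound as commonly stated (the function name and the sample values are chosen here purely for illustration). The gap grows with the VC dimension D and shrinks as the training set size N grows.

```python
import math

def vc_gap(D: int, N: int, eta: float) -> float:
    # Width of the VC confidence interval: with probability 1 - eta,
    # test error <= training error + vc_gap(D, N, eta)  (valid for D << N).
    return math.sqrt((D * (math.log(2 * N / D) + 1) - math.log(eta / 4)) / N)

print(vc_gap(D=10, N=1_000, eta=0.05))    # loose guarantee with little data
print(vc_gap(D=10, N=100_000, eta=0.05))  # tightens as N grows
```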
The VC dimension also appears in sample-complexity bounds. A space of binary functions with VC dimension D can be learned with[4]: 73
N = Θ((D + log(1/δ))/ε)
samples, where ε is the learning error and δ is the failure probability. Thus, the sample complexity is a linear function of the VC dimension of the hypothesis space.
The VC dimension is one of the critical parameters in the size of ε-nets, which determines the complexity of approximation algorithms based on them; range sets without finite VC dimension may not have finite ε-nets at all.
A finite projective plane of order n is a collection of n² + n + 1 sets (called "lines") over n² + n + 1 elements (called "points"), for which: each line contains exactly n + 1 points; each point is contained in exactly n + 1 lines; any two distinct lines intersect in exactly one point; and any two distinct points are contained in exactly one line.
The VC dimension of a finite projective plane is 2.[5]
Proof: (a) For each pair of distinct points, there is one line that contains both of them, lines that contain only one of them, and lines that contain none of them, so every set of size 2 is shattered. (b) For any triple of distinct points, if there is a line x that contains all three, then there is no line y that contains exactly two (since then x and y would intersect in two points, which is contrary to the definition of a projective plane); if no line contains all three, then the subset consisting of the whole triple cannot be obtained. Hence, no set of size 3 is shattered.
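The definition and this proof can be checked by brute force on a small instance. The sketch below (illustrative code, not from the source) enumerates subsets to compute the VC dimension of the Fano plane, the projective plane of order 2 with 7 points and 7 lines:

```python
from itertools import combinations

def is_shattered(family, C):
    # C is shattered if every subset of C appears as (line ∩ C) for some line.
    traces = {frozenset(line) & frozenset(C) for line in family}
    return all(frozenset(s) in traces
               for r in range(len(C) + 1)
               for s in combinations(C, r))

def vc_dimension(family, points):
    # Largest |C| such that some subset C of the points is shattered.
    return max(r for r in range(len(points) + 1)
               if any(is_shattered(family, C) for C in combinations(points, r)))

# Fano plane: 7 points, 7 lines, order n = 2.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
        {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
print(vc_dimension(fano, range(1, 8)))  # 2
```

Every pair of points is shattered (some line contains both, some contain exactly one, and some avoid both), but no triple is, matching the proof.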
Suppose we have a base class B of simple classifiers, whose VC dimension is D.
We can construct a more powerful classifier by combining several different classifiers from B; this technique is called boosting. Formally, given T classifiers h1, …, hT ∈ B and a weight vector w ∈ R^T, we can define the following classifier: f(x) = sign(Σ_{t=1}^{T} w_t · h_t(x)).
The VC dimension of the set of all such classifiers (for all selections of T classifiers from B and a weight vector from R^T), assuming T, D ≥ 3, is at most T(D + 1) · (3 log(T(D + 1)) + 2).[4]: 108–109
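A minimal sketch of such a weighted-vote combination, assuming one-dimensional threshold "stumps" as the base class B (the stumps and weights here are invented for illustration; any base class works the same way):

```python
import numpy as np

def stump(threshold, polarity=1):
    # A base classifier from B: predicts `polarity` on [threshold, inf),
    # and -polarity below it.
    return lambda x: polarity * np.where(x >= threshold, 1, -1)

def weighted_vote(classifiers, w):
    # f(x) = sign(sum_t w_t * h_t(x)): the boosted combination.
    def f(x):
        votes = sum(wt * h(x) for wt, h in zip(w, classifiers))
        return np.where(votes >= 0, 1, -1)
    return f

# Three stumps vote to label the band [2, 5) positive -- a concept
# that no single stump in B can represent on its own.
hs = [stump(2.0, +1), stump(5.0, -1), stump(-np.inf, -1)]
f = weighted_vote(hs, w=[1.0, 1.0, 1.0])
print(f(np.array([0.0, 3.0, 6.0])))  # [-1  1 -1]
```

The combined class is strictly richer than B, which is exactly why its VC dimension can exceed D.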
A neural network is described by a directed acyclic graph G(V, E), where:
The VC dimension of a neural network is bounded as follows:[4]: 234–235
The VC dimension is defined for spaces of binary functions (functions to {0,1}). Several generalizations have been suggested for spaces of non-binary functions.
|
https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension
|
In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size,[1][2] and training cost.
In general, a deep learning model can be characterized by four parameters: model size, training dataset size, training cost, and the post-training error rate (e.g., the test set error rate). Each of these variables can be defined as a real number, usually written as N, D, C, L (respectively: parameter count, dataset size, computing cost, and loss).
A neural scaling law is a theoretical or empirical statistical law between these parameters. There are also other parameters with other scaling laws.
In most cases, the model's size is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-experts models.[3] With sparse models, during inference, only a fraction of their parameters are used. In comparison, most other kinds of neural networks, such as transformer models, always use all their parameters during inference.
The size of the training dataset is usually quantified by the number of data points within it. Larger training datasets are typically preferred, as they provide a richer and more diverse source of information from which the model can learn. This can lead to improved generalization performance when the model is applied to new, unseen data.[4]However, increasing the size of the training dataset also increases the computational resources and time required for model training.
With the "pretrain, then finetune" method used for mostlarge language models, there are two kinds of training dataset: thepretrainingdataset and thefinetuningdataset. Their sizes have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset.[5]
In some cases, a small amount of high quality data suffices for finetuning, and more data does not necessarily improve performance.[5]
Training cost is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required). Notably, the cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware such as GPUs or TPUs.
The cost of training a neural network model is a function of several factors, including model size, training dataset size, the complexity of the training algorithm, and the computational resources available.[4] In particular, doubling the training dataset size does not necessarily double the cost of training, because one may train the model several times over the same dataset (each pass being an "epoch").
The performance of a neural network model is evaluated based on its ability to accurately predict the output given some input data. Common metrics for evaluating model performance include:[4]
Performance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set.
When the performance is a number bounded within the range of [0, 1], such as accuracy or precision, it often scales as a sigmoid function of cost, as seen in the figures.
The 2017 paper[2] is a common reference point for neural scaling laws fitted by statistical analysis on experimental data. Previous works before the 2000s, as cited in the paper, were either theoretical or orders of magnitude smaller in scale. Whereas previous works generally found the scaling exponent to scale like L ∝ D^(−α) with α ∈ {0.5, 1, 2}, the paper found that α ∈ [0.07, 0.35].
Of the factors they varied, only the task can change the exponent α. Changing the architecture, optimizers, regularizers, and loss functions would only change the proportionality factor, not the exponent. For example, for the same task, one architecture might have L = 1000 D^(−0.3) while another might have L = 500 D^(−0.3). They also found that for a given architecture, the number of parameters necessary to reach the lowest levels of loss, given a fixed dataset size, grows like N ∝ D^β for another exponent β.
They studied machine translation with LSTM (α ∼ 0.13), generative language modelling with LSTM (α ∈ [0.06, 0.09], β ≈ 0.7), ImageNet classification with ResNet (α ∈ [0.3, 0.5], β ≈ 0.6), and speech recognition with two hybrid architectures, LSTMs complemented by either CNNs or an attention decoder (α ≈ 0.3).
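Exponents like these are typically obtained by linear regression in log-log space, where a power law L = c·D^(−α) becomes a straight line with slope −α. A self-contained sketch on synthetic data (the constants here are invented for illustration):

```python
import numpy as np

# Synthetic losses following L = c * D^(-alpha) with small multiplicative
# noise, mimicking loss measurements at several dataset sizes.
rng = np.random.default_rng(0)
true_alpha, c = 0.30, 1000.0
D = np.logspace(6, 10, 20)                       # dataset sizes 1e6 .. 1e10
L = c * D ** (-true_alpha) * np.exp(rng.normal(0.0, 0.01, D.size))

# Fit a line in log-log coordinates; the slope estimates -alpha.
slope, intercept = np.polyfit(np.log(D), np.log(L), 1)
print(f"fitted alpha = {-slope:.3f}")            # close to 0.300
```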
A 2020 analysis[10] studied statistical relations between C, N, D, L over a wide range of values and found similar scaling laws over the range of N ∈ [10^3, 10^9], C ∈ [10^12, 10^21], and over multiple modalities (text, video, image, text to image, etc.).[10]
In particular, the scaling laws it found are (Table 1 of[10]):
The scaling law L = L0 + (C0/C)^0.048 was confirmed during the training of GPT-3 (Figure 3.1[11]).
One particular scaling law ("Chinchilla scaling") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have C = C0·N·D and L = A/N^α + B/D^β + L0,[13] where the variables are
and the statistical parameters are C0 = 6, α = 0.34, β = 0.28, A = 406.4, B = 410.7, L0 = 1.69.
However, Besiroglu et al.[15] claim that the statistical estimation is slightly off, and that the parameters should be α = 0.35, β = 0.37, A = 482.01, B = 2085.43, L0 = 1.82.
The statistical laws were fitted over experimental data with N ∈ [7×10^7, 1.6×10^10], D ∈ [5×10^9, 5×10^11], C ∈ [10^18, 10^24].
Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed C, we can uniquely solve for the 4 variables that minimize L. This provides the optimal D_opt(C) and N_opt(C) for any fixed C:

N_opt(C) = G(C/6)^a and D_opt(C) = G^(−1)(C/6)^b, where G = (αA/(βB))^(1/(α+β)), a = β/(α+β), and b = α/(α+β).

Plugging in the numerical values, we obtain the "Chinchilla efficient" model size and training dataset size, as well as the test loss achievable:

N_opt(C) = 0.6 C^0.45, D_opt(C) = 0.3 C^0.55, L_opt(C) = 1070 C^(−0.154) + 1.7.

Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on.
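The closed-form allocation above is easy to evaluate directly. The sketch below plugs in the Hoffmann et al. fitted constants (α = 0.34, β = 0.28, A = 406.4, B = 410.7; treat them as illustrative) and recovers the ≈ C^0.45 / C^0.55 split; note that 6·N_opt·D_opt = C by construction, since a + b = 1.

```python
# Chinchilla compute-optimal allocation: minimize L = A/N^alpha + B/D^beta
# subject to C = 6*N*D. Constants from the Hoffmann et al. parametric fit.
alpha, beta = 0.34, 0.28
A, B = 406.4, 410.7

a = beta / (alpha + beta)                     # ~0.45, exponent for N_opt
b = alpha / (alpha + beta)                    # ~0.55, exponent for D_opt
G = (alpha * A / (beta * B)) ** (1 / (alpha + beta))

def optimal_allocation(C):
    # Returns (N_opt, D_opt) for a training compute budget C (in FLOPs).
    return G * (C / 6) ** a, (1 / G) * (C / 6) ** b

N, D = optimal_allocation(5.76e23)            # roughly Chinchilla's budget
print(f"N ~ {N:.2e} parameters, D ~ {D:.2e} tokens")
```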
There are other estimates for the "Chinchilla efficient" model size and training dataset size. The above is based on the statistical model L = A/N^α + B/D^β + L0. One can also directly fit a statistical law for D_opt(C) and N_opt(C) without going through the detour, for which one obtains N_opt(C) = 0.1 C^0.5 and D_opt(C) = 1.7 C^0.5, or as tabulated:
The Chinchilla scaling law analysis for training transformer language models suggests that for a given training compute budget (C), to achieve the minimal pretraining loss for that budget, the number of model parameters (N) and the number of training tokens (D) should be scaled in equal proportions: N_opt(C) ∝ C^0.5, D_opt(C) ∝ C^0.5.
This conclusion differs from the analysis conducted by Kaplan et al.,[14] which found that N should be increased more quickly than D: N_opt(C) ∝ C^0.73, D_opt(C) ∝ C^0.27.
This discrepancy can primarily be attributed to the two studies using different methods for measuring model size: Kaplan et al. counted only non-embedding parameters, which at the smaller model sizes they studied biases the fitted exponents, whereas the Chinchilla analysis counted all parameters.[16]
Secondary effects also arise from differences in hyperparameter tuning and learning rate schedules: Kaplan et al. used a learning rate schedule that was not matched to the length of each training run, whereas the Chinchilla analysis set the schedule to approximately match the number of training tokens.[17]
As Chinchilla scaling has been the reference point for many large-scale training runs, there has been a concurrent effort to go "beyond Chinchilla scaling", meaning to modify the training pipeline so as to obtain the same loss with less effort, or to deliberately train for longer than is "Chinchilla optimal".
Usually, the goal is to make the scaling law exponent larger, so that the same loss can be reached with much less compute. For instance, filtering data can make the scaling law exponent larger.[18]
Another strand of research studies how to deal with limited data, since according to Chinchilla scaling laws, the training dataset size for the largest language models already approaches what is available on the internet. One study found that augmenting the dataset with a mix of "denoising objectives" constructed from the dataset itself improves performance.[19] Another studies optimal scaling when all available data is already exhausted (as for rare languages), so one must train for multiple epochs over the same dataset (whereas Chinchilla scaling requires only one epoch).[20] The Phi series of small language models were trained on textbook-like data generated by large language models, for which data is limited only by the amount of compute available.[21]
Chinchilla optimality was defined as "optimal for training compute", but production-quality models serve a large amount of inference after training is complete. "Overtraining" during training means better performance at a given model size during inference.[22]LLaMAmodels were overtrained for this reason. Subsequent studies discovered scaling laws in the overtraining regime, for dataset sizes up to 32x more than Chinchilla-optimal.[23]
A 2022 analysis[24]found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form:
y=a+(bx−c0)∏i=1n(1+(xdi)1/fi)−ci∗fi{\displaystyle y=a+{\bigg (}bx^{-c_{0}}{\bigg )}\prod _{i=1}^{n}\left(1+\left({\frac {x}{d_{i}}}\right)^{1/f_{i}}\right)^{-c_{i}*f_{i}}}
in whichx{\displaystyle x}refers to the quantity being scaled (i.e.C{\displaystyle C},N{\displaystyle N},D{\displaystyle D}, number of training steps, number of inference steps, or model input size) andy{\displaystyle y}refers to the downstream (or upstream) performance evaluation metric of interest (e.g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parametersa,b,c0,c1...cn,d1...dn,f1...fn{\displaystyle a,b,c_{0},c_{1}...c_{n},d_{1}...d_{n},f_{1}...f_{n}}are found by statistical fitting.
On a log–log plot, whenfi{\displaystyle f_{i}}is not too large anda{\displaystyle a}is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; then{\displaystyle n}transitions between the segments are called "breaks", hence the name broken neural scaling laws (BNSL).
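The functional form is straightforward to evaluate. The sketch below implements the formula above with illustrative parameter values (not fitted values from the paper); with no breaks (n = 0) it reduces to an ordinary power law a + b·x^(−c0).

```python
def bnsl(x, a, b, c, d, f):
    """Broken neural scaling law, following the formula in the text.
    c = [c0, c1, ..., cn] holds the segment exponents, d = [d1, ..., dn]
    the break locations, f = [f1, ..., fn] the break sharpnesses
    (all illustrative, not fitted values)."""
    y = b * x ** (-c[0])
    for c_i, d_i, f_i in zip(c[1:], d, f):
        y *= (1 + (x / d_i) ** (1 / f_i)) ** (-c_i * f_i)
    return a + y

# With no breaks (n = 0) this is a plain power law:
assert bnsl(4.0, a=0.0, b=2.0, c=[0.5], d=[], f=[]) == 2.0 * 4.0 ** -0.5
```

Well past a break location d_i, the factor for that break behaves like (x/d_i)^(−c_i), so the effective log–log slope steepens from −c0 to −(c0 + c1 + …), which is the "series of linear segments" picture described above.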
The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation/calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent).
The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include residual neural networks, transformers, MLPs, MLP-mixers, recurrent neural networks, convolutional neural networks, graph neural networks, U-nets, encoder-decoder (and encoder-only, and decoder-only) models, ensembles (and non-ensembles), MoE (mixture of experts) (and non-MoE) models, and sparse pruned (and non-sparse unpruned) models.
Other than scaling up training compute, one can also scale up inference compute (or "test-time compute"[25]). As an example, the Elo rating of AlphaGo improves steadily as it is allowed to spend more time on its Monte Carlo Tree Search per play.[26]: Fig 4 For AlphaGo Zero, increasing Elo by 120 requires either 2x model size and training, or 2x test-time search.[27]Similarly, a language model for solving competition-level coding challenges, AlphaCode, consistently improved (log-linearly) in performance with more search time.[28]
For Hex, 10x training-time compute trades for 15x test-time compute.[8]For Libratus in heads-up no-limit Texas hold 'em, Cicero in Diplomacy, and many other abstract games of partial information, inference-time search improves performance at a similar tradeoff ratio, for up to a 100,000x effective increase in training-time compute.[27]
In 2024, theOpenAI o1report documented that o1's performance consistently improved with both increased train-time compute and test-time compute, and gave numerous examples of test-time compute scaling in mathematics, scientific reasoning, and coding tasks.[29][30]
One method for scaling up test-time compute isprocess-based supervision, where a model generates a step-by-step reasoning chain to answer a question, and another model (either human or AI) provides a reward score for some of the intermediate steps, not just the final answer. Process-based supervision can be scaled without another reward model by using synthetic reward scores, for example by running Monte Carlo rollouts and scoring each step in the reasoning chain according to how likely it is to lead to the right answer. Another method usesrevision models, which are models trained to solve a problem multiple times, each time revising the previous attempt.[31]
Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers, with parameter countsN∈[5×106,2×109]{\displaystyle N\in [5\times 10^{6},2\times 10^{9}]}, on image sets of sizesD∈[3×107,3×109]{\displaystyle D\in [3\times 10^{7},3\times 10^{9}]}, for compute budgetsC∈[0.2,104]{\displaystyle C\in [0.2,10^{4}]}(in units of TPUv3-core-days).[32]
After training, the model is finetuned on the ImageNet training set. LetL{\displaystyle L}be the error probability of the finetuned model on the ImageNet test set. They foundminN,DL=0.09+0.26(C+0.01)0.35{\displaystyle \min _{N,D}L=0.09+{\frac {0.26}{(C+0.01)^{0.35}}}}.
Ghorbani, Behrooz et al.[33]studied scaling laws forneural machine translation(specifically, English as source and German as target) in encoder-decoderTransformermodels, trained until convergence on the same datasets (thus they did not fit scaling laws for compute costC{\displaystyle C}or dataset sizeD{\displaystyle D}). They variedN∈[108,3.5×109]{\displaystyle N\in [10^{8},3.5\times 10^{9}]}and found three results:
The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit.
One study[35]trained Transformers for machine translation with sizesN∈[4×105,5.6×107]{\displaystyle N\in [4\times 10^{5},5.6\times 10^{7}]}on dataset sizesD∈[6×105,6×109]{\displaystyle D\in [6\times 10^{5},6\times 10^{9}]}. They found that the Kaplan et al. (2020)[14]scaling law applied to machine translation:L(N,D)=[(NCN)αNαD+DCD]αD{\displaystyle L(N,D)=\left[\left({\frac {N_{C}}{N}}\right)^{\frac {\alpha _{N}}{\alpha _{D}}}+{\frac {D_{C}}{D}}\right]^{\alpha _{D}}}. They also found the BLEU score scaling asBLEU≈Ce−kL{\displaystyle BLEU\approx Ce^{-kL}}.
Hernandez, Danny et al.[36]studied scaling laws fortransfer learningin language models. They trained a family of Transformers in three ways: pretraining on English then finetuning on Python; pretraining on a mix of English and non-Python code then finetuning on Python; and training on Python from scratch.
The idea is that pretraining on English should help the model achieve low loss on a test set of Python text. Suppose the model has parameter countN{\displaystyle N}, and after being finetuned onDF{\displaystyle D_{F}}Python tokens, it achieves some lossL{\displaystyle L}. We say that its "transferred token count" isDT{\displaystyle D_{T}}, if another model with the sameN{\displaystyle N}achieves the sameL{\displaystyle L}after training onDF+DT{\displaystyle D_{F}+D_{T}}Python tokens.
They foundDT=1.9e4(DF).18(N).38{\displaystyle D_{T}=1.9e4\left(D_{F}\right)^{.18}(N)^{.38}}for pretraining on English text, andDT=2.1e5(DF).096(N).38{\displaystyle D_{T}=2.1e5\left(D_{F}\right)^{.096}(N)^{.38}}for pretraining on English and non-Python code.
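Plugging numbers into these fits gives the "effective data" bonus from pretraining. The sketch below uses the fitted constants quoted above; the example values for D_F (finetuning Python tokens) and N (parameter count) are arbitrary illustrations.

```python
def transferred_tokens(D_F, N, k, alpha, beta):
    """Fitted form D_T = k * D_F^alpha * N^beta from the text."""
    return k * D_F ** alpha * N ** beta

def from_english(D_F, N):
    """Transferred token count when pretraining on English text."""
    return transferred_tokens(D_F, N, 1.9e4, 0.18, 0.38)

def from_english_and_code(D_F, N):
    """Transferred token count when pretraining on English and non-Python code."""
    return transferred_tokens(D_F, N, 2.1e5, 0.096, 0.38)

# Illustrative example: a 40M-parameter model finetuned on 1M Python tokens.
d_t = from_english(1e6, 4e7)
effective_data_multiplier = (1e6 + d_t) / 1e6
```

The multiplier (D_F + D_T)/D_F captures how much Python data the pretraining is "worth"; because the D_F exponents are well below 1, the relative benefit of transfer is largest when finetuning data is scarce.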
Kumar et al.[37]study scaling laws for numerical precision in the training of language models. They train a family of language models with weights, activations, and KV cache in varying numerical precision in both integer and floating-point type to measure the effects on loss as a function of precision. For training, their scaling law accounts for lower precision by wrapping the effects of precision into an overall "effective parameter count" that governs loss scaling, using the parameterizationN↦Neff(P)=N(1−e−P/γ){\displaystyle N\mapsto N_{\text{eff}}(P)=N(1-e^{-P/\gamma })}. This illustrates how training in lower precision degrades performance by reducing the true capacity of the model in a manner that varies exponentially with bits.
For inference, they find that extreme overtraining of language models past Chinchilla-optimality can lead to models being more sensitive to quantization, a standard technique for efficient deep learning. This is demonstrated by observing that the degradation in loss due to weight quantization increases as an approximate power law in the token/parameter ratioD/N{\displaystyle D/N}seen during pretraining, so that models pretrained on extreme token budgets can perform worse in terms of validation loss than those trained on more modest token budgets if post-training quantization is applied. Other work examining the effects of overtraining include Sardana et al.[38]and Gadre et al.[39]
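The effective-parameter-count parameterization can be made concrete as follows; the value of γ here is a placeholder for illustration, not the paper's fitted constant.

```python
import math

def n_eff(N, P, gamma=2.0):
    """Effective parameter count of an N-parameter model trained in P-bit
    precision, using the parameterization N_eff = N * (1 - exp(-P / gamma))
    from the text. gamma is a fitted sensitivity constant (placeholder value)."""
    return N * (1 - math.exp(-P / gamma))

# Capacity approaches the full N at high precision and degrades
# smoothly (exponentially in bits) as precision is reduced:
capacities = [n_eff(1e9, p) for p in (16, 8, 4, 2)]
```

In the governing scaling law, N is simply replaced by N_eff(P), so the precision penalty acts exactly like shrinking the model.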
Xiao et al.[7]considered the parameter efficiency ("density") of models over time. The idea is that over time, researchers would discover models that use their parameters more efficiently, in that models with the same performance can have fewer parameters.
A model can have an actual parameter countN{\displaystyle N}, defined as the actual number of parameters in the model, and an "effective" parameter countN^{\displaystyle {\hat {N}}}, defined as how many parameters a previous well-known model would have needed to reach the same performance on some benchmark, such as MMLU.N^{\displaystyle {\hat {N}}}is not measured directly; rather, the actual model performanceS{\displaystyle S}is measured and plugged back into a previously fitted scaling law, such as the Chinchilla scaling law, to obtain theN^{\displaystyle {\hat {N}}}that law predicts would be required to reach performanceS{\displaystyle S}.
A densing law states thatln(N^N)max=At+B{\displaystyle \ln \left({\frac {\hat {N}}{N}}\right)_{max}=At+B}, wheret{\displaystyle t}is real-world time, measured in days.
https://en.wikipedia.org/wiki/Broken_Neural_Scaling_Law
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.
Coordinate descent is based on the idea that the minimization of a multivariable functionF(x){\displaystyle F(\mathbf {x} )}can be achieved by minimizing it along one direction at a time, i.e., solving univariate (or at least much simpler) optimization problems in a loop.[1]In the simplest case ofcyclic coordinate descent, one cyclically iterates through the directions, one at a time, minimizing the objective function with respect to each coordinate direction at a time. That is, starting with initial variable valuesx0=(x10,…,xn0){\displaystyle \mathbf {x} ^{0}=(x_{1}^{0},\ldots ,x_{n}^{0})},
roundk+1{\displaystyle k+1}definesxk+1{\displaystyle \mathbf {x} ^{k+1}}fromxk{\displaystyle \mathbf {x} ^{k}}by iteratively solving the single variable optimization problems
xik+1=arg⁡miny∈RF(x1k+1,…,xi−1k+1,y,xi+1k,…,xnk){\displaystyle x_{i}^{k+1}=\operatorname {arg\,min} _{y\in \mathbb {R} }\;F(x_{1}^{k+1},\ldots ,x_{i-1}^{k+1},y,x_{i+1}^{k},\ldots ,x_{n}^{k})}
for each variablexi{\displaystyle x_{i}}ofx{\displaystyle \mathbf {x} }, fori{\displaystyle i}from 1 ton{\displaystyle n}.
Thus, one begins with an initial guessx0{\displaystyle \mathbf {x} ^{0}}for alocal minimumofF{\displaystyle F}, and gets a sequencex0,x1,x2,…{\displaystyle \mathbf {x} ^{0},\mathbf {x} ^{1},\mathbf {x} ^{2},\dots }iteratively.
By doingline searchin each iteration, one automatically hasF(x0)≥F(x1)≥F(x2)≥⋯{\displaystyle F(\mathbf {x} ^{0})\geq F(\mathbf {x} ^{1})\geq F(\mathbf {x} ^{2})\geq \cdots }.
It can be shown that this sequence has similar convergence properties as steepest descent. No improvement after one cycle ofline searchalong coordinate directions implies a stationary point is reached.
This process is illustrated below.
In the case of acontinuously differentiablefunctionF, a coordinate descent algorithm can be sketched as:[1]
The step size can be chosen in various ways, e.g., by solving for the exact minimizer off(xi) =F(x)(i.e.,Fwith all variables butxifixed), or by traditional line search criteria.[1]
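As a minimal runnable sketch (not the reference pseudocode from the cited source): for the quadratic objective F(x) = ½xᵀAx − bᵀx with A symmetric positive definite, the exact minimizer over coordinate i has a closed form, and cyclic coordinate descent reduces to the Gauss–Seidel iteration.

```python
import numpy as np

def cyclic_coordinate_descent(A, b, x0, n_rounds=100):
    """Minimize F(x) = 0.5 * x^T A x - b^T x for symmetric positive
    definite A by exactly minimizing over one coordinate at a time
    (this is the Gauss-Seidel iteration)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_rounds):
        for i in range(len(x)):
            # Exact minimizer over x_i with all other coordinates fixed:
            # x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = cyclic_coordinate_descent(A, b, x0=[0.0, 0.0])
```

For this problem the minimizer is the solution of Ax = b, so the iterate can be checked against a direct solve; for symmetric positive definite A, exact cyclic minimization is guaranteed to converge.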
Coordinate descent has two problems. One of them is the case of a non-smoothobjective function. The following picture shows that coordinate descent iteration may get stuck at a non-stationary point if the level curves of the function are not smooth. Suppose that the algorithm is at the point(−2, −2); then there are two axis-aligned directions it can consider for taking a step, indicated by the red arrows. However, every step along these two directions will increase the objective function's value (assuming a minimization problem), so the algorithm will not take any step, even though both steps together would bring the algorithm closer to the optimum. While this example shows that coordinate descent does not necessarily converge to the optimum, it is possible to show formal convergence under reasonable conditions.[3]
The other problem is difficulty in parallelism. Since the nature of coordinate descent is to cycle through the directions and minimize the objective function with respect to each coordinate direction, coordinate descent is not an obvious candidate for massive parallelism. Recent research works have shown that massive parallelism is applicable to coordinate descent by relaxing the change of the objective function with respect to each coordinate direction.[4][5][6]
Coordinate descent algorithms are popular with practitioners owing to their simplicity, but the same property has led optimization researchers to largely ignore them in favor of more interesting (complicated) methods.[1]An early application of coordinate descent optimization was in the area of computed tomography,[7]where it has been found to have rapid convergence[8]and was subsequently used for clinical multi-slice helical scan CT reconstruction.[9]A cyclic coordinate descent algorithm (CCD) has been applied in protein structure prediction.[10]Moreover, there has been increased interest in the use of coordinate descent with the advent of large-scale problems inmachine learning, where coordinate descent has been shown to be competitive with other methods when applied to such problems as training linearsupport vector machines[11](seeLIBLINEAR) andnon-negative matrix factorization.[12]They are attractive for problems where computing gradients is infeasible, perhaps because the data required to do so are distributed across computer networks.[13]
https://en.wikipedia.org/wiki/Coordinate_descent
In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring out-of-core algorithms. It is also used in situations where the algorithm must dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., prediction of prices in international financial markets. Online learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches.
In the setting ofsupervised learning, a functionf:X→Y{\displaystyle f:X\to Y}is to be learned, whereX{\displaystyle X}is thought of as a space of inputs andY{\displaystyle Y}as a space of outputs, that predicts well on instances that are drawn from ajoint probability distributionp(x,y){\displaystyle p(x,y)}onX×Y{\displaystyle X\times Y}. In reality, the learner never knows the true distributionp(x,y){\displaystyle p(x,y)}over instances. Instead, the learner usually has access to atraining setof examples(x1,y1),…,(xn,yn){\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})}. In this setting, theloss functionis given asV:Y×Y→R{\displaystyle V:Y\times Y\to \mathbb {R} }, such thatV(f(x),y){\displaystyle V(f(x),y)}measures the difference between the predicted valuef(x){\displaystyle f(x)}and the true valuey{\displaystyle y}. The ideal goal is to select a functionf∈H{\displaystyle f\in {\mathcal {H}}}, whereH{\displaystyle {\mathcal {H}}}is a space of functions called a hypothesis space, so that some notion of total loss is minimized. Depending on the type of model (statistical or adversarial), one can devise different notions of loss, which lead to different learning algorithms.
In statistical learning models, the training samples(xi,yi){\displaystyle (x_{i},y_{i})}are assumed to have been drawn from the true distributionp(x,y){\displaystyle p(x,y)}and the objective is to minimize the expected "risk"I[f]=E[V(f(x),y)]=∫V(f(x),y)dp(x,y).{\displaystyle I[f]=\mathbb {E} [V(f(x),y)]=\int V(f(x),y)\,dp(x,y)\ .}A common paradigm in this situation is to estimate a functionf^{\displaystyle {\hat {f}}}throughempirical risk minimizationor regularized empirical risk minimization (usuallyTikhonov regularization). The choice of loss function here gives rise to several well-known learning algorithms such as regularizedleast squaresandsupport vector machines.
A purely online model in this category would learn based on just the new input(xt+1,yt+1){\displaystyle (x_{t+1},y_{t+1})}, the current best predictorft{\displaystyle f_{t}}and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinearkernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used whereft+1{\displaystyle f_{t+1}}is permitted to depend onft{\displaystyle f_{t}}and all previous data points(x1,y1),…,(xt,yt){\displaystyle (x_{1},y_{1}),\ldots ,(x_{t},y_{t})}. In this case, the space requirements are no longer guaranteed to be constant since it requires storing all previous data points, but the solution may take less time to compute with the addition of a new data point, as compared to batch learning techniques.
A common strategy to overcome the above issues is to learn using mini-batches, which process a small batch ofb≥1{\displaystyle b\geq 1}data points at a time; this can be considered as pseudo-online learning forb{\displaystyle b}much smaller than the total number of training points. Mini-batch techniques are used with repeated passes over the training data to obtain optimizedout-of-coreversions of machine learning algorithms, for example,stochastic gradient descent. When combined withbackpropagation, this is currently the de facto method for trainingartificial neural networks.
The simple example of linear least squares is used to explain a variety of ideas in online learning. The ideas are general enough to be applied to other settings, for example, with other convex loss functions.
Consider the setting of supervised learning withf{\displaystyle f}being a linear function to be learned:f(xj)=⟨w,xj⟩=w⋅xj{\displaystyle f(x_{j})=\langle w,x_{j}\rangle =w\cdot x_{j}}wherexj∈Rd{\displaystyle x_{j}\in \mathbb {R} ^{d}}is a vector of inputs (data points) andw∈Rd{\displaystyle w\in \mathbb {R} ^{d}}is a linear filter vector.
The goal is to compute the filter vectorw{\displaystyle w}.
To this end, a square loss functionV(f(xj),yj)=(f(xj)−yj)2=(⟨w,xj⟩−yj)2{\displaystyle V(f(x_{j}),y_{j})=(f(x_{j})-y_{j})^{2}=(\langle w,x_{j}\rangle -y_{j})^{2}}is used to compute the vectorw{\displaystyle w}that minimizes the empirical lossIn[w]=∑j=1nV(⟨w,xj⟩,yj)=∑j=1n(xjTw−yj)2{\displaystyle I_{n}[w]=\sum _{j=1}^{n}V(\langle w,x_{j}\rangle ,y_{j})=\sum _{j=1}^{n}(x_{j}^{\mathsf {T}}w-y_{j})^{2}}whereyj∈R.{\displaystyle y_{j}\in \mathbb {R} .}
LetX{\displaystyle X}be thei×d{\displaystyle i\times d}data matrix andy∈Ri{\displaystyle y\in \mathbb {R} ^{i}}the column vector of target values after the arrival of the firsti{\displaystyle i}data points.
Assuming that the covariance matrixΣi=XTX{\displaystyle \Sigma _{i}=X^{\mathsf {T}}X}is invertible (otherwise it is preferential to proceed in a similar fashion with Tikhonov regularization), the best solutionf∗(x)=⟨w∗,x⟩{\displaystyle f^{*}(x)=\langle w^{*},x\rangle }to the linear least squares problem is given byw∗=(XTX)−1XTy=Σi−1∑j=1ixjyj.{\displaystyle w^{*}=(X^{\mathsf {T}}X)^{-1}X^{\mathsf {T}}y=\Sigma _{i}^{-1}\sum _{j=1}^{i}x_{j}y_{j}.}
Now, calculating the covariance matrixΣi=∑j=1ixjxjT{\displaystyle \Sigma _{i}=\sum _{j=1}^{i}x_{j}x_{j}^{\mathsf {T}}}takes timeO(id2){\displaystyle O(id^{2})}, inverting thed×d{\displaystyle d\times d}matrix takes timeO(d3){\displaystyle O(d^{3})}, while the rest of the multiplication takes timeO(d2){\displaystyle O(d^{2})}, giving a total time ofO(id2+d3){\displaystyle O(id^{2}+d^{3})}. When there aren{\displaystyle n}total points in the dataset, to recompute the solution after the arrival of every datapointi=1,…,n{\displaystyle i=1,\ldots ,n}, the naive approach will have a total complexityO(n2d2+nd3){\displaystyle O(n^{2}d^{2}+nd^{3})}. Note that when storing the matrixΣi{\displaystyle \Sigma _{i}}, then updating it at each step needs only addingxi+1xi+1T{\displaystyle x_{i+1}x_{i+1}^{\mathsf {T}}}, which takesO(d2){\displaystyle O(d^{2})}time, reducing the total time toO(nd2+nd3)=O(nd3){\displaystyle O(nd^{2}+nd^{3})=O(nd^{3})}, but with an additional storage space ofO(d2){\displaystyle O(d^{2})}to storeΣi{\displaystyle \Sigma _{i}}.[1]
The recursive least squares (RLS) algorithm considers an online approach to the least squares problem. It can be shown that by initialisingw0=0∈Rd{\displaystyle \textstyle w_{0}=0\in \mathbb {R} ^{d}}andΓ0=I∈Rd×d{\displaystyle \textstyle \Gamma _{0}=I\in \mathbb {R} ^{d\times d}}, the solution of the linear least squares problem given in the previous section can be computed by the following iteration:Γi=Γi−1−Γi−1xixiTΓi−11+xiTΓi−1xi{\displaystyle \Gamma _{i}=\Gamma _{i-1}-{\frac {\Gamma _{i-1}x_{i}x_{i}^{\mathsf {T}}\Gamma _{i-1}}{1+x_{i}^{\mathsf {T}}\Gamma _{i-1}x_{i}}}}wi=wi−1−Γixi(xiTwi−1−yi){\displaystyle w_{i}=w_{i-1}-\Gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)}The above iteration algorithm can be proved using induction oni{\displaystyle i}.[2]The proof also shows thatΓi=Σi−1{\displaystyle \Gamma _{i}=\Sigma _{i}^{-1}}. One can look at RLS also in the context of adaptive filters (seeRLS).
The complexity forn{\displaystyle n}steps of this algorithm isO(nd2){\displaystyle O(nd^{2})}, which is an order of magnitude faster than the corresponding batch learning complexity. The storage requirements at every stepi{\displaystyle i}here are to store the matrixΓi{\displaystyle \Gamma _{i}}, which is constant atO(d2){\displaystyle O(d^{2})}. For the case whenΣi{\displaystyle \Sigma _{i}}is not invertible, consider the regularised version of the problem loss function∑j=1n(xjTw−yj)2+λ‖w‖22{\displaystyle \sum _{j=1}^{n}\left(x_{j}^{\mathsf {T}}w-y_{j}\right)^{2}+\lambda \left\|w\right\|_{2}^{2}}. Then, it's easy to show that the same algorithm works withΓ0=(I+λI)−1{\displaystyle \Gamma _{0}=(I+\lambda I)^{-1}}, and the iterations proceed to giveΓi=(Σi+λI)−1{\displaystyle \Gamma _{i}=(\Sigma _{i}+\lambda I)^{-1}}.[1]
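A compact numerical check of the recursion (a sketch with arbitrary synthetic data): with the identity initialisation Γ₀ = I and w₀ = 0, the iteration reproduces the Tikhonov-regularised solution with λ = 1 exactly, i.e. Γᵢ = (Σᵢ + I)⁻¹ and wᵢ the corresponding regularised least squares solution, matching the regularised case described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 50
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=n)

# Recursive least squares: Gamma tracks an inverse (regularised) covariance
# and w the current solution, at O(d^2) cost per incoming data point.
w = np.zeros(d)
Gamma = np.eye(d)
for x_i, y_i in zip(X, y):
    Gx = Gamma @ x_i
    Gamma = Gamma - np.outer(Gx, Gx) / (1 + x_i @ Gx)   # rank-one update
    w = w - Gamma @ x_i * (x_i @ w - y_i)               # solution update
```

The rank-one Γ update is the Sherman–Morrison formula applied to Γᵢ⁻¹ = Γᵢ₋₁⁻¹ + xᵢxᵢᵀ, which is what makes the O(d²) per-step cost possible without any matrix inversion.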
When thiswi=wi−1−Γixi(xiTwi−1−yi){\displaystyle w_{i}=w_{i-1}-\Gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)}is replaced bywi=wi−1−γixi(xiTwi−1−yi)=wi−1−γi∇V(⟨wi−1,xi⟩,yi){\displaystyle w_{i}=w_{i-1}-\gamma _{i}x_{i}\left(x_{i}^{\mathsf {T}}w_{i-1}-y_{i}\right)=w_{i-1}-\gamma _{i}\nabla V(\langle w_{i-1},x_{i}\rangle ,y_{i})}orΓi∈Rd×d{\displaystyle \Gamma _{i}\in \mathbb {R} ^{d\times d}}byγi∈R{\displaystyle \gamma _{i}\in \mathbb {R} }, this becomes the stochastic gradient descent algorithm. In this case, the complexity forn{\displaystyle n}steps of this algorithm reduces toO(nd){\displaystyle O(nd)}. The storage requirements at every stepi{\displaystyle i}are constant atO(d){\displaystyle O(d)}.
However, the stepsizeγi{\displaystyle \gamma _{i}}needs to be chosen carefully to solve the expected risk minimization problem, as detailed above. By choosing a decaying step sizeγi≈1i,{\displaystyle \gamma _{i}\approx {\frac {1}{\sqrt {i}}},}one can prove the convergence of the average iteratew¯n=1n∑i=1nwi{\textstyle {\overline {w}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}w_{i}}. This setting is a special case ofstochastic optimization, a well known problem in optimization.[1]
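A minimal simulation of this scheme (a sketch with arbitrary synthetic data and an arbitrary step-size constant): online SGD on the square loss with the update from the text, a decaying step size γᵢ ∝ 1/√i, and a running average of the iterates.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 20000
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
w_avg = np.zeros(d)
for i, (x_i, y_i) in enumerate(zip(X, y), start=1):
    gamma = 0.1 / np.sqrt(i)                  # decaying step size
    w = w - gamma * x_i * (x_i @ w - y_i)     # one-point square loss update
    w_avg = w_avg + (w - w_avg) / i           # running average of iterates
```

Averaging damps both the initial transient and the gradient noise that the decaying step size leaves behind, which is why convergence guarantees are stated for the averaged iterate.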
In practice, one can perform multiple stochastic gradient passes (also called cycles or epochs) over the data. The algorithm thus obtained is called incremental gradient method and corresponds to an iterationwi=wi−1−γi∇V(⟨wi−1,xti⟩,yti){\displaystyle w_{i}=w_{i-1}-\gamma _{i}\nabla V(\langle w_{i-1},x_{t_{i}}\rangle ,y_{t_{i}})}The main difference with the stochastic gradient method is that here a sequenceti{\displaystyle t_{i}}is chosen to decide which training point is visited in thei{\displaystyle i}-th step. Such a sequence can be stochastic or deterministic. The number of iterations is then decoupled to the number of points (each point can be considered more than once). The incremental gradient method can be shown to provide a minimizer to the empirical risk.[3]Incremental techniques can be advantageous when considering objective functions made up of a sum of many terms e.g. an empirical error corresponding to a very large dataset.[1]
Kernels can be used to extend the above algorithms to non-parametric models (or models where the parameters form an infinite dimensional space). The corresponding procedure will no longer be truly online and instead involve storing all the data points, but is still faster than the brute force method. This discussion is restricted to the case of the square loss, though it can be extended to any convex loss. It can be shown by an easy induction[1]that ifXi{\displaystyle X_{i}}is the data matrix andwi{\displaystyle w_{i}}is the output afteri{\displaystyle i}steps of theSGDalgorithm, then,wi=XiTci{\displaystyle w_{i}=X_{i}^{\mathsf {T}}c_{i}}whereci=((ci)1,(ci)2,...,(ci)i)∈Ri{\displaystyle c_{i}=((c_{i})_{1},(c_{i})_{2},...,(c_{i})_{i})\in \mathbb {R} ^{i}}and the sequenceci{\displaystyle c_{i}}satisfies the recursion:c0=0{\displaystyle c_{0}=0}(ci)j=(ci−1)j,j=1,2,...,i−1{\displaystyle (c_{i})_{j}=(c_{i-1})_{j},j=1,2,...,i-1}and(ci)i=γi(yi−∑j=1i−1(ci−1)j⟨xj,xi⟩){\displaystyle (c_{i})_{i}=\gamma _{i}{\Big (}y_{i}-\sum _{j=1}^{i-1}(c_{i-1})_{j}\langle x_{j},x_{i}\rangle {\Big )}}Notice that here⟨xj,xi⟩{\displaystyle \langle x_{j},x_{i}\rangle }is just the standard Kernel onRd{\displaystyle \mathbb {R} ^{d}}, and the predictor is of the formfi(x)=⟨wi−1,x⟩=∑j=1i−1(ci−1)j⟨xj,x⟩.{\displaystyle f_{i}(x)=\langle w_{i-1},x\rangle =\sum _{j=1}^{i-1}(c_{i-1})_{j}\langle x_{j},x\rangle .}
Now, if a general kernelK{\displaystyle K}is introduced instead and let the predictor befi(x)=∑j=1i−1(ci−1)jK(xj,x){\displaystyle f_{i}(x)=\sum _{j=1}^{i-1}(c_{i-1})_{j}K(x_{j},x)}then the same proof will also show that predictor minimising the least squares loss is obtained by changing the above recursion to(ci)i=γi(yi−∑j=1i−1(ci−1)jK(xj,xi)){\displaystyle (c_{i})_{i}=\gamma _{i}{\Big (}y_{i}-\sum _{j=1}^{i-1}(c_{i-1})_{j}K(x_{j},x_{i}){\Big )}}The above expression requires storing all the data for updatingci{\displaystyle c_{i}}. The total time complexity for the recursion when evaluating for then{\displaystyle n}-th datapoint isO(n2dk){\displaystyle O(n^{2}dk)}, wherek{\displaystyle k}is the cost of evaluating the kernel on a single pair of points.[1]Thus, the use of the kernel has allowed the movement from a finite dimensional parameter spacewi∈Rd{\displaystyle \textstyle w_{i}\in \mathbb {R} ^{d}}to a possibly infinite dimensional feature represented by a kernelK{\displaystyle K}by instead performing the recursion on the space of parametersci∈Ri{\displaystyle \textstyle c_{i}\in \mathbb {R} ^{i}}, whose dimension is the same as the size of the training dataset. In general, this is a consequence of therepresenter theorem.[1]
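The recursion can be checked against the primal iteration: with the linear kernel K(a,b) = ⟨a,b⟩, the coefficients satisfy wᵢ = Xᵢᵀcᵢ, so the kernelized and primal SGD predictors coincide. The sketch below (arbitrary synthetic data and step sizes) runs both and compares.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 30
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
gammas = 0.5 / np.sqrt(np.arange(1, n + 1))

def K(a, b):
    return a @ b  # linear kernel; an RBF kernel would give a nonlinear model

# Kernelized recursion: one coefficient per data point seen so far.
c = np.zeros(n)
for i in range(n):
    pred = sum(c[j] * K(X[j], X[i]) for j in range(i))
    c[i] = gammas[i] * (y[i] - pred)

# Primal SGD with the same step sizes, for comparison.
w = np.zeros(d)
for i in range(n):
    w = w + gammas[i] * X[i] * (y[i] - X[i] @ w)
```

Note the quadratic cost: predicting at step i requires summing over all i − 1 stored coefficients, which is the O(n²) behaviour described in the text.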
Online convex optimization (OCO)[4]is a general framework for decision making which leveragesconvex optimizationto allow for efficient algorithms. The framework is that of repeated game playing as follows:
Fort=1,2,...,T{\displaystyle t=1,2,...,T}:
The learner receives inputxt{\displaystyle x_{t}};
The learner outputswt{\displaystyle w_{t}}from a fixed convex setS{\displaystyle S};
Nature sends back a convex loss functionvt:S→R{\displaystyle v_{t}:S\to \mathbb {R} };
The learner suffers lossvt(wt){\displaystyle v_{t}(w_{t})}and updates its model.
The goal is to minimizeregret, or the difference between cumulative loss and the loss of the best fixed pointu∈S{\displaystyle u\in S}in hindsight. As an example, consider the case of online least squares linear regression. Here, the weight vectors come from the convex setS=Rd{\displaystyle S=\mathbb {R} ^{d}}, and nature sends back the convex loss functionvt(w)=(⟨w,xt⟩−yt)2{\displaystyle v_{t}(w)=(\langle w,x_{t}\rangle -y_{t})^{2}}. Note here thatyt{\displaystyle y_{t}}is implicitly sent withvt{\displaystyle v_{t}}.
Some online prediction problems however cannot fit in the framework of OCO. For example, in online classification, the prediction domain and the loss functions are not convex. In such scenarios, two simple techniques forconvexificationare used:randomisationand surrogate loss functions.[citation needed]
Some simple online convex optimisation algorithms are:
The simplest learning rule to try is to select (at the current step) the hypothesis that has the least loss over all past rounds. This algorithm is called Follow the leader, and roundt{\displaystyle t}is simply given by:wt=argminw∈S∑i=1t−1vi(w){\displaystyle w_{t}=\mathop {\operatorname {arg\,min} } _{w\in S}\sum _{i=1}^{t-1}v_{i}(w)}This method can thus be viewed as agreedy algorithm. For the case of online quadratic optimization (where the loss function isvt(w)=‖w−xt‖22{\displaystyle v_{t}(w)=\left\|w-x_{t}\right\|_{2}^{2}}), one can show a regret bound that grows aslog(T){\displaystyle \log(T)}. However, similar bounds cannot be obtained for the FTL algorithm for other important families of models like online linear optimization. To do so, one modifies FTL by adding regularisation.
This is a natural modification of FTL that is used to stabilise the FTL solutions and obtain better regret bounds. A regularisation functionR:S→R{\displaystyle R:S\to \mathbb {R} }is chosen and learning is performed in roundt{\displaystyle t}as follows:wt=argminw∈S∑i=1t−1vi(w)+R(w){\displaystyle w_{t}=\mathop {\operatorname {arg\,min} } _{w\in S}\sum _{i=1}^{t-1}v_{i}(w)+R(w)}As a special example, consider the case of online linear optimisation, i.e. where nature sends back loss functions of the formvt(w)=⟨w,zt⟩{\displaystyle v_{t}(w)=\langle w,z_{t}\rangle }. Also, letS=Rd{\displaystyle S=\mathbb {R} ^{d}}. Suppose the regularisation functionR(w)=12η‖w‖22{\textstyle R(w)={\frac {1}{2\eta }}\left\|w\right\|_{2}^{2}}is chosen for some positive numberη{\displaystyle \eta }. Then, one can show that the regret minimising iteration becomeswt+1=−η∑i=1tzi=wt−ηzt{\displaystyle w_{t+1}=-\eta \sum _{i=1}^{t}z_{i}=w_{t}-\eta z_{t}}Note that this can be rewritten aswt+1=wt−η∇vt(wt){\displaystyle w_{t+1}=w_{t}-\eta \nabla v_{t}(w_{t})}, which looks exactly like online gradient descent.
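The closed form can be verified numerically: with linear losses and the quadratic regulariser R(w) = ‖w‖²/(2η), the FTRL argmin and the gradient-descent recursion produce identical iterates. A sketch with arbitrary loss vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
eta, T, d = 0.1, 100, 3
Z = rng.normal(size=(T, d))  # linear loss vectors z_t chosen by "nature"

w_ftrl = np.zeros(d)
w_ogd = np.zeros(d)
for t in range(T):
    w_ogd = w_ogd - eta * Z[t]                # gradient step on v_t(w) = <w, z_t>
    # argmin_w of sum_{i<=t} <w, z_i> + ||w||^2 / (2 * eta), in closed form:
    w_ftrl = -eta * Z[: t + 1].sum(axis=0)
```

Setting the gradient of the regularised objective to zero gives Σᵢzᵢ + w/η = 0, i.e. w = −η·Σᵢzᵢ, which is exactly the telescoped sum of the gradient steps; this is the equivalence stated above.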
If $S$ is instead some convex subspace of $\mathbb{R}^d$, the iterate must be projected onto it, leading to the modified update rule

$w_{t+1} = \Pi_S\Bigl(-\eta \sum_{i=1}^{t} z_i\Bigr) = \Pi_S(\eta \theta_{t+1})$

This algorithm is known as lazy projection, as the vector $\theta_{t+1}$ accumulates the gradients. It is also known as Nesterov's dual averaging algorithm. In this scenario of linear loss functions and quadratic regularisation, the regret is bounded by $O(\sqrt{T})$, and thus the average regret goes to 0, as desired.
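A minimal sketch of lazy projection, assuming $S$ is the Euclidean unit ball (a common choice for illustration; the loss vectors are made up). The gradient sum is accumulated unprojected, and only the played iterate is projected:

```python
import numpy as np

def project_ball(theta, radius=1.0):
    # Euclidean projection onto the ball {w : ||w|| <= radius}
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

# Lazy projection: accumulate raw gradients in theta, project the
# scaled sum each round to obtain the played point w.
eta = 0.5
zs = [np.array([2.0, 0.0]), np.array([2.0, 2.0])]
theta = np.zeros(2)
for z in zs:
    theta -= z                      # theta_{t+1} = -sum_{i<=t} z_i
    w = project_ball(eta * theta)   # w_{t+1} = Pi_S(eta * theta_{t+1})
print(w)                            # lies on the boundary of the unit ball
```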
The above proved a regret bound for linear loss functions $v_t(w) = \langle w, z_t \rangle$. To generalise the algorithm to any convex loss function, the subgradient $\partial v_t(w_t)$ of $v_t$ is used as a linear approximation to $v_t$ near $w_t$, leading to the online subgradient descent (OSD) algorithm:
Initialise parameter $\eta$, $w_1 = 0$
For $t = 1, 2, \ldots, T$:
&nbsp;&nbsp;Predict with $w_t$, and receive the loss function $v_t$
&nbsp;&nbsp;Choose a subgradient $z_t \in \partial v_t(w_t)$
&nbsp;&nbsp;Update $w_{t+1} = \Pi_S(w_t - \eta z_t)$
One can use the OSD algorithm to derive $O(\sqrt{T})$ regret bounds for the online version of SVMs for classification, which use the hinge loss $v_t(w) = \max\{0, 1 - y_t (w \cdot x_t)\}$.
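A sketch of online subgradient descent on the hinge loss, with $S = \mathbb{R}^d$ so no projection is needed (the toy data stream and step size are illustrative assumptions). A subgradient of the hinge loss is $-y_t x_t$ when the margin is below 1, and 0 otherwise:

```python
import numpy as np

# Online subgradient descent for v_t(w) = max(0, 1 - y_t * <w, x_t>).
def osd_hinge(stream, eta=0.1):
    w = np.zeros(len(stream[0][0]))
    for x, y in stream:
        if 1 - y * np.dot(w, x) > 0:   # hinge active: subgradient is -y*x
            w = w + eta * y * x        # w_{t+1} = w_t - eta * (-y*x)
    return w

# toy linearly separable stream, repeated to let the margins grow
stream = [(np.array([1.0, 0.5]), +1), (np.array([-1.0, -0.3]), -1)] * 50
w = osd_hinge(stream, eta=0.1)
print(np.dot(w, stream[0][0]) > 0)     # True: first point classified correctly
```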
Quadratically regularised FTRL algorithms lead to the lazily projected gradient algorithms described above. To use the above for arbitrary convex functions and regularisers, one uses online mirror descent. The optimal regularisation in hindsight can be derived for linear loss functions; this leads to the AdaGrad algorithm. For the Euclidean regularisation, one can show a regret bound of $O(\sqrt{T})$, which can be improved further to $O(\log T)$ for strongly convex and exp-concave loss functions.
Continual learning means constantly improving the learned model by processing continuous streams of information.[5] Continual learning capabilities are essential for software systems and autonomous agents interacting in an ever-changing real world. However, continual learning is a challenge for machine learning and neural network models, since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting.
The paradigm of online learning has different interpretations depending on the choice of the learning model, each of which has distinct implications about the predictive quality of the sequence of functions $f_1, f_2, \ldots, f_n$. The prototypical stochastic gradient descent algorithm is used for this discussion. As noted above, its recursion is given by

$w_t = w_{t-1} - \gamma_t \nabla V(\langle w_{t-1}, x_t \rangle, y_t)$
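As a concrete sketch of this recursion (the squared loss, the synthetic noiseless data, and the constant step size are illustrative assumptions, not from the text), one pass of SGD over a small linear-regression stream:

```python
import numpy as np

# One pass of w_t = w_{t-1} - gamma * grad V on the squared loss
# V(<w,x>, y) = (<w,x> - y)^2, whose gradient in w is 2*(<w,x> - y)*x.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true                     # noiseless linear targets

w = np.zeros(2)
gamma = 0.05                       # constant step size for simplicity
for x_t, y_t in zip(X, y):
    grad = 2 * (np.dot(w, x_t) - y_t) * x_t
    w = w - gamma * grad

print(np.round(w, 2))              # approaches w_true = [2, -1]
```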
The first interpretation considers the stochastic gradient descent method as applied to the problem of minimizing the expected risk $I[w]$ defined above.[6] Indeed, in the case of an infinite stream of data, since the examples $(x_1, y_1), (x_2, y_2), \ldots$ are assumed to be drawn i.i.d. from the distribution $p(x,y)$, the sequence of gradients of $V(\cdot,\cdot)$ in the above iteration is an i.i.d. sample of stochastic estimates of the gradient of the expected risk $I[w]$, and therefore one can apply complexity results for the stochastic gradient descent method to bound the deviation $I[w_t] - I[w^{\ast}]$, where $w^{\ast}$ is the minimizer of $I[w]$.[7] This interpretation is also valid in the case of a finite training set; although with multiple passes through the data the gradients are no longer independent, complexity results can still be obtained in special cases.
The second interpretation applies to the case of a finite training set and considers the SGD algorithm as an instance of the incremental gradient descent method.[3] In this case, one instead looks at the empirical risk:

$I_n[w] = \frac{1}{n} \sum_{i=1}^{n} V(\langle w, x_i \rangle, y_i)$

Since the gradients of $V(\cdot,\cdot)$ in the incremental gradient descent iterations are also stochastic estimates of the gradient of $I_n[w]$, this interpretation is also related to the stochastic gradient descent method, but applied to minimize the empirical risk as opposed to the expected risk. Since this interpretation concerns the empirical risk and not the expected risk, multiple passes through the data are readily allowed and actually lead to tighter bounds on the deviations $I_n[w_t] - I_n[w_n^{\ast}]$, where $w_n^{\ast}$ is the minimizer of $I_n[w]$.
https://en.wikipedia.org/wiki/Online_machine_learning
Stochastic hill climbing is a variant of the basic hill climbing method. While basic hill climbing always chooses the steepest uphill move, "stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move."[1]
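A minimal sketch of the idea on a one-dimensional integer landscape (the objective and neighbourhood are illustrative assumptions). This version picks uniformly at random among uphill neighbours; per the quoted definition, the choice could instead be weighted by steepness:

```python
import random

def stochastic_hill_climb(f, x0, neighbours, steps=100, rng=None):
    rng = rng or random.Random(0)
    x = x0
    for _ in range(steps):
        uphill = [n for n in neighbours(x) if f(n) > f(x)]
        if not uphill:
            break                   # local optimum: no uphill move exists
        x = rng.choice(uphill)      # a random uphill move, not the steepest
    return x

f = lambda x: -(x - 7) ** 2         # single peak at x = 7
neighbours = lambda x: [x - 1, x + 1]
print(stochastic_hill_climb(f, 0, neighbours))  # climbs to the peak: 7
```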
https://en.wikipedia.org/wiki/Stochastic_hill_climbing
(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite-sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting.
Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines,[1] as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
A function $f$ is considered to have finite sum structure if it can be decomposed into a summation or average:

$f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$
where the function value and derivative of each $f_i$ can be queried independently. Although variance reduction methods can be applied for any positive $n$ and any $f_i$ structure, their favorable theoretical and practical properties arise when $n$ is large compared to the condition number of each $f_i$, and when the $f_i$ have similar (but not necessarily identical) Lipschitz smoothness and strong convexity constants.
The finite sum structure should be contrasted with the stochastic approximation setting, which deals with functions of the form $f(\theta) = \operatorname{E}_{\xi}[F(\theta, \xi)]$, the expected value of a function depending on a random variable $\xi$. Any finite sum problem can be optimized using a stochastic approximation algorithm by using $F(\cdot, \xi) = f_{\xi}$.
Stochastic variance reduced methods without acceleration are able to find a minimum of $f$ to accuracy $\epsilon > 0$, i.e. $f(x) - f(x_*) \leq \epsilon$, in a number of steps of the order:

$O\bigl((n + L/\mu)\log(1/\epsilon)\bigr)$
The number of steps depends only logarithmically on the level of accuracy required, in contrast to the stochastic approximation framework, where the number of steps required, $O\bigl(L/(\mu\epsilon)\bigr)$, grows in proportion to $1/\epsilon$.
Stochastic variance reduction methods converge almost as fast as the gradient descent method's $O\bigl((L/\mu)\log(1/\epsilon)\bigr)$ rate, despite using only a stochastic gradient, at a per-step cost $1/n$ times that of gradient descent.
Accelerated methods in the stochastic variance reduction framework achieve even faster convergence rates, requiring only

$O\bigl((n + \sqrt{nL/\mu})\log(1/\epsilon)\bigr)$

steps to reach $\epsilon$ accuracy, potentially $\sqrt{n}$ faster than non-accelerated methods. Lower complexity bounds[2] for the finite sum class establish that this rate is the fastest possible for smooth strongly convex problems.
Variance reduction approaches fall within three main categories: table averaging methods, full-gradient snapshot methods, and dual methods. Each category contains methods designed for dealing with convex, non-smooth, and non-convex problems, each differing in hyper-parameter settings and other algorithmic details.
In the SAGA method,[3] the prototypical table averaging approach, a table of size $n$ is maintained that contains the last gradient witnessed for each $f_i$ term, which we denote $g_i$. At each step, an index $i$ is sampled, and a new gradient $\nabla f_i(x_k)$ is computed. The iterate $x_k$ is updated with:

$x_{k+1} = x_k - \gamma\Bigl[\nabla f_i(x_k) - g_i + \frac{1}{n}\sum_{j=1}^{n} g_j\Bigr]$
and afterwards table entry $i$ is updated with $g_i = \nabla f_i(x_k)$.
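A minimal SAGA sketch for a least-squares finite sum with terms $f_i(x) = \tfrac{1}{2}(a_i \cdot x - b_i)^2$ (the data and step size are illustrative assumptions, not from the source):

```python
import numpy as np

def saga(A, b, gamma=0.05, iters=2000, seed=0):
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    table = np.zeros((n, d))              # last gradient seen for each f_i
    avg = table.mean(axis=0)              # running average of the table
    for _ in range(iters):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]  # gradient of f_i at the current x
        x = x - gamma * (g_new - table[i] + avg)
        avg += (g_new - table[i]) / n     # keep the average consistent
        table[i] = g_new
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])             # consistent system: minimiser [1, 2]
print(np.round(saga(A, b), 3))
```

Because the system is consistent, every $f_i$ is minimised at the same point, and the iterates converge to it linearly.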
SAGA is among the most popular of the variance reduction methods due to its simplicity, easily adaptable theory, and excellent performance. It is the successor of the SAG method,[4]improving on its flexibility and performance.
The stochastic variance reduced gradient method (SVRG),[5] the prototypical snapshot method, uses a similar update, except instead of the average of a table it uses a full gradient reevaluated at a snapshot point $\tilde{x}$ at regular intervals of $m \geq n$ iterations. The update becomes:

$x_{k+1} = x_k - \gamma\Bigl[\nabla f_i(x_k) - \nabla f_i(\tilde{x}) + \frac{1}{n}\sum_{j=1}^{n} \nabla f_j(\tilde{x})\Bigr]$
This approach requires two stochastic gradient evaluations per step, one to compute $\nabla f_i(x_k)$ and one to compute $\nabla f_i(\tilde{x})$, whereas table averaging approaches need only one.
Despite the higher computational cost, SVRG is popular because its simple convergence theory is highly adaptable to new optimization settings. It also has lower storage requirements than tabular averaging approaches, which make it applicable in many settings where tabular methods cannot be used.
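A matching SVRG sketch on the same illustrative least-squares finite sum as above (terms $f_i(x) = \tfrac{1}{2}(a_i \cdot x - b_i)^2$; the epoch count, inner loop length, and step size are assumptions chosen for the toy problem):

```python
import numpy as np

def svrg(A, b, gamma=0.05, epochs=100, m=10, seed=0):
    n, d = A.shape
    rng = np.random.default_rng(seed)
    grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
    x = np.zeros(d)
    for _ in range(epochs):
        snapshot = x.copy()                                   # x~
        full = sum(grad(i, snapshot) for i in range(n)) / n   # full gradient at x~
        for _ in range(m):                                    # m >= n inner steps
            i = rng.integers(n)
            x = x - gamma * (grad(i, x) - grad(i, snapshot) + full)
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.round(svrg(A, b), 3))
```

Note the two stochastic gradient evaluations per inner step, versus one for SAGA, in exchange for storing only the snapshot and its full gradient.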
Exploiting the dual representation of the objective leads to another variance reduction approach that is particularly suited to finite sums where each term has a structure that makes computing the convex conjugate $f_i^*$ or its proximal operator tractable. The standard SDCA method[6] considers finite sums that have additional structure compared to the generic finite sum setting:

$f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(\langle x, v_i \rangle) + \frac{\lambda}{2}\|x\|^2$
where each $f_i$ is 1-dimensional and each $v_i$ is a data point associated with $f_i$.
SDCA solves the dual problem:

$\max_{\alpha}\; \frac{1}{n}\sum_{i=1}^{n} -f_i^*(-\alpha_i) \;-\; \frac{\lambda}{2}\Bigl\|\frac{1}{\lambda n}\sum_{i=1}^{n} \alpha_i v_i\Bigr\|^2$
by a stochastic coordinate ascent procedure, where at each step the objective is optimized with respect to a randomly chosen coordinate $\alpha_i$, leaving all other coordinates the same. An approximate primal solution $x$ can be recovered from the $\alpha$ values:

$x = \frac{1}{\lambda n}\sum_{i=1}^{n} \alpha_i v_i$
This method obtains theoretical rates of convergence similar to other stochastic variance reduced methods, while avoiding the need to specify a step-size parameter. It is fast in practice when $\lambda$ is large, but significantly slower than the other approaches when $\lambda$ is small.
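An SDCA sketch for ridge-regularised least squares, where each term is $f_i(z) = \tfrac{1}{2}(z - b_i)^2$. The closed-form coordinate maximisation used below is derived for this squared-loss case and is an illustration (the data and $\lambda$ are assumptions, not from the source):

```python
import numpy as np

# SDCA for  min_x (1/n) sum_i 0.5*(v_i . x - b_i)^2 + (lam/2)*||x||^2.
# The primal iterate is maintained as x = (1/(lam*n)) * sum_i alpha_i v_i.
def sdca(V, b, lam=0.1, iters=3000, seed=0):
    n, d = V.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        x_rest = x - alpha[i] * V[i] / (lam * n)   # x without coordinate i
        # exact maximisation of the dual in coordinate alpha_i (squared loss)
        alpha[i] = (b[i] - V[i] @ x_rest) / (1 + V[i] @ V[i] / (lam * n))
        x = x_rest + alpha[i] * V[i] / (lam * n)
    return x

V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.round(sdca(V, b, lam=0.1), 3))
```

Note that no step size appears anywhere: each coordinate update solves its one-dimensional subproblem exactly.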
Accelerated variance reduction methods are built upon the standard methods above. The earliest approaches make use of proximal operators to accelerate convergence, either approximately or exactly. Direct acceleration approaches have also been developed.[7]
The catalyst framework[8] uses any of the standard methods above as an inner optimizer to approximately solve a proximal operator:

$x_k \approx \operatorname{arg\,min}_{x}\Bigl\{ f(x) + \frac{\kappa}{2}\|x - y_{k-1}\|^2 \Bigr\}$
after which it uses an extrapolation step to determine the next $y$:

$y_k = x_k + \beta_k (x_k - x_{k-1})$
The catalyst method's flexibility and simplicity make it a popular baseline approach. It does not achieve the optimal rate of convergence among accelerated methods; it is potentially slower by up to a log factor in the hyper-parameters.
Proximal operations may also be applied directly to the $f_i$ terms to yield an accelerated method. The Point-SAGA method[9] replaces the gradient operations in SAGA with proximal operator evaluations, resulting in a simple, direct acceleration method:

$x_{k+1} = \operatorname{prox}_j^{\gamma}(z_k), \quad \text{where } z_k = x_k + \gamma\Bigl(g_j - \frac{1}{n}\sum_{i=1}^{n} g_i\Bigr)$
with the table update $g_j = \frac{1}{\gamma}(z_k - x_{k+1})$ performed after each step. Here $\operatorname{prox}_j^{\gamma}$ is defined as the proximal operator for the $j$-th term:

$\operatorname{prox}_j^{\gamma}(z) = \operatorname{arg\,min}_{x}\Bigl\{ f_j(x) + \frac{1}{2\gamma}\|x - z\|^2 \Bigr\}$
Unlike other known accelerated methods, Point-SAGA requires only a single iterate sequencex{\displaystyle x}to be maintained between steps, and it has the advantage of only having a single tunable parameterγ{\displaystyle \gamma }. It obtains the optimal accelerated rate of convergence for strongly convex finite-sum minimization without additional log factors.
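For a single least-squares term $f_j(x) = \tfrac{1}{2}(a \cdot x - b)^2$ (an illustrative choice), the proximal operator used by Point-SAGA has a closed form, derived from the optimality condition $x + \gamma \nabla f_j(x) = z$:

```python
import numpy as np

# prox of f_j(x) = 0.5*(a . x - b)^2 with parameter gamma:
#   prox(z) = z - gamma * a * (a.z - b) / (1 + gamma * ||a||^2)
def prox_least_squares(z, a, b, gamma):
    return z - gamma * a * (a @ z - b) / (1 + gamma * (a @ a))

a = np.array([1.0, 2.0]); b = 3.0; gamma = 0.7
z = np.array([0.5, -1.0])
x = prox_least_squares(z, a, b, gamma)
# verify the prox optimality condition: x + gamma * grad f_j(x) == z
print(np.allclose(x + gamma * a * (a @ x - b), z))  # True
```

Evaluating such a prox costs about the same as a gradient step here, which is why Point-SAGA can match SAGA's per-step cost on structured terms.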
https://en.wikipedia.org/wiki/Stochastic_variance_reduction
In philosophy, systems theory, science, and art, emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and which emerge only when the parts interact in a wider whole.
Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry and physics.
In philosophy, theories that emphasize emergent properties have been called emergentism.[1]
Philosophers often understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann (1882–1950), one of the first modern philosophers to write on emergence, termed this a categorial novum (new category).[2]
This concept of emergence dates from at least the time of Aristotle.[3] Many scientists and philosophers[4] have written on the concept, including John Stuart Mill (Composition of Causes, 1843)[5] and Julian Huxley (1887–1975).[6]
The philosopher G. H. Lewes coined the term "emergent" in 1875, distinguishing it from the merely "resultant":
Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.[7][8]
Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division is Weak Emergence, by philosopher Mark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flock of starlings in flight or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated, analysed or reduced.[9]
David Chalmers writes that emergence often causes confusion in philosophy and science due to a failure to demarcate strong and weak emergence, which are "quite different concepts".[10]
Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties are supervenient rather than metaphysically primitive.[9]
Weak emergence describes new properties arising in systems as a result of interactions at a fundamental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of a reductionist analysis. As a consequence the emerging properties are scale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fully deterministic.[citation needed]
Bedau notes that weak emergence is not a universal metaphysical solvent, as the hypothesis that consciousness is weakly emergent would not resolve the traditional philosophical questions about the physicality of consciousness. However, Bedau concludes that adopting this view would provide a precise notion that emergence is involved in consciousness, and second, that the notion of weak emergence is metaphysically benign.[9]
Strong emergence describes the direct causal action of a high-level system on its components; qualities produced this way are irreducible to the system's constituent parts.[11] The whole is other than the sum of its parts. It is argued then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts.[9] Physics lacks well-established examples of strong emergence, unless it is interpreted as the impossibility in practice to explain the whole in terms of the parts. Practical impossibility may be a more useful distinction than one in principle, since it is easier to determine and quantify, and does not imply the use of mysterious forces, but simply reflects the limits of our capability.[12]
One of the reasons for the importance of distinguishing these two concepts with respect to their difference concerns the relationship of purported emergent properties to science. Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:
Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.[9]
The concern that strong emergence does so entail is that such a consequence must be incompatible with metaphysical principles such as the principle of sufficient reason or the Latin dictum ex nihilo nihil fit, often translated as "nothing comes from nothing".[13]
Strong emergence can be criticized for leading to causal overdetermination. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗? Jaegwon Kim says:
In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood as nomological (law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause... If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.[14]
If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape route that a strong emergentist could take would be to deny downward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would call physicalism into question, and thus be unpalatable for some philosophers and physicists.
Carroll and Parola propose a taxonomy that classifies emergent phenomena by how the macro-description relates to the underlying micro-dynamics.[15]
Crutchfield regards the properties of complexity and organization of any system as subjective qualities determined by the observer.
Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer's notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer's chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.[16]
The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. the movement of molecules or elementary particles) and concludes that the system has low entropy.[17] On the other hand, chaotic, unpredictable behaviour can also be seen as subjectively emergent, while at a microscopic scale the movement of the constituent parts can be fully deterministic.
In physics, emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.[18][19]
An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity.[20]
According to Robert Laughlin,[11] for many-particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details.
Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak, or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence.[21] Theoretical physicist Philip W. Anderson states it this way:
The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.[22]
Meanwhile, others have worked towards developing analytical evidence of strong emergence. Renormalization methods in theoretical physics enable physicists to study critical phenomena that are not tractable as the combination of their parts.[23] In 2009, Gu et al. presented a class of infinite physical systems that exhibits non-computable macroscopic properties.[24][25] More precisely, if one could compute certain macroscopic properties of these systems from their microscopic description, then one would be able to solve computational problems known to be undecidable in computer science. These results concern infinite systems; finite systems are considered computable. However, macroscopic concepts which only apply in the limit of infinite systems, such as phase transitions and the renormalization group, are important for understanding and modeling real, finite physical systems. Gu et al. concluded that
Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.[24]
Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure.[26] An early argument (1904–05) for the emergence of social formations can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism.[27] Recently, the emergence of a new social system has been linked with the emergence of order from nonlinear relationships among multiple interacting units, where the units are individual thoughts, consciousness, and actions.[28] In the case of the global economic system, under capitalism, growth, accumulation, and innovation can be considered emergent processes, where not only do technological processes sustain growth, but growth becomes the source of further innovations in a recursive, self-expanding spiral. In this sense, the exponential trend of the growth curve reveals the presence of a long-term positive feedback among growth, accumulation, and innovation, and the emergence of new structures and institutions connected to the multi-scale process of growth.[29] This is reflected in the work of Karl Polanyi, who traces the process by which labor and nature are converted into commodities in the passage from an economic system based on agriculture to one based on industry.[30] This shift, along with the idea of the self-regulating market, set the stage not only for another economy but also for another society. The principle of emergence is also brought forth when thinking about alternatives to the current growth-based economic system in the face of social and ecological limits. Both degrowth and social ecological economics have argued in favor of a co-evolutionary perspective for theorizing about transformations that overcome the dependence of human wellbeing on economic growth.[31][32]
Economic trends and patterns which emerge are studied intensively by economists.[33] Within the field of group facilitation and organization development, a number of group processes have been designed to maximize emergence and self-organization by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010[34]). In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.[35]
Looking at emergence in the context of social and systems change invites us to reframe our thinking on parts and wholes and their interrelation. Unlike machines, living systems at all levels of recursion (be it a sentient body, a tree, a family, an organisation, the education system, the economy, the health system, or the political system) are continuously creating themselves. They are continually growing and changing along with their surrounding elements, and therefore are more than the sum of their parts. As Peter Senge and co-authors put forward in the book Presence: Exploring Profound Change in People, Organizations and Society, "as long as our thinking is governed by habit - notably industrial, "machine age" concepts such as control, predictability, standardization, and "faster is better" - we will continue to recreate institutions as they have been, despite their disharmony with the larger world, and the need for all living systems to evolve."[36] While change is predictably constant, it is unpredictable in direction and often occurs at second and nth orders of systemic relationality.[37] Understanding emergence, and what creates the conditions for different forms of emergence to occur (whether insidious or nourishing of vitality), is essential in the search for deep transformations.
The works of Nora Bateson and her colleagues at the International Bateson Institute delve into this. Since 2012, they have been researching questions such as: what makes a living system ready to change? Can unforeseen readiness for change be nourished? Here, being ready is not thought of as being prepared, but rather as nourishing the flexibility we do not yet know will be needed. These inquiries challenge the common view that a theory of change is produced from an identified preferred goal or outcome. As explained in their paper An essay on ready-ing: Tending the prelude to change:[37] "While linear managing or controlling of the direction of change may appear desirable, tending to how the system becomes ready allows for pathways of possibility previously unimagined." This brings a new lens to the field of emergence in social and systems change, as it looks to tending the pre-emergent process. Warm Data Labs are the fruit of their praxis: they are spaces for transcontextual mutual learning in which aphanipoetic phenomena unfold.[38] Having hosted hundreds of Warm Data processes with thousands of participants, they have found that these spaces of shared poly-learning across contexts lead to a realm of potential change, a necessarily obscured zone of wild interaction of unseen, unsaid, unknown flexibility.[37] It is such flexibility that nourishes the readiness living systems require to respond to complex situations in new ways and to change. In other words, this readying process preludes what will emerge. When exploring questions of social change, it is important to ask what is submerging in the current social imaginary and perhaps, rather than focus all our resources and energy on driving direct-order responses, to nourish flexibility within ourselves and the systems we are a part of.
Another approach that engages with the concept of emergence for social change is Theory U, where "deep emergence" is the result of self-transcending knowledge after a successful journey along the U through layers of awareness.[39]This practice nourishes transformation at the inner-being level, which enables new ways of being, seeing and relating to emerge. The concept of emergence has also been employed in the field offacilitation. InEmergent Strategy,adrienne maree browndefines emergent strategies as "ways for humans to practice complexity and grow the future through relatively simple interactions".[40]
Inlinguistics, the concept of emergence has been applied in the domain ofstylometryto explain the interrelation between the syntactical structures of the text and the author style (Slautina, Marusenko, 2014).[41]It has also been argued that the structure and regularity oflanguagegrammar, or at leastlanguage change, is an emergent phenomenon.[42]While each speaker merely tries to reach their own communicative goals, they use language in a particular way. If enough speakers behave in that way, language is changed.[43]In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech society, can be seen as a system emerging from long-time participation in communicative problem-solving in various social circumstances.[44]
The bulk conductive response of binary (RC) electrical networks with random arrangements, known as theUniversal dielectric response(UDR), can be seen as emergent properties of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems.[45]Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism,TCPflows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows which can be considered as aphase transition.[46]Some artificially intelligent (AI) computer applications simulate emergent behavior.[47]One example isBoids, which mimics theswarming behaviorof birds.[48]
In religion, emergence grounds expressions ofreligious naturalismandsyntheismin which a sense of thesacredis perceived in the workings of entirely naturalistic processes by which morecomplexforms arise or evolve from simpler forms. Examples are detailed inThe Sacred Emergence of NaturebyUrsula Goodenough&Terrence DeaconandBeyond Reductionism: Reinventing the SacredbyStuart Kauffman, both from 2006, as well asSyntheism – Creating God in The Internet AgebyAlexander Bard&Jan Söderqvistfrom 2014 andEmergentism: A Religion of Complexity for the Metamodern Worldby Brendan Graham Dempsey (2022).[citation needed]
Michael J. Pearcehas used emergence to describe the experience of works of art in relation to contemporary neuroscience.[49]Practicing artistLeonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles.[50]
|
https://en.wikipedia.org/wiki/Emergence
|
Biocybernetics is the application of cybernetics to biological science disciplines such as neurology and multicellular systems. Biocybernetics plays a major role in systems biology, seeking to integrate different levels of information to understand how biological systems function. The field of cybernetics itself has origins in biological disciplines such as neurophysiology. Biocybernetics is an abstract science and is a fundamental part of theoretical biology, based upon the principles of systemics. It also seeks to understand how the human body functions as a biological system and performs complex mental functions like thought processing, motion, and maintaining homeostasis (PsychologyDictionary.org). Within this field, many distinct qualities allow for distinctions between cybernetic groups, such as between humans and social insects like bees and ants: worker bees follow the commands of the queen bee (Seeley, 1989), whereas humans, although they often work together, can also separate from the group and think for themselves (Gackenbach, J. 2007).
The branching of society is more similar to plant reproduction than to animal reproduction. Humans are a K-selected species that typically have fewer offspring, which they nurture for longer periods, than r-selected species. It could be argued that when Britain created colonies in regions like North America and Australia, these colonies, once they became independent, should be seen as offspring of British society. Like all children, the colonies inherited many characteristics, such as language, customs and technologies, from their parent, but still developed their own personalities. This form of reproduction is most similar to the vegetative reproduction used by many plants, such as vines and grasses, where the parent plant produces offshoots that spread ever further from the core. When such a shoot, once it has produced its own roots, is separated from the mother plant, it survives independently and defines a new plant. Thus, the growth of society is more like that of plants than that of the higher animals we are most familiar with: there is no clear distinction between a parent and its offspring. Superorganisms are also capable of so-called "distributed intelligence": a system composed of individual agents with limited intelligence and information can pool resources to complete goals beyond the reach of the individuals on their own. This is similar to the concept of game theory (Durlauf, S.N., Blume, L.E. 2010), in which individuals make choices based on the behavior of the other players so as to reach the most profitable outcome for themselves as individuals rather than for the group.
Biocybernetics is a conjoined word from bio (Greek: βίο / life) andcybernetics(Greek: κυβερνητική / controlling-governing). Although the extended form of the word is biological cybernetics, the field is most commonly referred to as biocybernetics in scientific papers.
Early proponents of biocybernetics include Ross Ashby, Hans Drischel, and Norbert Wiener among others. Popular papers published by each scientist are listed below.
Ross Ashby, "Introduction to Cybernetics", 1956[1]
Hans Drischel, "Einführung in die Biokybernetik", 1972[2]
Norbert Wiener, "Cybernetics or Control and Communication in the Animal and the Machine", 1948[3]
Papers and research that delve into topics involving biocybernetics may be found under a multitude of similar names, including molecular cybernetics, neurocybernetics, and cellular cybernetics. Such fields involve disciplines that specify certain aspects of the study of the living organism (for example, neurocybernetics focuses on the study of neurological models in organisms).
|
https://en.wikipedia.org/wiki/Biological_cybernetics
|
Bio-inspired computing, short forbiologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates toconnectionism,social behavior, andemergence. Withincomputer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset ofnatural computation.
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as a Turing machine. Turing first described the abstract construct using a biological specimen. Turing imagined a mathematician with three important attributes.[1] He always has a pencil with an eraser, an unlimited number of papers, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas he was able to describe an abstraction of the modern digital computer. However, Turing mentioned that anything that can perform these functions can be considered such a machine, and he even said that electricity should not be required to describe digital computation and machine thinking in general.[2]
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms.[3] They showed mathematically that a system of simplistic neurons was able to produce simple logical operations such as logical conjunction, disjunction and negation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970 research on neural networks slowed down, and many consider a 1969 book by Marvin Minsky and Seymour Papert to be the main cause.[4][5] Their book showed that neural network models of the time could only model systems based on Boolean functions that are true only after a certain threshold value, also known as threshold functions, and that a large number of systems cannot be represented in this way, meaning that such systems cannot be modeled by those neural networks. A 1986 book by David Rumelhart and James McClelland brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.[6]
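The threshold units that McCulloch and Pitts described can be sketched in a few lines. This is an illustrative reconstruction, not code from the original paper; the function names and weights are choices made here:

```python
# Illustrative sketch of a McCulloch-Pitts threshold unit: it fires
# (outputs 1) when the weighted sum of its binary inputs reaches a
# threshold. Suitable weights and thresholds yield the basic logical
# operations mentioned above.

def mp_unit(inputs, weights, threshold):
    """Binary threshold neuron: 1 if the weighted input sum >= threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(x, y):   # conjunction: both inputs must fire
    return mp_unit([x, y], [1, 1], 2)

def OR(x, y):    # disjunction: either input suffices
    return mp_unit([x, y], [1, 1], 1)

def NOT(x):      # negation: an inhibitory (negative-weight) input
    return mp_unit([x], [-1], 0)
```

Composing such units into layered networks is what lets them carry out any finite-memory computation.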
Ant Colonies
Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent calculations even though the individuals comprising the system are not intelligent.[7] More specifically, he gave the example of an ant colony that can carry out intelligent tasks together although no individual ant can, exhibiting what is called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm able to output the number of clusters, produces highly competitive final clusters comparable to those of other traditional algorithms.[8] Lastly, Hölder and Wilson in 2009 concluded using historical data that ants have evolved to function as a single "superorganism" colony.[9] This is an important result because it suggests that group-selection evolutionary algorithms, coupled with algorithms similar to the "ant colony" algorithm, could potentially be used to develop more powerful algorithms.
Some areas of study in biologically inspired computing, and their biological counterparts:
Bio-inspired algorithms that work on a population of possible solutions, whether in the context of evolutionary algorithms or of swarm intelligence algorithms, are grouped as Population Based Bio-Inspired Algorithms (PBBIA).[10] They include Evolutionary Algorithms, Particle Swarm Optimization, Ant colony optimization algorithms and Artificial bee colony algorithms.
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in an unknown terrain for finding food equipped with six simple rules:
The virtual insect controlled by the trainedspiking neural networkcan find food after training in any unknown terrain.[11]After several generations of rule application it is usually the case that some forms of complex behaviouremerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (seecomplex systems). For this reason, when modeling theneural network, it is necessary to accurately model anin vivonetwork, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.[12]
Natural evolution is a good analogy to this method–the rules of evolution (selection,recombination/reproduction,mutationand more recentlytransposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used ingenetic algorithms.
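The selection/recombination/mutation loop can be sketched as a minimal genetic algorithm. The OneMax objective (maximize the number of 1 bits), population size, and rates below are illustrative choices made here, not taken from the text:

```python
import random

# Minimal genetic-algorithm sketch: simple rules (selection,
# recombination, mutation) applied repeatedly evolve bit-strings
# toward the all-ones string.
random.seed(1)                  # fixed seed so the run is reproducible
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):              # OneMax: count the 1 bits
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]    # selection: keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)        # recombination (crossover)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                # occasional point mutation
            child[random.randrange(LENGTH)] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

Even this toy loop shows the pattern described above: none of the individual rules encodes the goal, yet fit solutions accumulate over generations.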
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable the machine to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and finally to achieve or exceed the level of human intelligence.
Artificial intelligenceresearchers are now aware of the benefits of learning from the brain information processing mechanism. And the progress of brain science and neuroscience also provides the necessary basis for artificial intelligence to learn from the brain information processing mechanism. Brain and neuroscience researchers are also trying to apply the understanding of brain information processing to a wider range of science field. The development of the discipline benefits from the push of information technology and smart technology and in turn brain and neuroscience will also inspire the next generation of the transformation of information technology.
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, support researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and are trying to reveal the structure of bio-intelligence from different aspects and functional basis. From the microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopicnetwork connection model, to the links in the macroscopic brain interval and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building a future brain-inspired computing model.[13]
Broadly speaking, brain-inspired chip refers to a chip designed with reference to the structure of human brain neurons and the cognitive mode of human brain. Obviously, the "neuromorphicchip" is a brain-inspired chip that focuses on the design of the chip structure with reference to the human brain neuron model and its tissue structure, which represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. For example, EU-backedSpiNNakerand BrainScaleS, Stanford'sNeurogrid, IBM'sTrueNorth, and Qualcomm'sZeroth.
TrueNorth is a brain-inspired chip that IBM has been developing for nearly 10 years. The US DARPA program has been funding IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released a second-generation brain-inspired chip called "TrueNorth." Compared with the first-generation chip, the performance of the TrueNorth chip increased dramatically: the number of neurons increased from 256 to 1 million, the number of programmable synapses increased from 262,144 to 256 million, and synaptic operations run at a total power consumption of 70 mW, or 20 mW per square centimeter. At the same time, each TrueNorth core occupies only 1/15 the volume of a first-generation core. IBM has since developed a prototype neuron computer that uses 16 TrueNorth chips and has real-time video processing capabilities.[14] The extremely high specifications and performance of the TrueNorth chip caused a great stir in the academic world at the time of its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French Inria collaborated to develop the first chip in the world to support a deep neural network processor architecture, the "Cambrian" chip.[15] The work received best-paper recognition at leading international conferences in the field of computer architecture, ASPLOS and MICRO, and its design method and performance have been recognized internationally. The chip stands as an outstanding representative of research on brain-inspired chips.
The human brain is a product of evolution. Although its structure and information processing mechanism are constantly optimized, compromises in the evolution process are inevitable. The cranial nervous system is a multi-scale structure. There are still several important problems in the mechanism of information processing at each scale, such as the fine connection structure of neuron scales and the mechanism of brain-scale feedback. Therefore, even a comprehensive calculation of the number of neurons and synapses is only 1/1000 of the size of the human brain, and it is still very difficult to study at the current level of scientific research.[16]Recent advances in brain simulation linked individual variability in human cognitiveprocessing speedandfluid intelligenceto thebalance of excitation and inhibitioninstructural brain networks,functional connectivity,winner-take-all decision-makingandattractorworking memory.[17]
Future research on cognitive brain computing models will need to model the brain information processing system based on the results of multi-scale brain neural system data analysis, construct a brain-inspired multi-scale neural network computing model, and simulate the brain's multiple modalities at multiple scales, including intelligent behavioral abilities such as perception, self-learning, memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data that is manually labeled on a large scale, and training models incurs substantial computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Most existing brain-inspired chips are still based on the von Neumann architecture, and most chip-manufacturing materials are still traditional semiconductors. The neural chip borrows only the most basic unit of brain information processing: mechanisms such as the fusion of storage and computation, the pulse-discharge mechanism, and the connection mechanisms between neurons, as well as the interplay between information-processing units at different scales, have not yet been integrated into the study of brain-inspired computing architectures. An important international trend now is to develop neural computing components such as brain-inspired memristors, memory containers, and sensory sensors based on new materials such as nanomaterials, thereby supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
|
https://en.wikipedia.org/wiki/Biologically-inspired_computing
|
Incoding theory, aconstant-weight code, also called anm-of-ncodeorm-out-of-ncode, is anerror detection and correctioncode where all codewords share the sameHamming weight.
Theone-hotcode and thebalanced codeare two widely used kinds of constant-weight code.
The theory is closely connected to that ofdesigns(such ast-designsandSteiner systems). Most of the work on this field ofdiscrete mathematicsis concerned withbinaryconstant-weight codes.
Binary constant-weight codes have several applications, includingfrequency hoppinginGSMnetworks.[1]Mostbarcodesuse a binary constant-weight code to simplify automatically setting the brightness threshold that distinguishes black and white stripes.
Mostline codesuse either a constant-weight code, or a nearly-constant-weightpaired disparity code.
In addition to use as error correction codes, the large space between code words can also be used in the design ofasynchronous circuitssuch asdelay insensitive circuits.
Constant-weight codes, likeBerger codes, can detect all unidirectional errors.
The central problem regarding constant-weight codes is the following: what is the maximum number of codewords in a binary constant-weight code with lengthn{\displaystyle n},Hamming distanced{\displaystyle d}, and weightw{\displaystyle w}? This number is calledA(n,d,w){\displaystyle A(n,d,w)}.
Apart from some trivial observations, it is generally impossible to compute these numbers in a straightforward way. Upper bounds are given by several important theorems such as thefirstandsecond Johnson bounds,[2]and better upper bounds can sometimes be found in other ways. Lower bounds are most often found by exhibiting specific codes, either with use of a variety of methods from discrete mathematics, or through heavy computer searching. A large table of such record-breaking codes was published in 1990,[3]and an extension to longer codes (but only for those values ofd{\displaystyle d}andw{\displaystyle w}which are relevant for the GSM application) was published in 2006.[1]
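For very small parameters, the "heavy computer searching" mentioned above can be done exhaustively. The sketch below is illustrative only; the function names are choices made here, and real record bounds require far more sophisticated methods:

```python
from itertools import combinations

# Brute-force search for A(n, d, w) at tiny parameters: enumerate all
# weight-w words of length n and grow the largest set whose pairwise
# Hamming distance is at least d.

def hamming(a, b):
    return bin(a ^ b).count("1")

def A(n, d, w):
    """Maximum size of a binary code with length n, min distance d, weight w."""
    words = [sum(1 << i for i in c) for c in combinations(range(n), w)]
    best = 0
    def grow(code, rest):
        nonlocal best
        best = max(best, len(code))
        for i, x in enumerate(rest):
            if all(hamming(x, y) >= d for y in code):
                grow(code + [x], rest[i + 1:])
    grow([], words)
    return best
```

For example, A(4, 2, 2) = 6, since any two distinct weight-2 words already differ in at least two positions, while A(5, 4, 2) = 2, since distance 4 between weight-2 words forces disjoint supports.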
A special case of constant weight codes are the one-of-Ncodes, that encodelog2N{\displaystyle \log _{2}N}bits in a code-word ofN{\displaystyle N}bits. The one-of-two code uses the code words 01 and 10 to encode the bits '0' and '1'. A one-of-four code can use the words 0001, 0010, 0100, 1000 in order to encode two bits 00, 01, 10, and 11. An example isdual rail encoding, and chain link[4]used in delay insensitive circuits. For these codes,n=N,d=2,w=1{\displaystyle n=N,~d=2,~w=1}andA(n,d,w)=n{\displaystyle A(n,d,w)=n}.
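Encoding and decoding a one-of-N code is straightforward, since a codeword is just a single set bit. A minimal sketch (function names are choices made here):

```python
# One-of-N (one-hot) sketch: encode an integer in [0, N) as an N-bit
# word with exactly one bit set, matching n = N, d = 2, w = 1 above.

def onehot_encode(value, N):
    """Map an integer in [0, N) to an N-bit codeword with one bit set."""
    assert 0 <= value < N
    return 1 << value

def onehot_decode(word, N):
    """Recover the value; reject words that are not valid codewords."""
    if bin(word).count("1") != 1 or word >= (1 << N):
        raise ValueError("not a one-hot codeword")
    return word.bit_length() - 1
```

Because every valid word has weight exactly 1, any single-bit error produces a word of weight 0 or 2 and is rejected by the decoder.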
Notable uses of one-hot codes include the biphase mark code, which uses a 1-of-2 code; pulse-position modulation, which uses a 1-of-n code; and address decoders.
Incoding theory, abalanced codeis abinaryforward error correctioncode for which each codeword contains an equal number of zero and one bits. Balanced codes have been introduced byDonald Knuth;[5]they are a subset of so-called unordered codes, which are codes having the property that the positions of ones in a codeword are never a subset of the positions of the ones in another codeword. Like all unordered codes, balanced codes are suitable for the detection of allunidirectional errorsin an encoded message. Balanced codes allow for particularly efficient decoding, which can be carried out in parallel.[5][6][7]
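The unidirectional-error property can be checked directly: any error that flips bits in one direction only changes the codeword's weight, so a weight check catches it. A small illustrative sketch (function names are choices made here):

```python
from itertools import combinations

# Sketch: balanced (and, more generally, constant-weight) codes detect
# all unidirectional errors, because every such error strictly raises
# or lowers the codeword's weight.

def weight(word):
    return bin(word).count("1")

def is_valid(word, n):
    """Balanced codeword of even length n: exactly n // 2 ones."""
    return weight(word) == n // 2

def unidirectional_errors(word, n, kind):
    """All words reachable by flipping one or more bits in one direction."""
    ones = [i for i in range(n) if word >> i & 1]
    zeros = [i for i in range(n) if not word >> i & 1]
    flips = zeros if kind == "0->1" else ones
    for r in range(1, len(flips) + 1):
        for subset in combinations(flips, r):
            e = word
            for i in subset:
                e ^= 1 << i
            yield e
```

Every word produced by `unidirectional_errors` fails the weight check, which is exactly the detection guarantee stated above.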
Notable uses of balanced-weight codes include the biphase mark code, which uses a 1-of-2 code; 6b/8b encoding, which uses a 4-of-8 code; the Hadamard code, which is a 2k−1{\displaystyle 2^{k-1}}-of-2k{\displaystyle 2^{k}}code (except for the zero codeword); and the three-of-six code.
The 3-wire lane encoding used in MIPI C-PHY can be considered a generalization of constant-weight codes to ternary: each wire transmits a ternary signal, and at any one instant one of the three wires is transmitting a low, one a middle, and one a high signal.[8]
Anm-of-ncodeis a separableerror detectioncode with a code word length ofnbits, where each code word contains exactlyminstances of a "one". A single bit error will cause the code word to have eitherm+ 1orm− 1"ones". An examplem-of-ncode is the2-of-5 codeused by theUnited States Postal Service.
The simplest implementation is to append a string of ones to the original data until it containsmones, then append zeros to create a code of lengthn.
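The simple construction just described can be sketched in a few lines. The function name and the parameter choices in the usage note are illustrative:

```python
# Sketch of the simple m-of-n construction described above: pad the
# data with ones until m ones are present, then with zeros to length n.

def m_of_n_encode(data_bits, m, n):
    """data_bits: list of 0/1. Returns an n-bit codeword with exactly m ones."""
    ones_needed = m - sum(data_bits)
    assert 0 <= ones_needed <= n - len(data_bits), "data cannot be encoded"
    word = data_bits + [1] * ones_needed
    word += [0] * (n - len(word))
    return word
```

For instance, encoding the data bits 1, 0 into a 2-of-5 code appends one more 1 and then zeros, giving 1 0 1 0 0; since the data occupies a fixed prefix, the code remains separable.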
Other notable uses of constant-weight codes, beyond the one-hot and balanced-weight codes already mentioned above, include Code 39, which uses a 3-of-9 code; bi-quinary coded decimal, which uses a 2-of-7 code; and the 2-of-5 code.
|
https://en.wikipedia.org/wiki/Constant-weight_code
|
Atwo-out-of-five codeis aconstant-weight codethat provides exactly ten possible combinations of two bits, and is thus used for representing thedecimal digitsusing fivebits.[1]Each bit is assigned a weight, such that the set bits sum to the desired value, with an exception for zero.
According toFederal Standard 1037C:
The weights give a unique encoding for most digits, but allow two encodings for 3: 0+3 or 10010 and 1+2 or 01100. The former is used to encode the digit 3, and the latter is used to represent the otherwise unrepresentable zero.
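The weighted encoding can be generated mechanically. The sketch below assumes column weights 0-1-2-3-6, which is consistent with the "0+3 or 10010 and 1+2 or 01100" example above; the function name is a choice made here:

```python
from itertools import combinations

# Build the digit -> codeword table for a two-out-of-five code with
# column weights 0-1-2-3-6. Every pair of set bits sums to a digit;
# the spare second encoding of 3 is reassigned to represent zero.

WEIGHTS = [0, 1, 2, 3, 6]

def codewords():
    """Map each decimal digit 0-9 to its five-bit codeword string."""
    table = {}
    for i, j in combinations(range(5), 2):
        bits = "".join("1" if k in (i, j) else "0" for k in range(5))
        value = WEIGHTS[i] + WEIGHTS[j]
        if value == 3 and bits == "01100":   # 1+2: the duplicate sum
            value = 0                        # ...represents zero instead
        table[value] = bits
    return table
```

The ten two-element subsets of the five columns yield exactly ten codewords, one per decimal digit, each of weight two.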
The IBM 7070, IBM 7072, and IBM 7074 computers used this code to represent each of the ten decimal digits in a machine word, although they numbered the bit positions 0-1-2-3-4 rather than labelling them with weights. Each word also had a sign flag, encoded using a two-out-of-three code, that could be A (Alphanumeric), − (Minus), or + (Plus). When copied to a digit, the three bits were placed in bit positions 0-3-4 (thus producing the numeric values 3, 6, and 9, respectively).
A variant is theUnited States Postal ServicePOSTNETbarcode, used to represent theZIP Codefor automated mail sorting and routing equipment. This uses two tall bars as ones and three short bars as zeros. Here, the weights assigned to the bit positions are 7-4-2-1-0. Again, zero is encoded specially, using the 7+4 combination (binary 11000) that would naturally encode 11. This method was also used in North American telephonemulti-frequencyandcrossbar switchingsystems.[3]
The USPSPostal Alpha Numeric Encoding Technique(PLANET) uses the same weights, but with the opposite bar-height convention.
TheCode 39barcode uses weights 1-2-4-7-0 (i.e.LSBfirst,Parity bitlast) for the widths of its bars, but it also encodes two bits of extra information in the spacing between bars. The || ||| spacing is used for digits.
The following table representsdecimaldigits from 0 to 9 in various two-out-of-five code systems:
The requirement that exactly two bits be set is strictly stronger than a parity check; like all constant-weight codes, a two-out-of-five code can detect not only any single-bit error, but any unidirectional error: cases in which all the individual bit errors are of a single type (all 0→1 or all 1→0).
|
https://en.wikipedia.org/wiki/Two-out-of-five_code
|
Bi-quinary coded decimalis anumeral encoding schemeused in manyabacusesand in someearly computers, notably theColossus.[2]The termbi-quinaryindicates that the code comprises both a two-state (bi) and a five-state (quinary) component. The encoding resembles that used by many abacuses, with four beads indicating the five values either from 0 through 4 or from 5 through 9 and another bead indicating which of those ranges (which can alternatively be thought of as +5).
Several human languages, most notablyFulaandWolofalso use biquinary systems. For example, the Fula word for 6,jowi e go'o, literally meansfive [plus] one.Roman numeralsuse a symbolic, rather than positional, bi-quinary base, even thoughLatinis completely decimal.
The Korean finger counting systemChisanbopuses a bi-quinary system, where each finger represents a one and a thumb represents a five, allowing one to count from 0 to 99 with two hands.
One advantage of a bi-quinary encoding scheme on digital computers is that a valid digit must have exactly two bits set (one in the binary field and one in the quinary field), providing a built-in check of whether the digit is valid. (Stuck bits happened frequently with computers using mechanical relays.)
Several different representations of bi-quinary coded decimal have been used by different machines. The two-state component is encoded as one or twobits, and the five-state component is encoded using three to five bits. Some examples are:
TheIBM 650uses seven bits: twobibits (0 and 5) and fivequinarybits (0, 1, 2, 3, 4), with error checking.
Exactly onebibit and onequinarybit is set in a valid digit. The bi-quinary encoding of the internal workings of the machine are evident in the arrangement of its lights – thebibits form the top of a T for each digit, and thequinarybits form the vertical stem.
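The IBM 650 digit format described above can be sketched as two one-hot fields, with the built-in validity check that exactly one bit in each field is set. The function names below are choices made here:

```python
# Sketch of the IBM 650 seven-bit bi-quinary digit: one of two "bi"
# bits (values 0 and 5) and one of five "quinary" bits (values 0-4)
# are set in every valid digit.

def encode(digit):
    """Return (bi, quinary) one-hot bit lists for a decimal digit."""
    assert 0 <= digit <= 9
    bi = [int(digit // 5 == i) for i in range(2)]   # selects 0 or 5
    qi = [int(digit % 5 == i) for i in range(5)]    # selects 0..4
    return bi, qi

def is_valid(bi, qi):
    """Built-in check: exactly one bi bit and one quinary bit set."""
    return sum(bi) == 1 and sum(qi) == 1

def decode(bi, qi):
    assert is_valid(bi, qi)
    return 5 * bi.index(1) + qi.index(1)
```

A stuck bit in either field changes the number of set bits there, so `is_valid` rejects it, which is the error-checking behavior described above.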
The Remington Rand 409 has five bits: one quinary bit (tube) for each of 1, 3, 5, and 7, only one of which would be on at a time. The fifth, bi, bit represented 9 if none of the others were on; otherwise it added 1 to the value represented by the other quinary bit. The machine was sold as the two models UNIVAC 60 and UNIVAC 120.
TheUNIVAC Solid Stateuses four bits: onebibit (5), three binary codedquinarybits (4 2 1)[4][5][6][7][8][9]and oneparity check bit
TheUNIVAC LARChas four bits:[9]onebibit (5), threeJohnson counter-codedquinarybits and one parity check bit.
|
https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal
|
Thereflected binary code(RBC), also known asreflected binary(RB) orGray codeafterFrank Gray, is an ordering of thebinary numeral systemsuch that two successive values differ in only onebit(binary digit).
For example, the representation of the decimal value "1" in binary would normally be "001", and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output fromelectromechanicalswitchesand to facilitateerror correctionin digital communications such asdigital terrestrial televisionand somecable TVsystems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice.[3]
Many devices indicate position by closing and opening switches. If that device usesnatural binary codes, positions 3 and 4 are next to each other but all three bits of the binary representation differ:
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even withoutkeybounce, the transition might look like011—001—101—100. When the switches appear to be in position001, the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into asequentialsystem, possibly viacombinational logic, then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set ofintegers, or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known asunit-distance,[4][5][6][7][8]single-distance,single-step,monostrophic[9][10][7][8]orsyncopic codes,[9]in reference to theHamming distanceof 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particularbinarycode for non-negative integers, thebinary-reflected Gray code, orBRGC.Bell LabsresearcherGeorge R. Stibitzdescribed such a code in a 1941 patent application, granted in 1943.[11][12][13]Frank Grayintroduced the termreflected binary codein his 1947 patent application, remarking that the code had "as yet no recognized name".[14]He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i-th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n-bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code.[15]
Despite the fact that Stibitz described this code[11][12][13] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code";[16][17] one of those also lists "minimum error code" and "cyclic permutation code" among the names.[17] A 1954 patent application refers to "the Bell Telephone Gray code".[18] Other names include "cyclic binary code",[12] "cyclic progression code",[19][12] "cyclic permuting binary"[20] or "cyclic permuted binary" (CPB).[21][22]
The Gray code is sometimes misattributed to the 19th-century electrical device inventor Elisha Gray.[13][23][24][25]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872.[26][13]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883.[27][28][29][30] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.[31]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American.[32]
The code also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875[33] or 1876,[34][35] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order,[36][37][38] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code.[13] This code became known as Baudot code[39] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932.[40][41][38]
About the same time, the German-Austrian Otto Schäffler[de][42] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874.[43][13]
Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube-based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953,[14] and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code.[44]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders (absolute encoders and quadrature encoders) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms.[15] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes have also been used in labelling the axes of Karnaugh maps since 1953[45][46][47] as well as in Händler circle graphs since 1958,[48][49][50][51] both graphical methods for logic circuit minimization.
In modern digital communications, 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction. For example, in a digital modulation scheme such as QAM, where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise.
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code, which flips every bit equally often, can be constructed.[52] Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941.[11][12][13]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains.[53] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders; however, the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code,[nb 1] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous.[54]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code,[55] add one to it with a standard binary adder, and then convert the result back to Gray code.[56] Other methods of counting in Gray code are discussed in a report by Robert W. Doran, including taking the output from the first latches of the master-slave flip-flops in a binary ripple counter.[57]
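The convert–add–convert method can be sketched in a few lines of C (function names are illustrative, not taken from the cited report):

```c
#include <assert.h>

/* Binary to reflected Gray code. */
static unsigned gray_encode(unsigned b)
{
    return b ^ (b >> 1);
}

/* Gray code back to binary: XOR together all right shifts of the code. */
static unsigned gray_decode(unsigned g)
{
    unsigned b = 0;
    for (; g != 0; g >>= 1)
        b ^= g;
    return b;
}

/* One way to increment a Gray-coded count: convert to binary,
   add one with an ordinary adder, and convert back. */
static unsigned gray_increment(unsigned g)
{
    return gray_encode(gray_decode(g) + 1u);
}
```

In hardware the same three stages appear as a Gray-to-binary decoder, a binary incrementer, and a binary-to-Gray encoder in front of the count register.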
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs.[58][59]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0, prefixing the entries in the reflected list with a binary 1, and then concatenating the original list with the reversed list.[13] For example, generating the n = 3 list from the n = 2 list:
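The reflect-and-prefix rule can be written down directly. The following sketch (our own illustration) iterates the recursion in place, doubling the list once per bit:

```c
#include <assert.h>
#include <stddef.h>

/* Fill out[0 .. (1 << n) - 1] with the n-bit binary-reflected Gray code,
   applying the reflect-and-prefix rule once per bit. */
static void brgc(int n, unsigned out[])
{
    size_t len = 1;
    out[0] = 0u;
    for (int bit = 0; bit < n; bit++) {
        /* Append the current list in reverse order, setting bit `bit`
           in the appended half; the original half keeps its 0 prefix. */
        for (size_t i = 0; i < len; i++)
            out[2 * len - 1 - i] = out[i] | (1u << bit);
        len *= 2;
    }
}
```

For n = 3 this produces 000, 001, 011, 010, 110, 111, 101, 100, matching the list built by hand above.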
The one-bit Gray code is $G_1 = (0, 1)$. This can be thought of as built recursively as above from a zero-bit Gray code $G_0 = (\Lambda)$ consisting of a single entry of zero length. This iterative process of generating $G_{n+1}$ from $G_n$ makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n-th Gray code is obtained by computing $n \oplus \lfloor n/2 \rfloor$. Prepending a 0 bit leaves the order of the code words unchanged; prepending a 1 bit reverses the order of the code words. If the bits at position $i$ of the codewords are inverted, the order of neighbouring blocks of $2^i$ codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of two neighbouring codewords is reversed:
If bit 1 is inverted, blocks of 2 codewords change order:
If bit 2 is inverted, blocks of 4 codewords reverse order:
Thus, performing an exclusive or on a bit $b_i$ at position $i$ with the bit $b_{i+1}$ at position $i+1$ leaves the order of codewords intact if $b_{i+1} = 0$, and reverses the order of blocks of $2^{i+1}$ codewords if $b_{i+1} = 1$. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit, so it cannot be performed in parallel. Assuming $g_i$ is the $i$-th Gray-coded bit ($g_0$ being the most significant bit), and $b_i$ is the $i$-th binary-coded bit ($b_0$ being the most significant bit), the reverse translation can be given recursively: $b_0 = g_0$, and $b_i = g_i \oplus b_{i-1}$. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the code $\mathrm{code}_0 = 0$, and at step $i > 0$ find the bit position of the least significant 1 in the binary representation of $i$ and flip the bit at that position in the previous code $\mathrm{code}_{i-1}$ to get the next code $\mathrm{code}_i$. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ...[nb 2] See find first set for efficient algorithms to compute these values.
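A minimal sketch of this iterative rule (the helper name is ours; a production version would use an efficient find-first-set primitive as noted above):

```c
#include <assert.h>

/* Given the previous code and the step number i > 0, flip the bit at the
   position of the least significant 1 in the binary representation of i. */
static unsigned next_gray(unsigned code, unsigned i)
{
    unsigned pos = 0;
    while (((i >> pos) & 1u) == 0u)
        pos++;                      /* naive find-first-set of i */
    return code ^ (1u << pos);
}
```

Starting from code 0, steps 1 through 7 produce 1, 3, 2, 6, 7, 5, 4 — the 3-bit reflected code.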
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist.[60][55][nb 1]
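The original code block is not reproduced in this text; a standard formulation of these conversions (including a logarithmic-step variant of the decoder for 32-bit values) looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Convert an unsigned binary number to its reflected Gray code. */
uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Convert a reflected Gray code to binary, one bit at a time:
   each binary bit is the XOR of all Gray bits at or above it. */
uint32_t gray_to_binary(uint32_t num)
{
    uint32_t mask = num;
    while (mask) {
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

/* Faster decoder: the loop above computes a prefix XOR, which can be
   done in five shift-XOR steps for 32-bit operands. */
uint32_t gray_to_binary32(uint32_t num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}
```

Both decoders compute the same prefix XOR; the second simply collapses the bit-by-bit loop into a fixed number of steps.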
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set. If MASK is the constant binary string of ones ending in a single zero digit, then carry-less multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2^n, if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values either at the beginning and the end, or in the middle.[61] OEIS sequence A290772[62] gives the number of possible Gray sequences of length 2n that include zero and use the minimum number of bits.
0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022, 100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100, 200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n-ary Gray code, also known as a non-Boolean Gray code. As the name implies, this type of Gray code uses non-Boolean values in its encodings.
For example, a 3-ary (ternary) Gray code would use the values 0, 1, 2.[31] The (n, k)-Gray code is the n-ary Gray code with k digits.[63] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The (n, k)-Gray code may be constructed recursively, like the BRGC, or may be constructed iteratively. An algorithm to iteratively generate the (n, k)-Gray code is presented (in C):
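The C routine referred to above is not reproduced in this text. The following sketch is a standard shift-based construction (assumed, not verbatim from the cited source); note that it produces a cyclic code in which a digit may change by wrapping from n − 1 to 0, so its output sequence differs from the reflected listing given above.

```c
#include <assert.h>

#define MAX_DIGITS 16

/* Convert `value` to a (base, digits)-Gray code; gray[digits - 1] is the
   most significant digit. Each digit is offset by a running shift equal
   (mod base) to minus the sum of the higher Gray digits, so consecutive
   values differ in exactly one digit (possibly wrapping base-1 -> 0). */
static void to_gray(unsigned base, unsigned digits, unsigned value,
                    unsigned gray[])
{
    unsigned baseN[MAX_DIGITS];
    unsigned i;

    assert(digits <= MAX_DIGITS);

    /* Ordinary base-N digits of value, least significant first. */
    for (i = 0; i < digits; i++) {
        baseN[i] = value % base;
        value /= base;
    }

    /* Gray digits, most significant first. */
    unsigned shift = 0;
    while (i-- > 0) {
        gray[i] = (baseN[i] + shift) % base;
        shift = shift + base - gray[i];   /* subtract gray[i], mod base */
    }
}
```

For base 3 and 2 digits this yields the cyclic sequence 00, 01, 02, 12, 10, 11, 21, 22, 20, in which the wrap 22 → 20 → 00 also changes only one digit.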
There are other Gray code algorithms for (n, k)-Gray codes. The (n, k)-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan,[63] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also the skew binary number system, a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity".[52] In balanced Gray codes, the numbers of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R-ary complete Gray cycle having transition sequence $(\delta_k)$; the transition counts (spectrum) of G are the collection of integers defined by

$$\lambda_k = \left|\{\, j \in \mathbb{Z}_{R^n} : \delta_j = k \,\}\right|, \quad \text{for } k \in \mathbb{Z}_n$$
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have $\lambda_k = R^n/n$ for all k. Clearly, when $R = 2$, such codes exist only if n is a power of 2.[64] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either $2\lfloor 2^n/(2n) \rfloor$ or $2\lceil 2^n/(2n) \rceil$.[52] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two.[65]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced:[52]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight:[52]
We will now show a construction[66] and implementation[67] for well-balanced binary Gray codes which allows us to generate an n-digit balanced Gray code for every n. The main principle is to inductively construct an (n + 2)-digit Gray code $G'$ given an n-digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of $G = g_0, \ldots, g_{2^n-1}$ into an even number L of non-empty blocks of the form

$$\{g_0\}, \{g_1, \ldots, g_{k_2}\}, \{g_{k_2+1}, \ldots, g_{k_3}\}, \ldots, \{g_{k_{L-2}+1}, \ldots, g_{-2}\}, \{g_{-1}\}$$

where $k_1 = 0$, $k_{L-1} = -2$, and $k_L \equiv -1 \pmod{2^n}$ (negative indices being taken modulo $2^n$). This partition induces an (n + 2)-digit Gray code given by
If we define the transition multiplicities

$$m_i = \left|\{\, j : \delta_{k_j} = i,\ 1 \le j \le L \,\}\right|$$

to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the (n + 2)-digit Gray code induced by this partition the transition spectrum $\lambda'_i$ is

$$\lambda'_i = \begin{cases} 4\lambda_i - 2m_i, & \text{if } 0 \le i < n \\ L, & \text{otherwise} \end{cases}$$
The delicate part of this construction is to find an adequate partitioning of a balanced n-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit $i$ transition and splitting another block at another digit $i$ transition produces a different Gray code with exactly the same transition spectrum $\lambda'_i$, so one may, for example,[65] designate the first $m_i$ transitions at digit $i$ as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod 4$ and $R^n \equiv 0 \pmod n$, and this construction can be extended to the R-ary case as well.[66]
Long run (or maximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible.[68]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors.[69]If we define theweightof a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e.

$$V_n(i) = \{\, v \in V_n : v \text{ has weight } i \,\}$$

for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$. A monotonic Gray code is then a Hamiltonian path in $Q_n$ such that whenever $\delta_1 \in E_n(i)$ comes before $\delta_2 \in E_n(j)$ in the path, then $i \le j$.
An elegant construction of monotonic n-digit Gray codes for any n is based on the idea of recursively building subpaths $P_{n,j}$ of length $2\binom{n}{j}$ having edges in $E_n(j)$.[69] We define $P_{1,0} = (0, 1)$, $P_{n,j} = \emptyset$ whenever $j < 0$ or $j \ge n$, and

$$P_{n+1,j} = 1 P_{n,j-1}^{\pi_n},\ 0 P_{n,j}$$

otherwise. Here, $\pi_n$ is a suitably defined permutation and $P^{\pi}$ refers to the path P with its coordinates permuted by $\pi$. These paths give rise to two monotonic n-digit Gray codes $G_n^{(1)}$ and $G_n^{(2)}$ given by

$$G_n^{(1)} = P_{n,0} P_{n,1}^{R} P_{n,2} P_{n,3}^{R} \cdots \quad \text{and} \quad G_n^{(2)} = P_{n,0}^{R} P_{n,1} P_{n,2}^{R} P_{n,3} \cdots$$

The choice of $\pi_n$ which ensures that these codes are indeed Gray codes turns out to be $\pi_n = E^{-1}\left(\pi_{n-1}^2\right)$. The first few values of $P_{n,j}$ are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O(n) time. The algorithm is most easily described using coroutines.
Monotonic codes have an interesting connection to the Lovász conjecture, which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph $Q_{2n+1}(n)$ is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for $n \le 15$, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, where N is the number of vertices in the middle-level subgraph.[70]
Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once.[71] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors on stage) the actor being dequeued is always the one who was enqueued first.[71] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming.[13] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9,500 solutions for the case n = 7 have been found.[72]
Snake-in-the-box codes, or snakes, are the sequences of nodes of induced paths in an n-dimensional hypercube graph, and coil-in-the-box codes,[73] or coils, are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s;[5] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding[74][75] and refined by Hiltgen, Paterson and Brandestini in "Single-track Gray codes" (1996).[76][77] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix, each column is a cyclic shift of the first column.[78]
The name comes from their use with rotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1. To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke[79] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible.[74] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions.[80] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes.[81] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes, also called De Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving.[82]
Since this 30-degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams,[83][user-generated source?] based on previous work,[80] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device, which was published on the site Thingiverse. This device[84] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors inquadrature amplitude modulation(QAM) adjacent points in theconstellation. In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits.[85]
Two-dimensional Gray codes also have uses inlocation identificationschemes, where the code would be applied to area maps such as aMercator projectionof the earth's surface and an appropriate cyclic two-dimensional distance function such as theMannheim metricwould be used to calculate the distance between two encoded locations, thereby combining the characteristics of theHamming distancewith the cyclic continuation of a Mercator projection.[86]
If a subsection of a specific code value is extracted from that value (for example, the last 3 bits of a 4-bit Gray code), the resulting code will be an "excess Gray code". This code counts backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not overflow, as classic binary encodings do, when incremented past the "highest" value.
Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow; they count backwards as the original 4-bit code is increased further.
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore check whether the sensor produces those multiple values encoded as one single Gray code or as separate ones; otherwise the values might appear to count backwards when an "overflow" is expected.
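The backward-counting behaviour of the extracted bits can be checked with a short sketch, assuming the standard binary-reflected Gray encoding n XOR (n >> 1) (the function name is illustrative):

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

# 4-bit Gray codes of 0..15, then the last 3 bits of each.
codes = [gray(n) for n in range(16)]
low3 = [c & 0b111 for c in codes]

# gray(7) = 0100, gray(8) = 1100: only the top bit changes, and as n
# keeps increasing the low 3 bits retrace the 3-bit sequence in reverse.
print([f"{c:04b}" for c in codes[6:10]])   # ['0101', '0100', '1100', '1101']
print(low3[8:] == low3[:8][::-1])          # True
```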
The bijective mapping { 0 ↔00, 1 ↔01, 2 ↔11, 3 ↔10} establishes anisometrybetween themetric spaceover thefinite fieldZ22{\displaystyle \mathbb {Z} _{2}^{2}}with the metric given by theHamming distanceand the metric space over thefinite ringZ4{\displaystyle \mathbb {Z} _{4}}(the usualmodular arithmetic) with the metric given by theLee distance. The mapping is suitably extended to an isometry of theHamming spacesZ22m{\displaystyle \mathbb {Z} _{2}^{2m}}andZ4m{\displaystyle \mathbb {Z} _{4}^{m}}. Its importance lies in establishing a correspondence between various "good" but not necessarilylinear codesas Gray-map images inZ22{\displaystyle \mathbb {Z} _{2}^{2}}ofring-linear codesfromZ4{\displaystyle \mathbb {Z} _{4}}.[87][88]
There are a number of binary codes similar to Gray codes, including:
The followingbinary-coded decimal(BCD) codes are Gray code variants as well:
|
https://en.wikipedia.org/wiki/Gray_code
|
Inmathematics, theKronecker delta(named afterLeopold Kronecker) is afunctionof twovariables, usually just non-negativeintegers. The function is 1 if the variables are equal, and 0 otherwise:δij={0ifi≠j,1ifi=j.{\displaystyle \delta _{ij}={\begin{cases}0&{\text{if }}i\neq j,\\1&{\text{if }}i=j.\end{cases}}}or with use ofIverson brackets:δij=[i=j]{\displaystyle \delta _{ij}=[i=j]\,}For example,δ12=0{\displaystyle \delta _{12}=0}because1≠2{\displaystyle 1\neq 2}, whereasδ33=1{\displaystyle \delta _{33}=1}because3=3{\displaystyle 3=3}.
The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above.
Inlinear algebra, then×n{\displaystyle n\times n}identity matrixI{\displaystyle \mathbf {I} }has entries equal to the Kronecker delta:Iij=δij{\displaystyle I_{ij}=\delta _{ij}}wherei{\displaystyle i}andj{\displaystyle j}take the values1,2,⋯,n{\displaystyle 1,2,\cdots ,n}, and theinner productofvectorscan be written asa⋅b=∑i,j=1naiδijbj=∑i=1naibi.{\displaystyle \mathbf {a} \cdot \mathbf {b} =\sum _{i,j=1}^{n}a_{i}\delta _{ij}b_{j}=\sum _{i=1}^{n}a_{i}b_{i}.}Here theEuclidean vectorsare defined asn-tuples:a=(a1,a2,…,an){\displaystyle \mathbf {a} =(a_{1},a_{2},\dots ,a_{n})}andb=(b1,b2,...,bn){\displaystyle \mathbf {b} =(b_{1},b_{2},...,b_{n})}and the last step is obtained by using the values of the Kronecker delta to reduce the summation overj{\displaystyle j}.
It is common foriandjto be restricted to a set of the form{1, 2, ...,n}or{0, 1, ...,n− 1}, but the Kronecker delta can be defined on an arbitrary set.
The following equations are satisfied:∑jδijaj=ai,∑iaiδij=aj,∑kδikδkj=δij.{\displaystyle {\begin{aligned}\sum _{j}\delta _{ij}a_{j}&=a_{i},\\\sum _{i}a_{i}\delta _{ij}&=a_{j},\\\sum _{k}\delta _{ik}\delta _{kj}&=\delta _{ij}.\end{aligned}}}Therefore, the matrixδcan be regarded as the identity matrix.
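These identities can be checked directly with NumPy; a minimal sketch (variable names are illustrative):

```python
import numpy as np

n = 4
# Build delta_ij elementwise; the result is exactly the identity matrix.
delta = np.array([[1 if i == j else 0 for j in range(n)] for i in range(n)])
assert np.array_equal(delta, np.eye(n, dtype=int))

a = np.array([2.0, -1.0, 0.5, 3.0])
# sum_j delta_ij a_j = a_i   and   sum_i a_i delta_ij = a_j
assert np.allclose(delta @ a, a)
assert np.allclose(a @ delta, a)
# sum_k delta_ik delta_kj = delta_ij
assert np.array_equal(delta @ delta, delta)
```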
Another useful representation is the following form:δnm=limN→∞1N∑k=1Ne2πikN(n−m){\displaystyle \delta _{nm}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{k=1}^{N}e^{2\pi i{\frac {k}{N}}(n-m)}}This can be derived using the formula for thegeometric series.
Using theIverson bracket:δij=[i=j].{\displaystyle \delta _{ij}=[i=j].}
Often, a single-argument notationδi{\displaystyle \delta _{i}}is used, which is equivalent to settingj=0{\displaystyle j=0}:δi=δi0={0,ifi≠01,ifi=0{\displaystyle \delta _{i}=\delta _{i0}={\begin{cases}0,&{\text{if }}i\neq 0\\1,&{\text{if }}i=0\end{cases}}}
Inlinear algebra, it can be thought of as atensor, and is writtenδji{\displaystyle \delta _{j}^{i}}. Sometimes the Kronecker delta is called the substitution tensor.[1]
In the study ofdigital signal processing(DSP), the Kronecker delta function sometimes means the unit sample functionδ[n]{\displaystyle \delta [n]}, which represents a special case of the 2-dimensional Kronecker delta functionδij{\displaystyle \delta _{ij}}where the Kronecker indices include the number zero, and where one of the indices is zero:δ[n]≡δn0≡δ0nwhere−∞<n<∞{\displaystyle \delta [n]\equiv \delta _{n0}\equiv \delta _{0n}~~~{\text{where}}-\infty <n<\infty }
Or more generally where:δ[n−k]≡δ[k−n]≡δnk≡δknwhere−∞<n<∞,−∞<k<∞{\displaystyle \delta [n-k]\equiv \delta [k-n]\equiv \delta _{nk}\equiv \delta _{kn}{\text{where}}-\infty <n<\infty ,-\infty <k<\infty }
For discrete-time signals, it is conventional to place a single integer index in square braces; in contrast, the Kronecker delta,δij{\displaystyle \delta _{ij}}, can have any number of indices. InLTI systemtheory, the discrete unit sample function is typically used as an input to a discrete-time system for determining theimpulse responsefunction of the system, which characterizes the system for any general input. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from anEinstein summation convention.
The discrete unit sample function is more simply defined as:δ[n]={1n=00nis another integer{\displaystyle \delta [n]={\begin{cases}1&n=0\\0&n{\text{ is another integer}}\end{cases}}}
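As a sketch of the unit sample's role in LTI system theory, feeding δ[n] into a discrete-time filter returns the filter's impulse response (the 3-tap FIR coefficients below are an arbitrary illustration, not from the text):

```python
def unit_sample(n: int) -> int:
    """delta[n]: 1 at n == 0, 0 for every other integer."""
    return 1 if n == 0 else 0

h = [0.5, 0.3, 0.2]   # hypothetical FIR filter coefficients

def fir(x):
    """y[n] = sum_k h[k] * x[n - k] (causal convolution)."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k)
            for n in range(len(x))]

x = [unit_sample(n) for n in range(6)]   # [1, 0, 0, 0, 0, 0]
print(fir(x))                            # [0.5, 0.3, 0.2, 0.0, 0.0, 0.0]
```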
In comparison, incontinuous-time systemstheDirac delta functionis often confused with both the Kronecker delta function and the unit sample function. The Dirac delta is defined as:{∫−ε+εδ(t)dt=1∀ε>0δ(t)=0∀t≠0{\displaystyle {\begin{cases}\int _{-\varepsilon }^{+\varepsilon }\delta (t)dt=1&\forall \varepsilon >0\\\delta (t)=0&\forall t\neq 0\end{cases}}}
Unlike the Kronecker delta functionδij{\displaystyle \delta _{ij}}and the unit sample functionδ[n]{\displaystyle \delta [n]}, the Dirac delta functionδ(t){\displaystyle \delta (t)}does not have an integer index; instead, it takes a single continuous real valuet.
In continuous-time systems, the term "unit impulse function" is used to refer to theDirac delta functionδ(t){\displaystyle \delta (t)}or, in discrete-time systems, the Kronecker delta functionδ[n]{\displaystyle \delta [n]}.
The Kronecker delta has the so-calledsiftingproperty that forj∈Z{\displaystyle j\in \mathbb {Z} }:∑i=−∞∞aiδij=aj.{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ij}=a_{j}.}and if the integers are viewed as ameasure space, endowed with thecounting measure, then this property coincides with the defining property of theDirac delta function∫−∞∞δ(x−y)f(x)dx=f(y),{\displaystyle \int _{-\infty }^{\infty }\delta (x-y)f(x)\,dx=f(y),}and in fact Dirac's delta was named after the Kronecker delta because of this analogous property.[2]In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". And by convention,δ(t){\displaystyle \delta (t)}generally indicates continuous time (Dirac), whereas arguments likei{\displaystyle i},j{\displaystyle j},k{\displaystyle k},l{\displaystyle l},m{\displaystyle m}, andn{\displaystyle n}are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus:δ[n]{\displaystyle \delta [n]}. The Kronecker delta is not the result of directly sampling the Dirac delta function.
The Kronecker delta forms the multiplicativeidentity elementof anincidence algebra.[3]
Inprobability theoryandstatistics, the Kronecker delta andDirac delta functioncan both be used to represent adiscrete distribution. If thesupportof a distribution consists of pointsx={x1,⋯,xn}{\displaystyle \mathbf {x} =\{x_{1},\cdots ,x_{n}\}}, with corresponding probabilitiesp1,⋯,pn{\displaystyle p_{1},\cdots ,p_{n}}, then theprobability mass functionp(x){\displaystyle p(x)}of the distribution overx{\displaystyle \mathbf {x} }can be written, using the Kronecker delta, asp(x)=∑i=1npiδxxi.{\displaystyle p(x)=\sum _{i=1}^{n}p_{i}\delta _{xx_{i}}.}
Equivalently, theprobability density functionf(x){\displaystyle f(x)}of the distribution can be written using the Dirac delta function asf(x)=∑i=1npiδ(x−xi).{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
Under certain conditions, the Kronecker delta can arise from sampling a Dirac delta function. For example, if a Dirac delta impulse occurs exactly at a sampling point and is ideally lowpass-filtered (with cutoff at the critical frequency) per theNyquist–Shannon sampling theorem, the resulting discrete-time signal will be a Kronecker delta function.
If it is considered as a type(1,1){\displaystyle (1,1)}tensor, the Kronecker tensor can be writtenδji{\displaystyle \delta _{j}^{i}}with acovariantindexj{\displaystyle j}andcontravariantindexi{\displaystyle i}:δji={0(i≠j),1(i=j).{\displaystyle \delta _{j}^{i}={\begin{cases}0&(i\neq j),\\1&(i=j).\end{cases}}}
This tensor represents:
Thegeneralized Kronecker deltaormulti-index Kronecker deltaof order2p{\displaystyle 2p}is a type(p,p){\displaystyle (p,p)}tensor that is completelyantisymmetricin itsp{\displaystyle p}upper indices, and also in itsp{\displaystyle p}lower indices.
Two definitions that differ by a factor ofp!{\displaystyle p!}are in use. Below, the version presented has nonzero components scaled to be±1{\displaystyle \pm 1}. The second version has nonzero components that are±1/p!{\displaystyle \pm 1/p!}, with consequent changes in the scaling factors of formulae; for example, the scaling factors of1/p!{\displaystyle 1/p!}in§ Properties of the generalized Kronecker deltabelow disappear.[4]
In terms of the indices, the generalized Kronecker delta is defined as:[5][6]δν1…νpμ1…μp={−1ifν1…νpare distinct integers and are an even permutation ofμ1…μp−1ifν1…νpare distinct integers and are an odd permutation ofμ1…μp−0in all other cases.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{cases}{\phantom {-}}1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an even permutation of }}\mu _{1}\dots \mu _{p}\\-1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an odd permutation of }}\mu _{1}\dots \mu _{p}\\{\phantom {-}}0&\quad {\text{in all other cases}}.\end{cases}}}
LetSp{\displaystyle \mathrm {S} _{p}}be thesymmetric groupof degreep{\displaystyle p}, then:δν1…νpμ1…μp=∑σ∈Spsgn(σ)δνσ(1)μ1⋯δνσ(p)μp=∑σ∈Spsgn(σ)δν1μσ(1)⋯δνpμσ(p).{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{\sigma (1)}}^{\mu _{1}}\cdots \delta _{\nu _{\sigma (p)}}^{\mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{1}}^{\mu _{\sigma (1)}}\cdots \delta _{\nu _{p}}^{\mu _{\sigma (p)}}.}
Usinganti-symmetrization:δν1…νpμ1…μp=p!δ[ν1μ1…δνp]μp=p!δν1[μ1…δνpμp].{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=p!\delta _{[\nu _{1}}^{\mu _{1}}\dots \delta _{\nu _{p}]}^{\mu _{p}}=p!\delta _{\nu _{1}}^{[\mu _{1}}\dots \delta _{\nu _{p}}^{\mu _{p}]}.}
In terms of ap×p{\displaystyle p\times p}determinant:[7]δν1…νpμ1…μp=|δν1μ1⋯δνpμ1⋮⋱⋮δν1μp⋯δνpμp|.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{vmatrix}\delta _{\nu _{1}}^{\mu _{1}}&\cdots &\delta _{\nu _{p}}^{\mu _{1}}\\\vdots &\ddots &\vdots \\\delta _{\nu _{1}}^{\mu _{p}}&\cdots &\delta _{\nu _{p}}^{\mu _{p}}\end{vmatrix}}.}
Using theLaplace expansion(Laplace's formula) of determinant, it may be definedrecursively:[8]δν1…νpμ1…μp=∑k=1p(−1)p+kδνkμpδν1…νˇk…νpμ1…μk…μˇp=δνpμpδν1…νp−1μ1…μp−1−∑k=1p−1δνkμpδν1…νk−1νpνk+1…νp−1μ1…μk−1μkμk+1…μp−1,{\displaystyle {\begin{aligned}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}&=\sum _{k=1}^{p}(-1)^{p+k}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots {\check {\nu }}_{k}\dots \nu _{p}}^{\mu _{1}\dots \mu _{k}\dots {\check {\mu }}_{p}}\\&=\delta _{\nu _{p}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{p-1}}-\sum _{k=1}^{p-1}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{k-1}\,\nu _{p}\,\nu _{k+1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{k-1}\,\mu _{k}\,\mu _{k+1}\dots \mu _{p-1}},\end{aligned}}}where the caron,ˇ{\displaystyle {\check {}}}, indicates an index that is omitted from the sequence.
Whenp=n{\displaystyle p=n}(the dimension of the vector space), in terms of theLevi-Civita symbol:δν1…νnμ1…μn=εμ1…μnεν1…νn.{\displaystyle \delta _{\nu _{1}\dots \nu _{n}}^{\mu _{1}\dots \mu _{n}}=\varepsilon ^{\mu _{1}\dots \mu _{n}}\varepsilon _{\nu _{1}\dots \nu _{n}}\,.}More generally, form=n−p{\displaystyle m=n-p}, using theEinstein summation convention:δν1…νpμ1…μp=1m!εκ1…κmμ1…μpεκ1…κmν1…νp.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\tfrac {1}{m!}}\varepsilon ^{\kappa _{1}\dots \kappa _{m}\mu _{1}\dots \mu _{p}}\varepsilon _{\kappa _{1}\dots \kappa _{m}\nu _{1}\dots \nu _{p}}\,.}
Kronecker Delta contractions depend on the dimension of the space. For example,δμ1ν1δν1ν2μ1μ2=(d−1)δν2μ2,{\displaystyle \delta _{\mu _{1}}^{\nu _{1}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=(d-1)\delta _{\nu _{2}}^{\mu _{2}},}wheredis the dimension of the space. From this relation the full contracted delta is obtained asδμ1μ2ν1ν2δν1ν2μ1μ2=2d(d−1).{\displaystyle \delta _{\mu _{1}\mu _{2}}^{\nu _{1}\nu _{2}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=2d(d-1).}The generalization of the preceding formulas is[citation needed]δμ1…μnν1…νnδν1…νpμ1…μp=n!(d−p+n)!(d−p)!δνn+1…νpμn+1…μp.{\displaystyle \delta _{\mu _{1}\dots \mu _{n}}^{\nu _{1}\dots \nu _{n}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=n!{\frac {(d-p+n)!}{(d-p)!}}\delta _{\nu _{n+1}\dots \nu _{p}}^{\mu _{n+1}\dots \mu _{p}}.}
The generalized Kronecker delta may be used foranti-symmetrization:1p!δν1…νpμ1…μpaν1…νp=a[μ1…μp],1p!δν1…νpμ1…μpaμ1…μp=a[ν1…νp].{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{\nu _{1}\dots \nu _{p}}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{\mu _{1}\dots \mu _{p}}&=a_{[\nu _{1}\dots \nu _{p}]}.\end{aligned}}}
From the above equations and the properties ofanti-symmetric tensors, we can derive the properties of the generalized Kronecker delta:1p!δν1…νpμ1…μpa[ν1…νp]=a[μ1…μp],1p!δν1…νpμ1…μpa[μ1…μp]=a[ν1…νp],1p!δν1…νpμ1…μpδκ1…κpν1…νp=δκ1…κpμ1…μp,{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{[\nu _{1}\dots \nu _{p}]}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{[\mu _{1}\dots \mu _{p}]}&=a_{[\nu _{1}\dots \nu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}\delta _{\kappa _{1}\dots \kappa _{p}}^{\nu _{1}\dots \nu _{p}}&=\delta _{\kappa _{1}\dots \kappa _{p}}^{\mu _{1}\dots \mu _{p}},\end{aligned}}}which are the generalized version of formulae written in§ Properties. The last formula is equivalent to theCauchy–Binet formula.
Reducing the order via summation of the indices may be expressed by the identity[9]δν1…νsμs+1…μpμ1…μsμs+1…μp=(n−s)!(n−p)!δν1…νsμ1…μs.{\displaystyle \delta _{\nu _{1}\dots \nu _{s}\,\mu _{s+1}\dots \mu _{p}}^{\mu _{1}\dots \mu _{s}\,\mu _{s+1}\dots \mu _{p}}={\frac {(n-s)!}{(n-p)!}}\delta _{\nu _{1}\dots \nu _{s}}^{\mu _{1}\dots \mu _{s}}.}
Using both the summation rule for the casep=n{\displaystyle p=n}and the relation with the Levi-Civita symbol,the summation rule of the Levi-Civita symbolis derived:δν1…νpμ1…μp=1(n−p)!εμ1…μpκp+1…κnεν1…νpκp+1…κn.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\frac {1}{(n-p)!}}\varepsilon ^{\mu _{1}\dots \mu _{p}\,\kappa _{p+1}\dots \kappa _{n}}\varepsilon _{\nu _{1}\dots \nu _{p}\,\kappa _{p+1}\dots \kappa _{n}}.}The 4D version of the last relation appears in Penrose'sspinor approach to general relativity[10]that he later generalized, while he was developing Aitken's diagrams,[11]to become part of the technique ofPenrose graphical notation.[12]Also, this relation is extensively used inS-dualitytheories, especially when written in the language ofdifferential formsandHodge duals.
For any integersj{\displaystyle j}andk{\displaystyle k}, the Kronecker delta can be written as a complexcontour integralusing a standardresiduecalculation. The integral is taken over theunit circlein thecomplex plane, oriented counterclockwise. An equivalent representation of the integral arises by parameterizing the contour by an angle around the origin.δjk=12πi∮|z|=1zj−k−1dz=12π∫02πei(j−k)φdφ{\displaystyle \delta _{jk}={\frac {1}{2\pi i}}\oint _{|z|=1}z^{j-k-1}\,dz={\frac {1}{2\pi }}\int _{0}^{2\pi }e^{i(j-k)\varphi }\,d\varphi }
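The angular form of this representation is easy to verify numerically; a sketch (the function name is illustrative):

```python
import numpy as np

def delta_numeric(j: int, k: int, m: int = 1024) -> float:
    """(1/2pi) * integral of e^{i(j-k)phi} over [0, 2pi), via a Riemann sum."""
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.mean(np.exp(1j * (j - k) * phi)).real

# Equal indices integrate to 1; distinct integer indices average to 0.
print(round(delta_numeric(3, 3)))   # 1
print(round(delta_numeric(5, 2)))   # 0
```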
The Kronecker comb function with periodN{\displaystyle N}is defined (usingDSPnotation) as:ΔN[n]=∑k=−∞∞δ[n−kN],{\displaystyle \Delta _{N}[n]=\sum _{k=-\infty }^{\infty }\delta [n-kN],}whereN≠0{\displaystyle N\neq 0}andn{\displaystyle n}are integers. The Kronecker comb thus consists of an infinite series of unit impulses that areNunits apart, aligned so one of the impulses occurs at zero. It may be considered to be the discrete analog of theDirac comb.
|
https://en.wikipedia.org/wiki/Kronecker_delta
|
In mathematics, theindicator vector,characteristic vector, orincidence vectorof asubsetTof asetSis the vectorxT:=(xs)s∈S{\displaystyle x_{T}:=(x_{s})_{s\in S}}such thatxs=1{\displaystyle x_{s}=1}ifs∈T{\displaystyle s\in T}andxs=0{\displaystyle x_{s}=0}ifs∉T.{\displaystyle s\notin T.}
IfSiscountableand its elements are numbered so thatS={s1,s2,…,sn}{\displaystyle S=\{s_{1},s_{2},\ldots ,s_{n}\}}, thenxT=(x1,x2,…,xn){\displaystyle x_{T}=(x_{1},x_{2},\ldots ,x_{n})}wherexi=1{\displaystyle x_{i}=1}ifsi∈T{\displaystyle s_{i}\in T}andxi=0{\displaystyle x_{i}=0}ifsi∉T.{\displaystyle s_{i}\notin T.}
To put it more simply, the indicator vector ofTis a vector with one element for each element inS, with that element being one if the corresponding element ofSis inT, and zero if it is not.[1][2][3]
An indicator vector is a special (countable) case of anindicator function.
IfSis the set ofnatural numbersN{\displaystyle \mathbb {N} }, andTis some subset of the natural numbers, then the indicator vector is naturally a single point in theCantor space: that is, an infinite sequence of 1's and 0's, indicating membership, or lack thereof, inT. Such vectors commonly occur in the study ofarithmetical hierarchy.
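For a finite ordered set, the definition translates directly into code; a minimal sketch (names and the sample sets are illustrative):

```python
def indicator_vector(S, T):
    """Indicator vector of subset T within the ordered set S."""
    members = set(T)
    return [1 if s in members else 0 for s in S]

S = ["s1", "s2", "s3", "s4", "s5"]
T = {"s2", "s5"}
print(indicator_vector(S, T))   # [0, 1, 0, 0, 1]
```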
|
https://en.wikipedia.org/wiki/Indicator_vector
|
Inlinear algebra, amatrix unitis amatrixwith only one nonzero entry with value 1.[1][2]The matrix unit with a 1 in theith row andjth column is denoted asEij{\displaystyle E_{ij}}. For example, the 3 by 3 matrix unit withi= 1 andj= 2 isE12=[010000000]{\displaystyle E_{12}={\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}}}Avector unitis astandard unit vector.
Asingle-entry matrixgeneralizes the matrix unit for matrices with only one nonzero entry of any value, not necessarily of value 1.
The set ofmbynmatrix units is abasisof the space ofmbynmatrices.[2]
The product of two matrix units of the same square shapen×n{\displaystyle n\times n}satisfies the relationEijEkl=δjkEil,{\displaystyle E_{ij}E_{kl}=\delta _{jk}E_{il},}whereδjk{\displaystyle \delta _{jk}}is theKronecker delta.[2]
The group ofscalarn-by-nmatrices over a ringRis thecentralizerof the subset ofn-by-nmatrix units in the set ofn-by-nmatrices overR.[2]
Thematrix norm(induced by the same two vector norms) of a matrix unit is equal to 1.
When multiplied by another matrix, it isolates a specific row or column in arbitrary position. For example, for any 3-by-3 matrixA:[3]
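Both the product rule and the row-isolating behaviour can be sketched with NumPy (note the helper below uses 0-based indices, unlike the 1-based notation in the text):

```python
import numpy as np

def E(i, j, n=3):
    """n-by-n matrix unit with a single 1 in row i, column j (0-based)."""
    M = np.zeros((n, n), dtype=int)
    M[i, j] = 1
    return M

A = np.arange(1, 10).reshape(3, 3)   # any 3-by-3 matrix

# Left-multiplying by E(0, 1) copies row 1 of A into row 0, zeroing the rest.
print(E(0, 1) @ A)

# Product rule E_ij E_kl = delta_jk E_il:
assert np.array_equal(E(0, 1) @ E(1, 2), E(0, 2))   # j == k: E_il survives
assert not np.any(E(0, 1) @ E(2, 2))                # j != k: zero matrix
```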
|
https://en.wikipedia.org/wiki/Single-entry_vector
|
Theunary numeral systemis the simplest numeral system to representnatural numbers:[1]to represent a numberN, a symbol representing 1 is repeatedNtimes.[2]
In the unary system, the number0(zero) is represented by theempty string, that is, the absence of a symbol. Numbers 1, 2, 3, 4, 5, 6, ... are represented in unary as 1, 11, 111, 1111, 11111, 111111, ...[3]
Unary is abijective numeral system. However, although it has sometimes been described as "base 1",[4]it differs in some important ways frompositional notations, in which the value of a digit depends on its position within a number. For instance, the unary form of a number can be exponentially longer than its representation in other bases.[5]
The use oftally marksin counting is an application of the unary numeral system. For example, using the tally mark|(𝍷), the number 3 is represented as|||. InEast Asiancultures, the number 3 is represented as三, a character drawn with three strokes.[6](One and two are represented similarly.) In China and Japan, the character 正, drawn with 5 strokes, is sometimes used to represent 5 as a tally.[7][8]
Unary numbers should be distinguished fromrepunits, which are also written as sequences of ones but have their usualdecimalnumerical interpretation.
Additionandsubtractionare particularly simple in the unary system, as they involve little more thanstring concatenation.[9]TheHamming weightor population count operation that counts the number of nonzero bits in a sequence of binary values may also be interpreted as a conversion from unary tobinary numbers.[10]However,multiplicationis more cumbersome and has often been used as a test case for the design ofTuring machines.[11][12][13]
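A sketch of unary arithmetic as string manipulation (the representation chosen here, the character "1" repeated N times, follows the examples above):

```python
def to_unary(n: int) -> str:
    return "1" * n

def from_unary(u: str) -> int:
    return len(u)

# Addition is concatenation; subtraction (for a >= b) drops a prefix.
assert from_unary(to_unary(3) + to_unary(4)) == 7
assert from_unary(to_unary(5)[2:]) == 3

# Population count as unary-to-binary conversion: the 1-bits of a binary
# word form a unary tally whose length is the Hamming weight.
word = 0b101101
tally = bin(word)[2:].replace("0", "")   # '1111'
print(from_unary(tally))                 # 4
```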
Compared to standardpositional numeral systems, the unary system is inconvenient and hence is not used in practice for large calculations. It occurs in somedecision problemdescriptions intheoretical computer science(e.g. someP-completeproblems), where it is used to "artificially" decrease the run-time or space requirements of a problem. For instance, the problem ofinteger factorizationis suspected to require more than a polynomial function of the length of the input as run-time if the input is given inbinary, but it only needs linear runtime if the input is presented in unary.[14]However, this is potentially misleading. Using a unary input is slower for any given number, not faster; the distinction is that a binary (or larger base) input is proportional to the base 2 (or larger base) logarithm of the number while unary input is proportional to the number itself. Therefore, while the run-time and space requirement in unary looks better as function of the input size, it does not represent a more efficient solution.[15]
Incomputational complexity theory, unary numbering is used to distinguishstrongly NP-completeproblems from problems that areNP-completebut not strongly NP-complete. A problem in which the input includes some numerical parameters is strongly NP-complete if it remains NP-complete even when the size of the input is made artificially larger by representing the parameters in unary. For such a problem, there exist hard instances for which all parameter values are at most polynomially large.[16]
In addition to the application in tally marks, unary numbering is used as part of some data compression algorithms such asGolomb coding.[17]It also forms the basis for thePeano axiomsfor formalizing arithmetic withinmathematical logic.[18]A form of unary notation calledChurch encodingis used to represent numbers withinlambda calculus.[19]
Someemailspam filterstag messages with a number ofasterisksin ane-mail headersuch asX-Spam-BarorX-SPAM-LEVEL. The larger the number, the more likely the email is considered spam. Using a unary representation instead of a decimal number lets the user search for messages with a given rating or higher. For example, searching for **** yields messages with a rating of at least 4.[20]
|
https://en.wikipedia.org/wiki/Unary_numeral_system
|
Nonparametric regressionis a form ofregression analysiswhere the predictor does not take a predetermined form but is completely constructed using information derived from the data. That is, noparametric equationis assumed for the relationship betweenpredictorsand the dependent variable. A largersamplesize is needed to build a nonparametric model with the same level ofuncertaintyas aparametric model, because the data must supply both the model structure and the parameter estimates.
Nonparametric regression assumes the following relationship, given the random variablesX{\displaystyle X}andY{\displaystyle Y}:
wherem(x){\displaystyle m(x)}is some deterministic function.Linear regressionis a restricted case of nonparametric regression wherem(x){\displaystyle m(x)}is assumed to be a linear function of the data.
Sometimes a slightly stronger assumption of additive noise is used:
where the random variableU{\displaystyle U}is the "noise term", with mean 0.
Without the assumption thatm{\displaystyle m}belongs to a specific parametric family of functions, it is impossible to get an unbiased estimate form{\displaystyle m}; however, most estimators areconsistentunder suitable conditions.
This is a non-exhaustive list of non-parametric models for regression.
In Gaussian process regression, also known as Kriging, a Gaussian prior is assumed for the regression curve. The errors are assumed to have amultivariate normal distributionand the regression curve is estimated by itsposterior mode. The Gaussian prior may depend on unknown hyperparameters, which are usually estimated viaempirical Bayes.
The hyperparameters typically specify a prior covariance kernel. In case the kernel should also be inferred nonparametrically from the data, thecritical filtercan be used.
Smoothing splineshave an interpretation as the posterior mode of a Gaussian process regression.
Kernel regression estimates the continuous dependent variable from a limited set of data points byconvolvingthe data points' locations with akernel function—approximately speaking, the kernel function specifies how to "blur" the influence of the data points so that their values can be used to predict the value for nearby locations.
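A minimal Nadaraya–Watson-style sketch with a Gaussian kernel (the bandwidth and the synthetic data are arbitrary illustrations):

```python
import numpy as np

def kernel_regression(xq, x, y, h=0.5):
    """Nadaraya-Watson estimate at query points xq, Gaussian kernel width h."""
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

est = kernel_regression(np.array([np.pi / 2]), x, y)
# The estimate lands near sin(pi/2) = 1, smoothed ("blurred") by the kernel.
print(est)
```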
Decision tree learning algorithms can be applied to learn to predict a dependent variable from data.[2]Although the original Classification And Regression Tree (CART) formulation applied only to predicting univariate data, the framework can be used to predict multivariate data, including time series.[3]
|
https://en.wikipedia.org/wiki/Nonparametric_regression
|
Ridge regression(also known asTikhonov regularization, named forAndrey Tikhonov) is a method of estimating thecoefficientsof multiple-regression modelsin scenarios where the independent variables are highly correlated.[1]It has been used in many fields including econometrics, chemistry, and engineering.[2]It is a method ofregularizationofill-posed problems.[a]It is particularly useful to mitigate the problem ofmulticollinearityinlinear regression, which commonly occurs in models with large numbers of parameters.[3]In general, the method provides improvedefficiencyin parameter estimation problems in exchange for a tolerable amount ofbias(seebias–variance tradeoff).[4]
The theory was first introduced by Hoerl and Kennard in 1970 in theirTechnometricspapers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems".[5][6][1]
Ridge regression was developed as a possible solution to the imprecision of least-squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean square estimator are often smaller than the least-squares estimators derived previously.[7][2]
In the simplest case, the problem of anear-singularmoment matrixXTX{\displaystyle \mathbf {X} ^{\mathsf {T}}\mathbf {X} }is alleviated by adding positive elements to thediagonals, thereby decreasing itscondition number. Analogous to theordinary least squaresestimator, the simple ridge estimator is then given byβ^R=(XTX+λI)−1XTy{\displaystyle {\hat {\boldsymbol {\beta }}}_{R}=\left(\mathbf {X} ^{\mathsf {T}}\mathbf {X} +\lambda \mathbf {I} \right)^{-1}\mathbf {X} ^{\mathsf {T}}\mathbf {y} }wherey{\displaystyle \mathbf {y} }is theregressand,X{\displaystyle \mathbf {X} }is thedesign matrix,I{\displaystyle \mathbf {I} }is theidentity matrix, and the ridge parameterλ≥0{\displaystyle \lambda \geq 0}serves as the constant shifting the diagonals of the moment matrix.[8]It can be shown that this estimator is the solution to theleast squaresproblem subject to theconstraintβTβ=c{\displaystyle {\boldsymbol {\beta }}^{\mathsf {T}}{\boldsymbol {\beta }}=c}, which can be expressed as a Lagrangian minimization:β^R=argminβ(y−Xβ)T(y−Xβ)+λ(βTβ−c){\displaystyle {\hat {\boldsymbol {\beta }}}_{R}={\text{argmin}}_{\boldsymbol {\beta }}\,\left(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right)^{\mathsf {T}}\left(\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}\right)+\lambda \left({\boldsymbol {\beta }}^{\mathsf {T}}{\boldsymbol {\beta }}-c\right)}which shows thatλ{\displaystyle \lambda }is nothing but theLagrange multiplierof the constraint.[9]In fact, there is a one-to-one relationship betweenc{\displaystyle c}andβ{\displaystyle \beta }and since, in practice, we do not knowc{\displaystyle c}, we defineλ{\displaystyle \lambda }heuristically or find it via additional data-fitting strategies, seeDetermination of the Tikhonov factor.
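The simple ridge estimator is a one-liner in NumPy; a sketch on synthetic near-collinear data (all names, sizes, and the choice of λ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.standard_normal((n, p))
X[:, 2] = X[:, 1] + 0.01 * rng.standard_normal(n)   # near-collinear columns
beta = np.array([1.0, 2.0, 0.0])
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 1.0
XtX = X.T @ X
# Ridge estimate: (X^T X + lambda I)^{-1} X^T y
beta_ridge = np.linalg.solve(XtX + lam * np.eye(p), X.T @ y)

# Adding lambda to the diagonal sharply improves the condition number.
print(np.linalg.cond(XtX + lam * np.eye(p)) < np.linalg.cond(XtX))   # True
```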
Note that, whenλ=0{\displaystyle \lambda =0}, in which case theconstraint is non-binding, the ridge estimator reduces toordinary least squares. A more general approach to Tikhonov regularization is discussed below.
Tikhonov regularization was invented independently in many different contexts.
It became widely known through its application to integral equations in the works ofAndrey Tikhonov[10][11][12][13][14]and David L. Phillips.[15]Some authors use the termTikhonov–Phillips regularization.
The finite-dimensional case was expounded byArthur E. Hoerl, who took a statistical approach,[16]and by Manus Foster, who interpreted this method as aWiener–Kolmogorov (Kriging)filter.[17]Following Hoerl, it is known in the statistical literature as ridge regression,[18]named after ridge analysis ("ridge" refers to the path from the constrained maximum).[19]
Suppose that for a knownreal matrixA{\displaystyle A}and vectorb{\displaystyle \mathbf {b} }, we wish to find a vectorx{\displaystyle \mathbf {x} }such thatAx=b,{\displaystyle A\mathbf {x} =\mathbf {b} ,}wherex{\displaystyle \mathbf {x} }andb{\displaystyle \mathbf {b} }may be of different sizes andA{\displaystyle A}may be non-square.
The standard approach isordinary least squareslinear regression. However, if nox{\displaystyle \mathbf {x} }satisfies the equation or more than onex{\displaystyle \mathbf {x} }does (that is, the solution is not unique), the problem is said to beill posed. In such cases, ordinary least squares estimation leads to anoverdetermined, or more often anunderdetermined, system of equations. Most real-world phenomena have the effect oflow-pass filtersin the forward direction whereA{\displaystyle A}mapsx{\displaystyle \mathbf {x} }tob{\displaystyle \mathbf {b} }. Therefore, in solving the inverse problem, the inverse mapping operates as ahigh-pass filterthat has the undesirable tendency of amplifying noise (eigenvalues/singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version ofx{\displaystyle \mathbf {x} }that is in the null space ofA{\displaystyle A}, rather than allowing for a model to be used as a prior forx{\displaystyle \mathbf {x} }.
Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as $\left\|A\mathbf{x}-\mathbf{b}\right\|_2^2$, where $\|\cdot\|_2$ is the Euclidean norm.
In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization: $$\left\|A\mathbf{x}-\mathbf{b}\right\|_2^2+\left\|\Gamma\mathbf{x}\right\|_2^2=\left\|\begin{pmatrix}A\\\Gamma\end{pmatrix}\mathbf{x}-\begin{pmatrix}\mathbf{b}\\\mathbf{0}\end{pmatrix}\right\|_2^2$$ for some suitably chosen Tikhonov matrix $\Gamma$. In many cases this matrix is chosen as a scalar multiple of the identity matrix ($\Gamma=\alpha I$), giving preference to solutions with smaller norms; this is known as $L_2$ regularization.[20] In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous.
This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{\mathbf{x}}$, is given by $$\hat{\mathbf{x}}=\left(A^{\mathsf T}A+\Gamma^{\mathsf T}\Gamma\right)^{-1}A^{\mathsf T}\mathbf{b}=\left(\begin{pmatrix}A\\\Gamma\end{pmatrix}^{\mathsf T}\begin{pmatrix}A\\\Gamma\end{pmatrix}\right)^{-1}\begin{pmatrix}A\\\Gamma\end{pmatrix}^{\mathsf T}\begin{pmatrix}\mathbf{b}\\\mathbf{0}\end{pmatrix}.$$ The effect of regularization may be varied by the scale of the matrix $\Gamma$. For $\Gamma=0$ this reduces to the unregularized least-squares solution, provided that $(A^{\mathsf T}A)^{-1}$ exists. Note that in the case of a complex matrix $A$, the transpose $A^{\mathsf T}$ has to be replaced, as usual, by the Hermitian transpose $A^{\mathsf H}$.
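As an illustrative numerical check (random matrices only, not data from the article), the normal-equations form and the augmented least-squares form of this explicit solution can be compared with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))         # illustrative non-square matrix
b = rng.normal(size=20)
Gamma = 0.7 * np.eye(5)              # Tikhonov matrix, a scalar multiple of I

# Normal-equations form: (A^T A + Gamma^T Gamma)^{-1} A^T b
x_normal = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

# Augmented least-squares form: stack A over Gamma, and b over 0
A_aug = np.vstack([A, Gamma])
b_aug = np.concatenate([b, np.zeros(5)])
x_aug = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Both forms give the same regularized solution
assert np.allclose(x_normal, x_aug)
```

The augmented form is often preferred in practice, since forming $A^{\mathsf T}A$ explicitly squares the condition number of the problem.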
$L_2$ regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines,[21] and matrix factorization.[22]
Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularized optimization has taken place. For example, if the above problem with $\Gamma=0$ yields the solution $\hat{\mathbf{x}}_0$, the solution in the presence of $\Gamma\neq 0$ can be expressed as $\hat{\mathbf{x}}=B\hat{\mathbf{x}}_0$, with the "regularization matrix" $B=\left(A^{\mathsf T}A+\Gamma^{\mathsf T}\Gamma\right)^{-1}A^{\mathsf T}A$.
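A minimal sketch of this after-the-fact regularization, on illustrative random data: applying the matrix $B$ to the unregularized solution reproduces the direct Tikhonov solution.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))
b = rng.normal(size=30)
Gamma = 0.5 * np.eye(4)

# Unregularized least-squares solution
x0 = np.linalg.solve(A.T @ A, A.T @ b)

# Regularize after the fact via the matrix B
B = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ A)
x_after = B @ x0

# Direct Tikhonov solution for comparison
x_direct = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

assert np.allclose(x_after, x_direct)
```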
If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties $V_0$, then the regularization matrix is $B=\left(V_0^{-1}+\Gamma^{\mathsf T}\Gamma\right)^{-1}V_0^{-1}$, and the regularized result has the new covariance $V=BV_0B^{\mathsf T}$.
In the context of arbitrary likelihood fits, this is valid as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularized result is small, one can regularize any result that is presented as a best-fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.[23]
For general multivariate normal distributions for $\mathbf{x}$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $\mathbf{x}$ to minimize $$\left\|A\mathbf{x}-\mathbf{b}\right\|_P^2+\left\|\mathbf{x}-\mathbf{x}_0\right\|_Q^2,$$ where $\left\|\mathbf{x}\right\|_Q^2$ stands for the weighted norm squared $\mathbf{x}^{\mathsf T}Q\mathbf{x}$ (compare with the Mahalanobis distance). In the Bayesian interpretation $P$ is the inverse covariance matrix of $\mathbf{b}$, $\mathbf{x}_0$ is the expected value of $\mathbf{x}$, and $Q$ is the inverse covariance matrix of $\mathbf{x}$. The Tikhonov matrix is then given as a factorization of the matrix $Q=\Gamma^{\mathsf T}\Gamma$ (e.g. the Cholesky factorization) and is considered a whitening filter.
This generalized problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula $$\mathbf{x}^*=\left(A^{\mathsf T}PA+Q\right)^{-1}\left(A^{\mathsf T}P\mathbf{b}+Q\mathbf{x}_0\right),$$ or equivalently, when $Q$ is not a null matrix: $$\mathbf{x}^*=\mathbf{x}_0+\left(A^{\mathsf T}PA+Q\right)^{-1}A^{\mathsf T}P\left(\mathbf{b}-A\mathbf{x}_0\right).$$
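The two formulas can be checked against each other numerically; the matrices below are illustrative placeholders for the inverse covariances $P$ and $Q$ and the prior mean $\mathbf{x}_0$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(25, 3))
b = rng.normal(size=25)
x0 = rng.normal(size=3)                 # prior mean of x
P = np.eye(25)                          # inverse data covariance (whitened noise)
L = rng.normal(size=(3, 3))
Q = L @ L.T + np.eye(3)                 # inverse prior covariance, positive definite

M = A.T @ P @ A + Q
# First form: (A^T P A + Q)^{-1} (A^T P b + Q x0)
x_star1 = np.linalg.solve(M, A.T @ P @ b + Q @ x0)
# Equivalent update form around the prior mean
x_star2 = x0 + np.linalg.solve(M, A.T @ P @ (b - A @ x0))

assert np.allclose(x_star1, x_star2)
```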
In some situations, one can avoid using the transpose $A^{\mathsf T}$, as proposed by Mikhail Lavrentyev.[24] For example, if $A$ is symmetric positive definite, i.e. $A=A^{\mathsf T}>0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\left\|\mathbf{x}\right\|_P^2=\mathbf{x}^{\mathsf T}A^{-1}\mathbf{x}$ in the generalized Tikhonov regularization, leading to minimizing $$\left\|A\mathbf{x}-\mathbf{b}\right\|_{A^{-1}}^2+\left\|\mathbf{x}-\mathbf{x}_0\right\|_Q^2$$ or, equivalently up to a constant term, $$\mathbf{x}^{\mathsf T}\left(A+Q\right)\mathbf{x}-2\mathbf{x}^{\mathsf T}\left(\mathbf{b}+Q\mathbf{x}_0\right).$$
This minimization problem has an optimal solution $\mathbf{x}^*$ which can be written explicitly using the formula $$\mathbf{x}^*=\left(A+Q\right)^{-1}\left(\mathbf{b}+Q\mathbf{x}_0\right),$$ which is nothing but the solution of the generalized Tikhonov problem where $A=A^{\mathsf T}=P^{-1}$.
Lavrentyev regularization, if applicable, is advantageous over the original Tikhonov regularization, since the Lavrentyev matrix $A+Q$ can be better conditioned, i.e., have a smaller condition number, than the Tikhonov matrix $A^{\mathsf T}A+\Gamma^{\mathsf T}\Gamma$.
Typically discrete linear ill-conditioned problems result from the discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret $A$ as a compact operator on Hilbert spaces, and $x$ and $b$ as elements in the domain and range of $A$. The operator $A^*A+\Gamma^{\mathsf T}\Gamma$ is then a self-adjoint bounded invertible operator.
With $\Gamma=\alpha I$, this least-squares solution can be analyzed in a special way using the singular value decomposition. Given the singular value decomposition $A=U\Sigma V^{\mathsf T}$ with singular values $\sigma_i$, the Tikhonov regularized solution can be expressed as $$\hat{x}=VDU^{\mathsf T}b,$$ where $D$ has diagonal values $$D_{ii}=\frac{\sigma_i}{\sigma_i^2+\alpha^2}$$ and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular value decomposition.[25]
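A short sketch, on illustrative random data, comparing the SVD filter-factor form with the direct solve:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(15, 6))
b = rng.normal(size=15)
alpha = 0.3

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Filter-factor solution: D_ii = sigma_i / (sigma_i^2 + alpha^2)
d = s / (s**2 + alpha**2)
x_svd = Vt.T @ (d * (U.T @ b))

# Direct solve with Gamma = alpha * I for comparison
x_direct = np.linalg.solve(A.T @ A + alpha**2 * np.eye(6), A.T @ b)

assert np.allclose(x_svd, x_direct)
```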
Finally, it is related to the Wiener filter: $$\hat{x}=\sum_{i=1}^q f_i\frac{u_i^{\mathsf T}b}{\sigma_i}v_i,$$ where the Wiener weights are $f_i=\frac{\sigma_i^2}{\sigma_i^2+\alpha^2}$ and $q$ is the rank of $A$.
The optimal regularization parameter $\alpha$ is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method,[26] restricted maximum likelihood and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes[27][28] $$G=\frac{\operatorname{RSS}}{\tau^2}=\frac{\left\|X\hat{\beta}-y\right\|^2}{\left[\operatorname{Tr}\left(I-X\left(X^{\mathsf T}X+\alpha^2 I\right)^{-1}X^{\mathsf T}\right)\right]^2},$$ where $\operatorname{RSS}$ is the residual sum of squares and $\tau$ is the effective number of degrees of freedom.
Using the previous SVD decomposition, we can simplify the above expression: $$\operatorname{RSS}=\left\|y-\sum_{i=1}^q(u_i'b)u_i\right\|^2+\left\|\sum_{i=1}^q\frac{\alpha^2}{\sigma_i^2+\alpha^2}(u_i'b)u_i\right\|^2=\operatorname{RSS}_0+\left\|\sum_{i=1}^q\frac{\alpha^2}{\sigma_i^2+\alpha^2}(u_i'b)u_i\right\|^2,$$ and $$\tau=m-\sum_{i=1}^q\frac{\sigma_i^2}{\sigma_i^2+\alpha^2}=m-q+\sum_{i=1}^q\frac{\alpha^2}{\sigma_i^2+\alpha^2}.$$
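Using these SVD expressions, the generalized cross-validation score can be evaluated cheaply over a grid of candidate $\alpha$ values. The data below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=40)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
m, q = X.shape[0], len(s)
proj = U.T @ y                      # coordinates of y in the left singular basis
rss0 = y @ y - proj @ proj          # residual outside the column space of X

def gcv(alpha):
    f = alpha**2 / (s**2 + alpha**2)    # shrinkage factors (1 - Wiener weights)
    rss = rss0 + np.sum((f * proj)**2)
    tau = m - q + np.sum(f)             # effective degrees of freedom
    return rss / tau**2

alphas = np.logspace(-4, 2, 100)
scores = np.array([gcv(a) for a in alphas])
best_alpha = alphas[scores.argmin()]
```

In practice one minimizes over a log-spaced grid as above, since the score is cheap to evaluate once the SVD is computed.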
The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix $C_M$ representing the a priori uncertainties on the model parameters, and a covariance matrix $C_D$ representing the uncertainties on the observed parameters.[29] In the special case when these two matrices are diagonal and isotropic, $C_M=\sigma_M^2 I$ and $C_D=\sigma_D^2 I$, the equations of inverse theory reduce to the equations above, with $\alpha=\sigma_D/\sigma_M$.[30][31]
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix $\Gamma$ seems rather arbitrary, the process can be justified from a Bayesian point of view.[32] Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of $x$ is sometimes taken to be a multivariate normal distribution.[33] For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation $\sigma_x$. The data are also subject to errors, and the errors in $b$ are also assumed to be independent with zero mean and standard deviation $\sigma_b$. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of $x$, according to Bayes' theorem.[34]
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.[35]
https://en.wikipedia.org/wiki/Tikhonov_regularization
In machine learning (ML), feature learning or representation learning[2] is a set of techniques that allow a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
Feature learning is motivated by the fact that ML tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensor data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Feature learning can be supervised, unsupervised, or self-supervised:
Supervised feature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include:
Dictionary learning develops a set (dictionary) of representative elements from the input data such that each data point can be represented as a weighted sum of the representative elements. The dictionary elements and the weights may be found by minimizing the average representation error (over the input data), together with L1 regularization on the weights to enable sparsity (i.e., the representation of each data point has only a few nonzero weights).
Supervised dictionary learning exploits both the structure underlying the input data and the labels for optimizing the dictionary elements. For example, one supervised dictionary learning technique[12] applies dictionary learning to classification problems by jointly optimizing the dictionary elements, the weights for representing data points, and the parameters of the classifier based on the input data. In particular, a minimization problem is formulated, where the objective function consists of the classification error, the representation error, an L1 regularization on the representing weights for each data point (to enable a sparse representation of the data), and an L2 regularization on the parameters of the classifier.
Neural networks are a family of learning algorithms that use a "network" consisting of multiple layers of interconnected nodes. They are inspired by the animal nervous system, where the nodes are viewed as neurons and the edges as synapses. Each edge has an associated weight, and the network defines computational rules for passing input data from the network's input layer to the output layer. A network function associated with a neural network characterizes the relationship between the input and output layers, parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights).
Multilayer neural networks can be used to perform feature learning, since they learn a representation of their input at the hidden layer(s), which is subsequently used for classification or regression at the output layer. The most popular network architecture of this type is the Siamese network.
Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data.[13][14] Several approaches are introduced in the following.
K-means clustering is an approach for vector quantization. In particular, given a set of n vectors, k-means clustering groups them into k clusters (i.e., subsets) in such a way that each vector belongs to the cluster with the closest mean. The problem is computationally NP-hard, although suboptimal greedy algorithms have been developed.
K-means clustering can be used to group an unlabeled set of inputs into k clusters, and then use the centroids of these clusters to produce features. These features can be produced in several ways. The simplest is to add k binary features to each sample, where each feature j has value one iff the jth centroid learned by k-means is the closest to the sample under consideration.[6] It is also possible to use the distances to the clusters as features, perhaps after transforming them through a radial basis function (a technique that has been used to train RBF networks[15]). Coates and Ng note that certain variants of k-means behave similarly to sparse coding algorithms.[16]
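A minimal NumPy sketch of this featurization (plain Lloyd iterations on synthetic blobs; initial centroids are picked deterministically, one per blob, to keep the toy example stable):

```python
import numpy as np

rng = np.random.default_rng(5)
# Unlabeled samples drawn around three well-separated centers
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.concatenate([rng.normal(c, 0.1, size=(50, 2)) for c in centers])
k = 3

# Plain Lloyd iterations; one deterministic seed point per block of samples
centroids = X[[0, 50, 100]].copy()
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])

# Simplest featurization: k binary features, one-hot on the nearest centroid
features = np.eye(k)[labels]
```

Replacing the one-hot encoding with the rows of `dists` (optionally passed through a radial basis function) gives the distance-based features mentioned above.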
In a comparative evaluation of unsupervised feature learning methods, Coates, Lee and Ng found that k-means clustering with an appropriate transformation outperforms the more recently invented auto-encoders and RBMs on an image classification task.[6] K-means also improves performance in the domain of NLP, specifically for named-entity recognition;[17] there, it competes with Brown clustering, as well as with distributed word representations (also known as neural word embeddings).[14]
Principal component analysis (PCA) is often used for dimension reduction. Given an unlabeled set of n input data vectors, PCA generates p (which is much smaller than the dimension of the input data) right singular vectors corresponding to the p largest singular values of the data matrix, where the kth row of the data matrix is the kth input data vector shifted by the sample mean of the input (i.e., subtracting the sample mean from the data vector). Equivalently, these singular vectors are the eigenvectors corresponding to the p largest eigenvalues of the sample covariance matrix of the input vectors. These p singular vectors are the feature vectors learned from the input data, and they represent directions along which the data has the largest variations.
PCA is a linear feature learning approach, since the p singular vectors are linear functions of the data matrix. The singular vectors can be generated via a simple algorithm with p iterations. In the ith iteration, the projection of the data matrix on the (i-1)th eigenvector is subtracted, and the ith singular vector is found as the right singular vector corresponding to the largest singular value of the residual data matrix.
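A short sketch, on synthetic data, showing that the top right singular vectors of the centered data matrix coincide (up to sign) with the leading eigenvectors of the sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, p = 200, 10, 3
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated inputs

# Center by subtracting the sample mean, take the top-p right singular vectors
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:p]                  # learned feature directions
Z = Xc @ components.T                # p-dimensional representation of the data

# The same directions are leading eigenvectors of the sample covariance matrix
cov = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
top_eigvecs = eigvecs[:, ::-1][:, :p]
```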
PCA has several limitations. First, it assumes that the directions with large variance are of most interest, which may not be the case. PCA relies only on orthogonal transformations of the original data, and it exploits only the first- and second-order moments of the data, which may not characterize the data distribution well. Furthermore, PCA can effectively reduce dimension only when the input data vectors are correlated (which results in a few dominant eigenvalues).
Local linear embedding (LLE) is a nonlinear learning approach for generating low-dimensional neighbor-preserving representations from (unlabeled) high-dimensional input. The approach was proposed by Roweis and Saul (2000).[18][19] The general idea of LLE is to reconstruct the original high-dimensional data using lower-dimensional points while maintaining some geometric properties of the neighborhoods in the original data set.
LLE consists of two major steps. The first step is "neighbor-preserving", where each input data point Xi is reconstructed as a weighted sum of its K nearest neighbor data points, and the optimal weights are found by minimizing the average squared reconstruction error (i.e., the difference between an input point and its reconstruction) under the constraint that the weights associated with each point sum to one. The second step is "dimension reduction", which looks for vectors in a lower-dimensional space that minimize the representation error using the optimized weights from the first step. Note that in the first step the weights are optimized with fixed data, which can be solved as a least squares problem. In the second step, lower-dimensional points are optimized with fixed weights, which can be solved via sparse eigenvalue decomposition.
The reconstruction weights obtained in the first step capture the "intrinsic geometric properties" of a neighborhood in the input data.[19] It is assumed that the original data lie on a smooth lower-dimensional manifold, and the "intrinsic geometric properties" captured by the weights of the original data are also expected to hold on the manifold. This is why the same weights are used in the second step of LLE. Compared with PCA, LLE is more powerful in exploiting the underlying data structure.
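The first (neighbor-preserving) step can be sketched for a single point: solve the constrained least-squares problem for weights over the K nearest neighbors via the local Gram matrix. The data here are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 5))
K = 4
i = 0                                         # reconstruct the first point

# K nearest neighbors of X[i], excluding the point itself
dists = np.linalg.norm(X - X[i], axis=1)
nbrs = np.argsort(dists)[1:K + 1]

# Minimize ||X[i] - sum_j w_j X[nbrs[j]]||^2 subject to sum_j w_j = 1,
# via the Gram matrix of the neighbors centered on X[i]
G = (X[nbrs] - X[i]) @ (X[nbrs] - X[i]).T
G += 1e-8 * np.trace(G) * np.eye(K)           # small regularization for stability
w = np.linalg.solve(G, np.ones(K))
w /= w.sum()
```

Repeating this for every point yields the sparse weight matrix used in the second (embedding) step.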
Independent component analysis (ICA) is a technique for forming a data representation using a weighted sum of independent non-Gaussian components.[20] The assumption of non-Gaussianity is imposed since the weights cannot be uniquely determined when all the components follow a Gaussian distribution.
Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data to optimize the dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data.[21] Aharon et al. proposed the K-SVD algorithm for learning a dictionary of elements that enables sparse representation.[22]
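Given a fixed (here random, purely illustrative) overcomplete dictionary, a sparse code can be sought with iterative soft thresholding (ISTA), one standard sparse-coding solver:

```python
import numpy as np

rng = np.random.default_rng(8)
n_atoms, dim = 20, 8                   # overcomplete: more atoms than dimensions
D = rng.normal(size=(dim, n_atoms))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms

# Signal built from 3 atoms, so a good sparse representation exists
true_code = np.zeros(n_atoms)
true_code[[2, 7, 11]] = [1.0, -0.5, 2.0]
x = D @ true_code

# ISTA for min_c 0.5 * ||x - D c||^2 + lam * ||c||_1
lam = 0.01
step = 1.0 / np.linalg.norm(D, 2)**2   # inverse Lipschitz constant of the gradient
c = np.zeros(n_atoms)
for _ in range(2000):
    c = c - step * (D.T @ (D @ c - x))                      # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0)  # soft threshold
```

Full dictionary learning alternates such sparse-coding steps with updates of D itself (as in K-SVD); the sketch above fixes D for brevity.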
The hierarchical architecture of the biological neural system inspires deep learning architectures for feature learning by stacking multiple layers of learning nodes.[23] These architectures are often designed based on the assumption of distributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels. The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation.
Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures.[6][24] An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machine with the constraint of no intra-layer connections. Each edge in an RBM is associated with a weight. The weights together with the connections define an energy function, based on which a joint distribution of visible and hidden nodes can be devised. Because the graph is bipartite, the hidden variables are mutually independent conditioned on the visible variables, and vice versa. Such conditional independence facilitates computations.
An RBM can be viewed as a single-layer architecture for unsupervised feature learning. In particular, the visible variables correspond to input data, and the hidden variables correspond to feature detectors. The weights can be trained by maximizing the probability of visible variables using Hinton's contrastive divergence (CD) algorithm.[24]
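A minimal mean-field CD-1 sketch in NumPy; probabilities are used instead of binary samples for simplicity, and the model is fit to a single repeated pattern, so this is only a toy illustration of the update rule:

```python
import numpy as np

rng = np.random.default_rng(9)
n_vis, n_hid, lr = 6, 3, 0.2
W = 0.01 * rng.normal(size=(n_vis, n_hid))
a = np.zeros(n_vis)                    # visible biases
b = np.zeros(n_hid)                    # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])    # toy training pattern

for _ in range(500):
    # Positive phase: hidden activations given the data
    ph0 = sigmoid(v0 @ W + b)
    # Negative phase: one mean-field Gibbs step (reconstruction)
    pv1 = sigmoid(ph0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # CD-1 update: difference of positive- and negative-phase correlations
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)

# After training, the reconstruction resembles the training pattern
recon = sigmoid(sigmoid(v0 @ W + b) @ W.T + a)
```

In a full implementation the hidden and visible states are sampled stochastically and the model is trained on mini-batches of many patterns.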
In general, training RBMs by solving the maximization problem tends to result in non-sparse representations. The sparse RBM[25] was proposed to enable sparse representations. The idea is to add a regularization term to the objective function of the data likelihood, which penalizes the deviation of the expected hidden variables from a small constant $p$. RBMs have also been used to obtain disentangled representations of data, where interesting features map to separate hidden units.[26]
An autoencoder, consisting of an encoder and a decoder, is a paradigm for deep learning architectures. An example is provided by Hinton and Salakhutdinov,[24] where the encoder uses raw data (e.g., an image) as input and produces a feature or representation as output, and the decoder uses the extracted feature from the encoder as input and reconstructs the original input raw data as output. The encoder and decoder are constructed by stacking multiple layers of RBMs. The parameters involved in the architecture were originally trained in a greedy layer-by-layer manner: after one layer of feature detectors is learned, they are fed up as visible variables for training the corresponding RBM. Current approaches typically apply end-to-end training with stochastic gradient descent methods. Training can be repeated until some stopping criteria are satisfied.
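As a toy sketch of the encoder/decoder idea (a linear autoencoder trained end-to-end with gradient descent on synthetic low-rank data, rather than stacked RBMs):

```python
import numpy as np

rng = np.random.default_rng(10)
# Synthetic data lying near a 2-dimensional subspace of R^6
Z_true = rng.normal(size=(200, 2))
X = Z_true @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(200, 6))

# Linear autoencoder: encoder We (6 -> 2), decoder Wd (2 -> 6)
We = 0.1 * rng.normal(size=(6, 2))
Wd = 0.1 * rng.normal(size=(2, 6))
lr, n = 0.01, len(X)
init_err = np.mean((X @ We @ Wd - X) ** 2)

for _ in range(5000):
    Z = X @ We                 # encode: low-dimensional features
    E = Z @ Wd - X             # decode, then reconstruction error
    Wd -= lr * Z.T @ E / n     # gradient descent on mean squared error
    We -= lr * X.T @ (E @ Wd.T) / n

final_err = np.mean((X @ We @ Wd - X) ** 2)
```

A linear autoencoder of this form recovers the same subspace as PCA; practical autoencoders insert nonlinearities between layers to learn richer representations.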
Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations.[9] Training tasks typically fall under the classes of either contrastive, generative or both.[27] Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A larger portion of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation.[9] Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower-dimensional representation.[27]
A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general-context, unlabeled data.[11] Depending on the context, the result is either a set of representations for common data segments (e.g. words) into which new data can be broken, or a neural network able to convert each new data point (e.g. image) into a set of lower-dimensional features.[9] In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model/representations with the labels as the signal, or by freezing the representations and training an additional model which takes them as input.[11]
Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types.[9]
Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text.[28] The model has two possible training schemes to produce word vector representations, one generative and one contrastive.[27] The first is word prediction given each of the neighboring words as an input.[28] The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words.[10] A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or the entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks.[9] GPTs pretrain on next word prediction using prior input words as context,[29] whereas BERT masks random tokens in order to provide bidirectional context.[30]
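A compact sketch of the contrastive scheme (skip-gram with negative sampling) on a toy corpus; all hyperparameters are illustrative and the negative sampler is deliberately simplistic (it may occasionally draw the true context word):

```python
import numpy as np

rng = np.random.default_rng(11)
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat ate the fish the dog ate the bone").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, dim, window, lr = len(vocab), 8, 2, 0.05

W_in = 0.1 * rng.normal(size=(V, dim))     # target-word vectors
W_out = 0.1 * rng.normal(size=(V, dim))    # context-word vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update(t, c, label):
    # Push the pair's score toward `label` (1 = observed pair, 0 = negative)
    g = sigmoid(W_in[t] @ W_out[c]) - label
    grad_in = g * W_out[c]
    W_out[c] -= lr * g * W_in[t]
    W_in[t] -= lr * grad_in

for _ in range(200):
    for pos, word in enumerate(corpus):
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            update(idx[word], idx[corpus[pos + off]], 1.0)  # positive pair
            update(idx[word], rng.integers(V), 0.0)         # one negative sample

def score(a, b):
    return sigmoid(W_in[idx[a]] @ W_out[idx[b]])

# Observed neighbors should now score higher than word pairs on average
obs = np.mean([score(corpus[i], corpus[i + 1]) for i in range(len(corpus) - 1)])
rand = np.mean([score(a, b) for a in vocab for b in vocab])
```

Real implementations draw negatives from a frequency-smoothed unigram distribution and train on corpora of billions of tokens; the structure of the update is the same.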
Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data.[9] Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph the word is within, and is therefore intended to represent paragraph-level context.[31]
The domain of image representation learning has employed many different self-supervised training techniques, including transformation,[32] inpainting,[33] patch discrimination[34] and clustering.[35]
Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input,[33] and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution.[36]
Many other self-supervised methods use Siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation.[37] SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN.[34] Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training.[38]
The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology.[39] node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association.[40] Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph.[9] An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a "patch" around each node and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single-graph training.[41]
Video representation learning approaches are often similar to image techniques, with analogous results in masked prediction[42] and clustering,[43] but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al., who train a 3D-CNN to identify the original order given a shuffled set of video clips.[44]
Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing.[9] Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss.[45] This is similar to the BERT language model, except that, as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary.[30][45]
Self-supervised learning has also been used to develop joint representations of multiple data types.[9]Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds,[46]or captions written to describe images.[47]CLIP produces a joint image-text representation space by training to align image and text encodings from a large dataset of image-caption pairs using a contrastive loss.[47]MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles and video frames from a large dataset of videos through 3 joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions.[46]
Multimodal representation modelsare typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. For example, the text "dog" could be paired with many different pictures of dogs, and correspondingly a picture of a dog could be captioned with varying degrees of specificity. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as inDALLE-2for text to image generation.[48]
Dynamic representation learning methods[49][50]generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can represent the same or similar information. Consequently, a temporal difference between the embeddings of a dynamic system may reflect either misalignment of embeddings due to such arbitrary transformations or actual changes in the system.[51]In general, therefore, temporal embeddings learned via dynamic representation learning methods should be inspected for spurious changes and aligned before any subsequent dynamic analysis.
|
https://en.wikipedia.org/wiki/Representation_learning
|
Sparse dictionary learning(also known assparse codingorSDL) is arepresentation learningmethod which aims to find asparserepresentation of the input data in the form of alinear combinationof basic elements as well as those basic elements themselves. These elements are calledatoms, and they compose adictionary. Atoms in the dictionary are not required to beorthogonal, and they may be an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than any one of the signals being observed. These two properties lead to having seemingly redundant atoms that allow multiple representations of the same signal, but also provide an improvement insparsityand flexibility of the representation.
One of the most important applications of sparse dictionary learning is in the field ofcompressed sensingorsignal recovery. In compressed sensing, a high-dimensional signal can be recovered with only a few linear measurements, provided that the signal is sparse or near-sparse. Since not all signals satisfy this condition, it is crucial to find a sparse representation of that signal such as thewavelet transformor the directional gradient of a rasterized matrix. Once a matrix or a high-dimensional vector is transferred to a sparse space, different recovery algorithms likebasis pursuit, CoSaMP,[1]or fast non-iterative algorithms[2]can be used to recover the signal.
One of the key principles of dictionary learning is that the dictionary has to be inferred from the input data. The emergence of sparse dictionary learning methods was stimulated by the fact that insignal processing, one typically wants to represent the input data using a minimal amount of components. Before this approach, the general practice was to use predefined dictionaries such asFourierorwavelettransforms. However, in certain cases, a dictionary that is trained to fit the input data can significantly improve the sparsity, which has applications in data decomposition,compression, andanalysis, and has been used in the fields of imagedenoisingandclassification, and video andaudio processing. Sparsity and overcomplete dictionaries have immense applications in image compression, image fusion, andinpainting.
Given the input datasetX=[x1,...,xK],xi∈Rd{\displaystyle X=[x_{1},...,x_{K}],x_{i}\in \mathbb {R} ^{d}}we wish to find a dictionaryD∈Rd×n:D=[d1,...,dn]{\displaystyle \mathbf {D} \in \mathbb {R} ^{d\times n}:D=[d_{1},...,d_{n}]}and a representationR=[r1,...,rK],ri∈Rn{\displaystyle R=[r_{1},...,r_{K}],r_{i}\in \mathbb {R} ^{n}}such that both‖X−DR‖F2{\displaystyle \|X-\mathbf {D} R\|_{F}^{2}}is minimized and the representationsri{\displaystyle r_{i}}are sparse enough. This can be formulated as the followingoptimization problem:
argminD∈C,ri∈Rn∑i=1K‖xi−Dri‖22+λ‖ri‖0{\displaystyle {\underset {\mathbf {D} \in {\mathcal {C}},r_{i}\in \mathbb {R} ^{n}}{\text{argmin}}}\sum _{i=1}^{K}\|x_{i}-\mathbf {D} r_{i}\|_{2}^{2}+\lambda \|r_{i}\|_{0}}, whereC≡{D∈Rd×n:‖di‖2≤1∀i=1,...,n}{\displaystyle {\mathcal {C}}\equiv \{\mathbf {D} \in \mathbb {R} ^{d\times n}:\|d_{i}\|_{2}\leq 1\,\,\forall i=1,...,n\}},λ>0{\displaystyle \lambda >0}
C{\displaystyle {\mathcal {C}}}is required to constrainD{\displaystyle \mathbf {D} }so that its atoms would not reach arbitrarily high values allowing for arbitrarily low (but non-zero) values ofri{\displaystyle r_{i}}.λ{\displaystyle \lambda }controls the trade off between the sparsity and the minimization error.
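As a concrete check of this objective, the following sketch (assuming NumPy; the function name is illustrative) evaluates the reconstruction term plus the weighted ℓ0 penalty for a given data matrix, dictionary, and code matrix:

```python
import numpy as np

def sparse_coding_objective(X, D, R, lam):
    """Value of sum_i ||x_i - D r_i||_2^2 + lam * ||r_i||_0.

    X: (d, K) data columns, D: (d, n) dictionary, R: (n, K) codes.
    """
    residual = X - D @ R                 # reconstruction error, all signals at once
    l0 = np.count_nonzero(R)             # total number of nonzero coefficients
    return np.sum(residual ** 2) + lam * l0

# Tiny example: a 2-atom dictionary representing two signals exactly.
D = np.array([[1.0, 0.0],
              [0.0, 1.0]])
X = np.array([[2.0, 0.0],
              [0.0, 3.0]])
R = np.array([[2.0, 0.0],
              [0.0, 3.0]])               # each signal uses a single atom
print(sparse_coding_objective(X, D, R, lam=0.5))  # 0 error + 0.5 * 2 nonzeros = 1.0
```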
The minimization problem above is not convex because of theℓ0-"norm"and solving this problem is NP-hard.[3]In some casesL1-normis known to ensure sparsity[4]and so the above becomes aconvex optimizationproblem with respect to each of the variablesD{\displaystyle \mathbf {D} }andR{\displaystyle \mathbf {R} }when the other one is fixed, but it is not jointly convex in(D,R){\displaystyle (\mathbf {D} ,\mathbf {R} )}.
The dictionaryD{\displaystyle \mathbf {D} }defined above can be "undercomplete" ifn<d{\displaystyle n<d}or "overcomplete" in casen>d{\displaystyle n>d}with the latter being a typical assumption for a sparse dictionary learning problem. The case of a complete dictionary does not provide any improvement from a representational point of view and thus isn't considered.
Undercomplete dictionaries represent the setup in which the actual input data lies in a lower-dimensional space. This case is strongly related todimensionality reductionand techniques likeprincipal component analysiswhich require atomsd1,...,dn{\displaystyle d_{1},...,d_{n}}to be orthogonal. The choice of these subspaces is crucial for efficient dimensionality reduction, but it is not trivial. Dimensionality reduction based on dictionary representation can be extended to address specific tasks such as data analysis or classification. However, the main downside of such dictionaries is the limited choice of atoms.
Overcomplete dictionaries, however, do not require the atoms to be orthogonal (withn>d{\displaystyle n>d}they cannot form abasisin any case), thus allowing for more flexible dictionaries and richer data representations.
An overcomplete dictionary that allows for sparse representation of a signal can be a well-known transform matrix (e.g. thewaveletorFouriertransform), or it can be formulated so that its elements are adapted to represent the given signal as sparsely as possible. Learned dictionaries are capable of giving sparser solutions than predefined transform matrices.
As the optimization problem described above can be solved as a convex problem with respect to either dictionary or sparse coding while the other one of the two is fixed, most of the algorithms are based on the idea of iteratively updating one and then the other.
The problem of finding an optimal sparse codingR{\displaystyle R}with a given dictionaryD{\displaystyle \mathbf {D} }is known assparse approximation(or sometimes just sparse coding problem). A number of algorithms have been developed to solve it (such asmatching pursuitandLASSO) and are incorporated in the algorithms described below.
The method of optimal directions (or MOD) was one of the first methods introduced to tackle the sparse dictionary learning problem.[5]The core idea of it is to solve the minimization problem subject to the limited number of non-zero components of the representation vector:
minD,R{‖X−DR‖F2}s.t.∀i‖ri‖0≤T{\displaystyle \min _{\mathbf {D} ,R}\{\|X-\mathbf {D} R\|_{F}^{2}\}\,\,{\text{s.t.}}\,\,\forall i\,\,\|r_{i}\|_{0}\leq T}
Here,F{\displaystyle F}denotes theFrobenius norm. MOD alternates between getting thesparse codingusing a method such asmatching pursuitand updating the dictionary by computing the analytical solution of the problem given byD=XR+{\displaystyle \mathbf {D} =XR^{+}}whereR+{\displaystyle R^{+}}is aMoore-Penrose pseudoinverse. After this updateD{\displaystyle \mathbf {D} }is renormalized to fit the constraints and the new sparse coding is obtained again. The process is repeated until convergence (or until a sufficiently small residue).
MOD has proved to be a very efficient method for low-dimensional input dataX{\displaystyle X}, requiring just a few iterations to converge. However, due to the high complexity of the matrix-inversion operation, computing the pseudoinverse is often intractable for high-dimensional data. This shortcoming has inspired the development of other dictionary learning methods.
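A minimal MOD iteration can be sketched as follows (a simplified greedy orthogonal matching pursuit stands in for the sparse-coding step; function names and tolerances are illustrative, not the exact published algorithm):

```python
import numpy as np

def omp(D, x, T):
    """Greedy orthogonal matching pursuit: a code for x with at most T nonzeros."""
    residual, idx = x.copy(), []
    for _ in range(T):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))    # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    r = np.zeros(D.shape[1])
    r[idx] = coef
    return r

def mod_step(X, D, T):
    """One MOD iteration: sparse-code each column, then D = X R^+, renormalize."""
    R = np.column_stack([omp(D, X[:, i], T) for i in range(X.shape[1])])
    D = X @ np.linalg.pinv(R)                                 # analytical update
    D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)      # re-fit norm constraint
    return D, R
```

Iterating `mod_step` until the residual stops shrinking reproduces the alternation between sparse coding and the analytical dictionary update described above.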
K-SVDis an algorithm that performsSVDat its core to update the atoms of the dictionary one by one, and can be viewed as a generalization ofK-means. It enforces that each element of the input dataxi{\displaystyle x_{i}}is encoded by a linear combination of not more thanT0{\displaystyle T_{0}}elements, in a formulation identical to the MOD approach:
minD,R{‖X−DR‖F2}s.t.∀i‖ri‖0≤T0{\displaystyle \min _{\mathbf {D} ,R}\{\|X-\mathbf {D} R\|_{F}^{2}\}\,\,{\text{s.t.}}\,\,\forall i\,\,\|r_{i}\|_{0}\leq T_{0}}
This algorithm's essence is to first fix the dictionary, find the best possibleR{\displaystyle R}under the above constraint (usingOrthogonal Matching Pursuit) and then iteratively update the atoms of dictionaryD{\displaystyle \mathbf {D} }in the following manner:
‖X−DR‖F2=‖X−∑i=1KdixTi‖F2=‖(X−∑i≠kdixTi)−dkxTk‖F2=‖Ek−dkxTk‖F2{\displaystyle \|X-\mathbf {D} R\|_{F}^{2}=\left\|X-\sum _{i=1}^{K}d_{i}x_{T}^{i}\right\|_{F}^{2}=\left\|\left(X-\sum _{i\neq k}d_{i}x_{T}^{i}\right)-d_{k}x_{T}^{k}\right\|_{F}^{2}=\|E_{k}-d_{k}x_{T}^{k}\|_{F}^{2}}, whereEk=X−∑i≠kdixTi{\displaystyle E_{k}=X-\sum _{i\neq k}d_{i}x_{T}^{i}}is the residual with the contribution of atomdk{\displaystyle d_{k}}removed.
The next steps of the algorithm include arank-1 approximationof the residual matrixEk{\displaystyle E_{k}}, updatingdk{\displaystyle d_{k}}and enforcing the sparsity ofxk{\displaystyle x_{k}}after the update. This algorithm is considered the standard for dictionary learning and is used in a variety of applications. However, it shares weaknesses with MOD: it is efficient only for signals with relatively low dimensionality, and it can get stuck at local minima.
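The atom update at the heart of K-SVD can be sketched as a rank-1 SVD of the residual restricted to the signals that currently use atom k, which preserves the sparsity pattern (a simplified illustration, not the full published algorithm):

```python
import numpy as np

def ksvd_atom_update(X, D, R, k):
    """Rank-1 update of atom k and its row of codes (K-SVD core step, in place).

    Only signals whose current code uses atom k are touched, so the
    sparsity pattern of R is preserved.
    """
    omega = np.flatnonzero(R[k, :])          # signals that use atom k
    if omega.size == 0:
        return D, R                          # unused atom: nothing to update
    # Residual with atom k's own contribution added back in:
    E_k = X[:, omega] - D @ R[:, omega] + np.outer(D[:, k], R[k, omega])
    U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
    D[:, k] = U[:, 0]                        # best rank-1 left factor, unit norm
    R[k, omega] = s[0] * Vt[0, :]            # matching coefficients
    return D, R
```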
One can also apply a widespread stochastic gradient descent method with iterative projection to solve this problem.[6]The idea of this method is to update the dictionary using the first order stochastic gradient and project it on the constraint setC{\displaystyle {\mathcal {C}}}. The step that occurs at i-th iteration is described by this expression:
Di=projC{Di−1−δi∇D∑i∈S‖xi−Dri‖22+λ‖ri‖1}{\displaystyle \mathbf {D} _{i}={\text{proj}}_{\mathcal {C}}\left\{\mathbf {D} _{i-1}-\delta _{i}\nabla _{\mathbf {D} }\sum _{i\in S}\|x_{i}-\mathbf {D} r_{i}\|_{2}^{2}+\lambda \|r_{i}\|_{1}\right\}}, whereS{\displaystyle S}is a random subset of{1...K}{\displaystyle \{1...K\}}andδi{\displaystyle \delta _{i}}is a gradient step.
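A single projected stochastic gradient step might look like the following sketch (the ℓ1 term does not depend on D, so only the quadratic fit term contributes to the gradient; the step size and names are illustrative):

```python
import numpy as np

def projected_sgd_step(D, X, R, batch, step):
    """One stochastic gradient step on D, then projection onto C.

    batch: indices S of the minibatch. Projection onto
    {D : ||d_i||_2 <= 1} rescales only columns whose norm exceeds 1.
    """
    Xb, Rb = X[:, batch], R[:, batch]
    grad = -2.0 * (Xb - D @ Rb) @ Rb.T       # d/dD of sum_i ||x_i - D r_i||^2
    D = D - step * grad
    norms = np.linalg.norm(D, axis=0)
    D /= np.maximum(norms, 1.0)              # project: shrink over-long columns
    return D
```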
An algorithm based on solving adual Lagrangian problemprovides an efficient way to solve for the dictionary having no complications induced by the sparsity function.[7]Consider the following Lagrangian:
L(D,Λ)=tr((X−DR)T(X−DR))+∑j=1nλj(∑i=1dDij2−c){\displaystyle {\mathcal {L}}(\mathbf {D} ,\Lambda )={\text{tr}}\left((X-\mathbf {D} R)^{T}(X-\mathbf {D} R)\right)+\sum _{j=1}^{n}\lambda _{j}\left({\sum _{i=1}^{d}\mathbf {D} _{ij}^{2}-c}\right)}, wherec{\displaystyle c}is a constraint on the norm of the atoms andλi{\displaystyle \lambda _{i}}are the so-called dual variables forming the diagonal matrixΛ{\displaystyle \Lambda }.
We can then provide an analytical expression for the Lagrange dual after minimization overD{\displaystyle \mathbf {D} }:
D(Λ)=minDL(D,Λ)=tr(XTX−XRT(RRT+Λ)−1(XRT)T−cΛ){\displaystyle {\mathcal {D}}(\Lambda )=\min _{\mathbf {D} }{\mathcal {L}}(\mathbf {D} ,\Lambda )={\text{tr}}(X^{T}X-XR^{T}(RR^{T}+\Lambda )^{-1}(XR^{T})^{T}-c\Lambda )}.
After applying one of the optimization methods to the value of the dual (such asNewton's methodorconjugate gradient) we get the value ofD{\displaystyle \mathbf {D} }:
DT=(RRT+Λ)−1(XRT)T{\displaystyle \mathbf {D} ^{T}=(RR^{T}+\Lambda )^{-1}(XR^{T})^{T}}
Solving this problem is less computationally hard because the number of dual variablesn{\displaystyle n}is often much smaller than the number of variables in the primal problem.
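Given the optimal dual variables, recovering D reduces to a single linear solve, as in this sketch (assuming NumPy; for Λ → 0 and full-rank RRᵀ it reduces to the least-squares dictionary):

```python
import numpy as np

def dictionary_from_dual(X, R, lam_diag):
    """Recover the dictionary via D^T = (R R^T + Lambda)^{-1} (X R^T)^T."""
    Lam = np.diag(lam_diag)                  # dual variables on the norm constraints
    Dt = np.linalg.solve(R @ R.T + Lam, (X @ R.T).T)
    return Dt.T
```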
In this approach, the optimization problem is formulated as:
minr∈Rn{‖r‖1}subject to‖X−DR‖F2<ϵ{\displaystyle \min _{r\in \mathbb {R} ^{n}}\{\,\,\|r\|_{1}\}\,\,{\text{subject to}}\,\,\|X-\mathbf {D} R\|_{F}^{2}<\epsilon }, whereϵ{\displaystyle \epsilon }is the permitted reconstruction error.
LASSO finds an estimate ofri{\displaystyle r_{i}}by minimizing the least squares error subject to anL1-normconstraint on the solution vector, formulated as:
minr∈Rn12‖X−Dr‖F2+λ‖r‖1{\displaystyle \min _{r\in \mathbb {R} ^{n}}\,\,{\dfrac {1}{2}}\,\,\|X-\mathbf {D} r\|_{F}^{2}+\lambda \,\,\|r\|_{1}}, whereλ>0{\displaystyle \lambda >0}controls the trade-off between sparsity and the reconstruction error. Since this problem is convex, it yields the globally optimal solution.[8]
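The per-signal ℓ1 problem can be solved, for example, with iterative shrinkage-thresholding (ISTA); the following is a minimal sketch, not the only or fastest solver:

```python
import numpy as np

def ista(D, x, lam, n_iter=500):
    """Iterative shrinkage-thresholding for min_r 0.5||x - Dr||_2^2 + lam||r||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    r = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ r - x)             # gradient of the smooth quadratic term
        z = r - grad / L
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return r
```

The soft-thresholding step is what produces exact zeros in the code, i.e. the sparsity that the ℓ1 penalty promotes.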
Parametric training methods aim to combine the best of both worlds: the realm of analytically constructed dictionaries and that of learned ones.[9]This makes it possible to construct more powerful generalized dictionaries that can potentially be applied to signals of arbitrary size. Notable approaches include:
Many common approaches to sparse dictionary learning rely on the fact that the whole input dataX{\displaystyle X}(or at least a large enough training dataset) is available for the algorithm. However, this might not be the case in the real-world scenario as the size of the input data might be too big to fit it into memory. The other case where this assumption can not be made is when the input data comes in a form of astream. Such cases lie in the field of study ofonline learningwhich essentially suggests iteratively updating the model upon the new data pointsx{\displaystyle x}becoming available.
A dictionary can be learned in an online manner the following way:[13]
This method allows us to gradually update the dictionary as new data becomes available for sparse representation learning and helps drastically reduce the amount of memory needed to store the dataset (which often has a huge size).
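A minimal online update in the spirit of this approach (loosely following the sufficient-statistics recursion of the online dictionary learning method of Mairal et al.; the exact published algorithm differs in details, and these names are illustrative) might look like:

```python
import numpy as np

def online_dict_update(D, A, B, x, r):
    """One online step: accumulate statistics, then block-coordinate update of D.

    A approximates sum_t r_t r_t^T and B approximates sum_t x_t r_t^T over the
    stream seen so far; each atom is updated from these without revisiting data.
    """
    A += np.outer(r, r)
    B += np.outer(x, r)
    for j in range(D.shape[1]):
        if A[j, j] > 1e-12:                  # skip atoms never used so far
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)   # keep ||d_j||_2 <= 1
    return D, A, B
```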
The dictionary learning framework, namely the linear decomposition of an input signal using a few basis elements learned from the data itself, has led to state-of-the-art[citation needed]results in various image and video processing tasks. This technique can be applied to classification problems: if specific dictionaries have been built for each class, the input signal can be classified by finding the dictionary corresponding to its sparsest representation.
It also has properties that are useful for signal denoising: usually one can learn a dictionary that represents the meaningful part of the input signal sparsely, while noise in the input will have a much less sparse representation.[14]
Sparse dictionary learning has been successfully applied to various image, video and audio processing tasks as well as to texture synthesis[15]and unsupervised clustering.[16]In evaluations with theBag-of-Wordsmodel,[17][18]sparse coding was found empirically to outperform other coding approaches on the object category recognition tasks.
Dictionary learning is used to analyse medical signals in detail. Such medical signals include those from electroencephalography (EEG), electrocardiography (ECG), magnetic resonance imaging (MRI), functional MRI (fMRI), continuous glucose monitors[19]and ultrasound computer tomography (USCT), where different assumptions are used to analyze each signal.
|
https://en.wikipedia.org/wiki/Sparse_dictionary_learning
|
Inmathematicsandmachine learning, thesoftplusfunction is
softplus(x) = log(1 + e^x).
It is a smooth approximation (in fact, ananalytic function) to theramp function, which is known as therectifierorReLU (rectified linear unit)in machine learning. For large negativex{\displaystyle x}it islog(1+ex)=log(1+ϵ)⪆log1=0{\displaystyle \log(1+e^{x})=\log(1+\epsilon )\gtrapprox \log 1=0}, so just above 0, while for large positivex{\displaystyle x}it islog(1+ex)⪆log(ex)=x{\displaystyle \log(1+e^{x})\gtrapprox \log(e^{x})=x}, so just abovex{\displaystyle x}.
The namessoftplus[1][2]andSmoothReLU[3]are used in machine learning. The name "softplus" (2000), coined by analogy with the earliersoftmax(1989), presumably refers to its being a smooth (soft) approximation of the positive part ofx, which is sometimes denoted with a superscriptplus,x+:=max(0,x){\displaystyle x^{+}:=\max(0,x)}.
The derivative of softplus is thelogistic function:
d/dx log(1 + e^x) = e^x / (1 + e^x) = 1 / (1 + e^(−x)).
The logisticsigmoid functionis a smooth approximation of the derivative of the rectifier, theHeaviside step function.
The multivariable generalization of single-variable softplus is theLogSumExpwith the first argument set to zero:
LSE0+(x1, …, xn) := LSE(0, x1, …, xn) = log(1 + e^(x1) + ⋯ + e^(xn)).
The LogSumExp function is
LSE(x1, …, xn) = log(e^(x1) + ⋯ + e^(xn)),
and its gradient is thesoftmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
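A numerically stable implementation of softplus and its relatives, based on the identity log(1 + e^x) = max(x, 0) + log(1 + e^(−|x|)), can be sketched as (function names are illustrative):

```python
import numpy as np

def softplus(x):
    """Stable log(1 + e^x): avoids overflow of exp for large positive x."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def sigmoid(x):
    """The logistic function, which is the derivative of softplus."""
    return 1.0 / (1.0 + np.exp(-x))

def lse0(xs):
    """LogSumExp with the first argument fixed to zero: log(1 + sum_i e^{x_i})."""
    m = max(0.0, float(np.max(xs)))          # shift by the max for stability
    return m + np.log(np.exp(-m) + np.sum(np.exp(xs - m)))
```

For a single argument, `lse0` coincides with `softplus`, mirroring the relationship between LogSumExp and softplus described above.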
Theconvex conjugate(specifically, theLegendre transform) of the softplus function is the negative binary entropy (with basee). This is because (following the definition of the Legendre transform: the derivatives are inverse functions) the derivative of softplus is the logistic function, whose inverse function is thelogit, which is the derivative of negative binary entropy.
Softplus can be interpreted aslogistic loss(as a positive number), so byduality, minimizing logistic loss corresponds to maximizing entropy. This justifies theprinciple of maximum entropyas loss minimization.
This function can be approximated as:
By making the change of variablesx=yln(2){\displaystyle x=y\ln(2)}, this is equivalent to
A sharpness parameterk{\displaystyle k}may be included:
|
https://en.wikipedia.org/wiki/Softplus
|
Instatistics,multinomial logistic regressionis aclassificationmethod that generalizeslogistic regressiontomulticlass problems, i.e. with more than two possible discrete outcomes.[1]That is, it is a model that is used to predict the probabilities of the different possible outcomes of acategorically distributeddependent variable, given a set ofindependent variables(which may be real-valued, binary-valued, categorical-valued, etc.).
Multinomial logistic regression is known by a variety of other names, includingpolytomous LR,[2][3]multiclass LR,softmaxregression,multinomial logit(mlogit), themaximum entropy(MaxEnt) classifier, and theconditional maximum entropy model.[4]
Multinomial logistic regression is used when thedependent variablein question isnominal(equivalentlycategorical, meaning that it falls into any one of a set of categories that cannot be ordered in any meaningful way) and for which there are more than two categories. Some examples would be:
These are allstatistical classificationproblems. They all have in common adependent variableto be predicted that comes from one of a limited set of items that cannot be meaningfully ordered, as well as a set ofindependent variables(also known as features, explanators, etc.), which are used to predict the dependent variable. Multinomial logistic regression is a particular solution to classification problems that use a linear combination of the observed features and some problem-specific parameters to estimate the probability of each particular value of the dependent variable. The best values of the parameters for a given problem are usually determined from some training data (e.g. some people for whom both the diagnostic test results and blood types are known, or some examples of known words being spoken).
The multinomial logistic model assumes that data are case-specific; that is, each independent variable has a single value for each case. As with other types of regression, there is no need for the independent variables to bestatistically independentfrom each other (unlike, for example, in anaive Bayes classifier); however,collinearityis assumed to be relatively low, as it becomes difficult to differentiate between the impact of several variables if this is not the case.[5]
If the multinomial logit is used to model choices, it relies on the assumption ofindependence of irrelevant alternatives(IIA), which is not always desirable. This assumption states that the odds of preferring one class over another do not depend on the presence or absence of other "irrelevant" alternatives. For example, the relative probabilities of taking a car or bus to work do not change if a bicycle is added as an additional possibility. This allows the choice ofKalternatives to be modeled as a set ofK− 1 independent binary choices, in which one alternative is chosen as a "pivot" and the otherK− 1 compared against it, one at a time. The IIA hypothesis is a core hypothesis in rational choice theory; however numerous studies in psychology show that individuals often violate this assumption when making choices. An example of a problem case arises if choices include a car and a blue bus. Suppose the odds ratio between the two is 1 : 1. Now if the option of a red bus is introduced, a person may be indifferent between a red and a blue bus, and hence may exhibit a car : blue bus : red bus odds ratio of 1 : 0.5 : 0.5, thus maintaining a 1 : 1 ratio of car : any bus while adopting a changed car : blue bus ratio of 1 : 0.5. Here the red bus option was not in fact irrelevant, because a red bus was aperfect substitutefor a blue bus.
If the multinomial logit is used to model choices, it may in some situations impose too much constraint on the relative preferences between the different alternatives. It is especially important to take into account if the analysis aims to predict how choices would change if one alternative were to disappear (for instance if one political candidate withdraws from a three candidate race). Other models like thenested logitor themultinomial probitmay be used in such cases as they allow for violation of the IIA.[6]
There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. This can make it difficult to compare different treatments of the subject in different texts. The article onlogistic regressionpresents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model.
The idea behind all of them, as in many otherstatistical classificationtechniques, is to construct alinear predictor functionthat constructs a score from a set of weights that arelinearly combinedwith the explanatory variables (features) of a given observation using adot product:
score(Xi, k) = βk ⋅ Xi,
whereXiis the vector of explanatory variables describing observationi,βkis a vector of weights (orregression coefficients) corresponding to outcomek, and score(Xi,k) is the score associated with assigning observationito categoryk. Indiscrete choicetheory, where observations represent people and outcomes represent choices, the score is considered theutilityassociated with personichoosing outcomek. The predicted outcome is the one with the highest score.
The difference between the multinomial logit model and numerous other methods, models, algorithms, etc. with the same basic setup (theperceptronalgorithm,support vector machines,linear discriminant analysis, etc.) is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. In particular, in the multinomial logit model, the score can directly be converted to a probability value, indicating theprobabilityof observationichoosing outcomekgiven the measured characteristics of the observation. This provides a principled way of incorporating the prediction of a particular multinomial logit model into a larger procedure that may involve multiple such predictions, each with a possibility of error. Without such means of combining predictions, errors tend to multiply. For example, imagine a largepredictive modelthat is broken down into a series of submodels where the prediction of a given submodel is used as the input of another submodel, and that prediction is in turn used as the input into a third submodel, etc. If each submodel has 90% accuracy in its predictions, and there are five submodels in series, then the overall model has only 0.9^5 ≈ 59% accuracy. If each submodel has 80% accuracy, then overall accuracy drops to 0.8^5 ≈ 33%. This issue is known aserror propagationand is a serious problem in real-world predictive models, which are usually composed of numerous parts. Predicting probabilities of each possible outcome, rather than simply making a single optimal prediction, is one means of alleviating this issue.[citation needed]
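The compounding arithmetic can be checked directly (the function name is illustrative):

```python
# A pipeline of independent submodels compounds error multiplicatively:
# overall accuracy = per-stage accuracy raised to the number of stages.
def pipeline_accuracy(per_stage: float, stages: int) -> float:
    return per_stage ** stages

print(f"{pipeline_accuracy(0.9, 5):.0%}")  # five 90%-accurate stages in series
print(f"{pipeline_accuracy(0.8, 5):.0%}")  # five 80%-accurate stages in series
```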
The basic setup is the same as inlogistic regression, the only difference being that thedependent variablesarecategoricalrather thanbinary, i.e. there areKpossible outcomes rather than just two. The following description is somewhat shortened; for more details, consult thelogistic regressionarticle.
Specifically, it is assumed that we have a series ofNobserved data points. Each data pointi(ranging from 1 toN) consists of a set ofMexplanatory variablesx1,i...xM,i(also known asindependent variables, predictor variables, features, etc.), and an associatedcategoricaloutcomeYi(also known asdependent variable, response variable), which can take on one ofKpossible values. These possible values represent logically separate categories (e.g. different political parties, blood types, etc.), and are often described mathematically by arbitrarily assigning each a number from 1 toK. The explanatory variables and outcome represent observed properties of the data points, and are often thought of as originating in the observations ofN"experiments" — although an "experiment" may consist of nothing more than gathering data. The goal of multinomial logistic regression is to construct a model that explains the relationship between the explanatory variables and the outcome, so that the outcome of a new "experiment" can be correctly predicted for a new data point for which the explanatory variables, but not the outcome, are available. In the process, the model attempts to explain the relative effect of differing explanatory variables on the outcome.
Some examples:
As in other forms of linear regression, multinomial logistic regression uses alinear predictor functionf(k,i){\displaystyle f(k,i)}to predict the probability that observationihas outcomek, of the following form:
f(k, i) = β0,k + β1,k x1,i + β2,k x2,i + ⋯ + βM,k xM,i,
whereβm,k{\displaystyle \beta _{m,k}}is aregression coefficientassociated with themth explanatory variable and thekth outcome. As explained in thelogistic regressionarticle, the regression coefficients and explanatory variables are normally grouped into vectors of sizeM+ 1, so that the predictor function can be written more compactly:
f(k, i) = βk ⋅ xi,
whereβk{\displaystyle {\boldsymbol {\beta }}_{k}}is the set of regression coefficients associated with outcomek, andxi{\displaystyle \mathbf {x} _{i}}(a row vector) is the set of explanatory variables associated with observationi, prepended by a 1 in entry 0.
To arrive at the multinomial logit model, one can imagine, forKpossible outcomes, runningKindependent binary logistic regression models, in which one outcome is chosen as a "pivot" and then the otherK− 1 outcomes are separately regressed against the pivot outcome. If outcomeK(the last outcome) is chosen as the pivot, theK− 1 regression equations are:
ln( Pr(Yi = k) / Pr(Yi = K) ) = βk ⋅ Xi,   k = 1, …, K − 1.
This formulation is also known as theAdditive Log Ratiotransform commonly used in compositional data analysis. In other applications it is referred to as "relative risk".[7]
If we exponentiate both sides and solve for the probabilities, we get:
Pr(Yi = k) = Pr(Yi = K) e^(βk ⋅ Xi),   k = 1, …, K − 1.
Using the fact that allKof the probabilities must sum to one, we find:
Pr(Yi = K) = 1 / (1 + Σj=1…K−1 e^(βj ⋅ Xi)).
We can use this to find the other probabilities:
Pr(Yi = k) = e^(βk ⋅ Xi) / (1 + Σj=1…K−1 e^(βj ⋅ Xi)),   k = 1, …, K − 1.
The fact that we run multiple regressions reveals why the model relies on the assumption ofindependence of irrelevant alternativesdescribed above.
The unknown parameters in each vectorβkare typically jointly estimated bymaximum a posteriori(MAP) estimation, which is an extension ofmaximum likelihoodusingregularizationof the weights to prevent pathological solutions (usually a squared regularizing function, which is equivalent to placing a zero-meanGaussianprior distributionon the weights, but other distributions are also possible). The solution is typically found using an iterative procedure such asgeneralized iterative scaling,[8]iteratively reweighted least squares(IRLS),[9]by means ofgradient-based optimizationalgorithms such asL-BFGS,[4]or by specializedcoordinate descentalgorithms.[10]
The formulation of binary logistic regression as alog-linear modelcan be directly extended to multi-way regression. That is, we model thelogarithmof the probability of seeing a given output using the linear predictor as well as an additionalnormalization factor, the logarithm of thepartition function:
As in the binary case, we need an extra term−lnZ{\displaystyle -\ln Z}to ensure that the whole set of probabilities forms aprobability distribution, i.e. so that they all sum to one:
The reason why we need to add a term to ensure normalization, rather than multiply as is usual, is because we have taken the logarithm of the probabilities. Exponentiating both sides turns the additive term into a multiplicative factor, so that the probability is just theGibbs measure:
The quantityZis called thepartition functionfor the distribution. We can compute the value of the partition function by applying the above constraint that requires all probabilities to sum to 1:
Therefore
Note that this factor is "constant" in the sense that it is not a function ofYi, which is the variable over which the probability distribution is defined. However, it is definitely not constant with respect to the explanatory variables, or crucially, with respect to the unknown regression coefficientsβk, which we will need to determine through some sort ofoptimizationprocedure.
The resulting equations for the probabilities are
The following function:
softmax(k, s1, …, sK) = e^(sk) / Σj=1…K e^(sj)
is referred to as thesoftmax function. The reason is that the effect of exponentiating the valuess1,…,sK{\displaystyle s_{1},\ldots ,s_{K}}is to exaggerate the differences between them. As a result,softmax(k,s1,…,sK){\displaystyle \operatorname {softmax} (k,s_{1},\ldots ,s_{K})}will return a value close to 0 wheneversk{\displaystyle s_{k}}is significantly less than the maximum of all the values, and will return a value close to 1 when applied to the maximum value, unless it is extremely close to the next-largest value. Thus, the softmax function can be used to construct aweighted averagethat behaves as asmooth function(which can be convenientlydifferentiated, etc.) and which approximates theindicator function
Thus, we can write the probability equations as
The softmax function thus serves as the equivalent of thelogistic functionin binary logistic regression.
Note that not all of theβk{\displaystyle {\boldsymbol {\beta }}_{k}}vectors of coefficients are uniquelyidentifiable. This is due to the fact that all probabilities must sum to 1, making one of them completely determined once all the rest are known. As a result, there are onlyK−1{\displaystyle K-1}separately specifiable probabilities, and henceK−1{\displaystyle K-1}separately identifiable vectors of coefficients. One way to see this is to note that if we add a constant vector to all of the coefficient vectors, the equations are identical:
As a result, it is conventional to setC=−βK{\displaystyle \mathbf {C} =-{\boldsymbol {\beta }}_{K}}(or alternatively, one of the other coefficient vectors). Essentially, we set the constant so that one of the vectors becomes0{\displaystyle {\boldsymbol {0}}}, and all of the other vectors get transformed into the difference between those vectors and the vector we chose. This is equivalent to "pivoting" around one of theKchoices, and examining how much better or worse all of the otherK− 1 choices are, relative to the choice we are pivoting around. Mathematically, we transform the coefficients as follows:
This leads to the following equations:
Other than the prime symbols on the regression coefficients, this is exactly the same as the form of the model described above, in terms ofK− 1 independent two-way regressions.
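A small numerical sketch (with made-up coefficient values) confirms the identifiability argument: adding the same constant vector C to every coefficient vector leaves the class probabilities unchanged, because the common shift C·x cancels in the softmax:

```python
import math

def class_probs(betas, x):
    # Softmax over the linear scores beta_k . x for each class k.
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

betas = [[1.0, -2.0], [0.5, 0.3], [-1.0, 1.0]]
x = [2.0, 1.0]
C = [10.0, -3.0]
shifted = [[b + c for b, c in zip(beta, C)] for beta in betas]

print(class_probs(betas, x))
print(class_probs(shifted, x))  # identical probabilities
```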
It is also possible to formulate multinomial logistic regression as a latent variable model, following thetwo-way latent variable modeldescribed for binary logistic regression. This formulation is common in the theory ofdiscrete choicemodels, and makes it easier to compare multinomial logistic regression to the relatedmultinomial probitmodel, as well as to extend it to more complex models.
Imagine that, for each data pointiand possible outcomek= 1,2,...,K, there is a continuouslatent variableYi,k*(i.e. an unobservedrandom variable) that is distributed as follows:
whereεk∼EV1(0,1),{\displaystyle \varepsilon _{k}\sim \operatorname {EV} _{1}(0,1),}i.e. a standard type-1extreme value distribution.
This latent variable can be thought of as theutilityassociated with data pointichoosing outcomek, where there is some randomness in the actual amount of utility obtained, which accounts for other unmodeled factors that go into the choice. The value of the actual variableYi{\displaystyle Y_{i}}is then determined in a non-random fashion from these latent variables (i.e. the randomness has been moved from the observed outcomes into the latent variables), where outcomekis chosenif and only ifthe associated utility (the value ofYi,k∗{\displaystyle Y_{i,k}^{\ast }}) is greater than the utilities of all the other choices, i.e. if the utility associated with outcomekis the maximum of all the utilities. Since the latent variables arecontinuous, the probability of two having exactly the same value is 0, so we ignore the scenario. That is:
Or equivalently:
Let's look more closely at the first equation, which we can write as follows:
There are a few things to realize here:
Actually finding the values of the above probabilities is somewhat difficult, and is a problem of computing a particularorder statistic(the first, i.e. maximum) of a set of values. However, it can be shown that the resulting expressions are the same as in above formulations, i.e. the two are equivalent.
When using multinomial logistic regression, one category of the dependent variable is chosen as the reference category. Separateodds ratiosare determined for all independent variables for each category of the dependent variable with the exception of the reference category, which is omitted from the analysis. The exponential beta coefficient represents the change in the odds of the dependent variable being in a particular category vis-a-vis the reference category, associated with a one unit change of the corresponding independent variable.
The observed valuesyi∈{1,…,K}{\displaystyle y_{i}\in \{1,\dots ,K\}}fori=1,…,n{\displaystyle i=1,\dots ,n}of the explained variables are considered as realizations of stochastically independent,categorically distributedrandom variablesY1,…,Yn{\displaystyle Y_{1},\dots ,Y_{n}}.
Thelikelihood functionfor this model is defined by
where the indexi{\displaystyle i}denotes the observations 1 tonand the indexj{\displaystyle j}denotes the classes 1 toK.δj,yi={1,forj=yi0,otherwise{\displaystyle \delta _{j,y_{i}}={\begin{cases}1,{\text{ for }}j=y_{i}\\0,{\text{ otherwise}}\end{cases}}}is theKronecker delta.
The negative log-likelihood function is therefore the well-known cross-entropy:
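The rendered formula is absent from this copy; as a sketch, since the Kronecker delta selects only the observed class of each observation, the cross-entropy reduces to summing the negative log-probabilities the model assigns to the observed classes (the probability values below are hypothetical):

```python
import math

def cross_entropy(probs, labels):
    # probs[i][j] = model probability of class j for observation i;
    # labels[i] = observed class y_i. Only the observed class contributes.
    return -sum(math.log(p[y]) for p, y in zip(probs, labels))

probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
labels = [0, 1]
nll = cross_entropy(probs, labels)
print(nll)
```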
In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically estimated by maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see #Estimating the coefficients.
https://en.wikipedia.org/wiki/Multinomial_logistic_regression
Inprobabilityandstatistics, theDirichlet distribution(afterPeter Gustav Lejeune Dirichlet), often denotedDir(α){\displaystyle \operatorname {Dir} ({\boldsymbol {\alpha }})}, is a family ofcontinuousmultivariateprobability distributionsparameterized by a vectorαof positivereals. It is a multivariate generalization of thebeta distribution,[1]hence its alternative name ofmultivariate beta distribution(MBD).[2]Dirichlet distributions are commonly used asprior distributionsinBayesian statistics, and in fact, the Dirichlet distribution is theconjugate priorof thecategorical distributionandmultinomial distribution.
The infinite-dimensional generalization of the Dirichlet distribution is theDirichlet process.
The Dirichlet distribution of orderK≥ 2with parametersα1, ...,αK> 0has aprobability density functionwith respect toLebesgue measureon theEuclidean spaceRK−1given by
f(x1,…,xK;α1,…,αK)=1B(α)∏i=1Kxiαi−1{\displaystyle f\left(x_{1},\ldots ,x_{K};\alpha _{1},\ldots ,\alpha _{K}\right)={\frac {1}{\mathrm {B} ({\boldsymbol {\alpha }})}}\prod _{i=1}^{K}x_{i}^{\alpha _{i}-1}}
Thenormalizing constantis the multivariatebeta function, which can be expressed in terms of thegamma function:
B(α)=∏i=1KΓ(αi)Γ(∑i=1Kαi),α=(α1,…,αK).{\displaystyle \mathrm {B} ({\boldsymbol {\alpha }})={\frac {\prod \limits _{i=1}^{K}\Gamma (\alpha _{i})}{\Gamma \left(\sum \limits _{i=1}^{K}\alpha _{i}\right)}},\qquad {\boldsymbol {\alpha }}=(\alpha _{1},\ldots ,\alpha _{K}).}
Thesupportof the Dirichlet distribution is the set ofK-dimensional vectorsxwhose entries are real numbers in the interval [0,1] such that‖x‖1=1{\displaystyle \|{\boldsymbol {x}}\|_{1}=1}, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of aK-waycategoricalevent. Another way to express this is that the domain of the Dirichlet distribution is itself a set ofprobability distributions, specifically the set ofK-dimensionaldiscrete distributions. The technical term for the set of points in the support of aK-dimensional Dirichlet distribution is theopenstandard(K− 1)-simplex,[3]which is a generalization of atriangle, embedded in the next-higher dimension. For example, withK= 3, the support is anequilateral triangleembedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.
A common special case is thesymmetric Dirichlet distribution, where all of the elements making up the parameter vectorαhave the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar valueα, called theconcentration parameter. In terms ofα, the density function has the form
f(x1,…,xK;α)=Γ(αK)Γ(α)K∏i=1Kxiα−1.{\displaystyle f(x_{1},\dots ,x_{K};\alpha )={\frac {\Gamma (\alpha K)}{\Gamma (\alpha )^{K}}}\prod _{i=1}^{K}x_{i}^{\alpha -1}.}
When α = 1,[1] the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 prefer variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values.
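The effect of the concentration parameter can be sketched by sampling symmetric Dirichlet variates via normalized gamma draws (a sampling route described later in the article; the specific α values and seed here are illustrative):

```python
import random

def sample_symmetric_dirichlet(alpha, K, rng):
    # Normalize K independent Gamma(alpha, 1) draws to get a Dir(alpha,...,alpha) sample.
    ys = [rng.gammavariate(alpha, 1.0) for _ in range(K)]
    total = sum(ys)
    return [y / total for y in ys]

rng = random.Random(0)
dense = sample_symmetric_dirichlet(50.0, 5, rng)    # alpha >> 1: components nearly equal
sparse = sample_symmetric_dirichlet(0.05, 5, rng)   # alpha << 1: mass piles onto few components
print(dense)
print(sparse)
```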
Whenα= 1/2, the distribution is the same as would be obtained by choosing a point uniformly at random from the surface of a(K−1)-dimensionalunit hypersphereand squaring each coordinate. Theα= 1/2distribution is theJeffreys priorfor the Dirichlet distribution.
More generally, the parameter vector is sometimes written as the product αn{\displaystyle \alpha {\boldsymbol {n}}} of a (scalar) concentration parameter α and a (vector) base measure n=(n1,…,nK){\displaystyle {\boldsymbol {n}}=(n_{1},\dots ,n_{K})} where n lies within the (K − 1)-simplex (i.e.: its coordinates ni{\displaystyle n_{i}} sum to one). The concentration parameter in this case is larger by a factor of K than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.
LetX=(X1,…,XK)∼Dir(α){\displaystyle X=(X_{1},\ldots ,X_{K})\sim \operatorname {Dir} ({\boldsymbol {\alpha }})}.
Let
α0=∑i=1Kαi.{\displaystyle \alpha _{0}=\sum _{i=1}^{K}\alpha _{i}.}
Then[4][5]
E[Xi]=αiα0,{\displaystyle \operatorname {E} [X_{i}]={\frac {\alpha _{i}}{\alpha _{0}}},}Var[Xi]=αi(α0−αi)α02(α0+1).{\displaystyle \operatorname {Var} [X_{i}]={\frac {\alpha _{i}(\alpha _{0}-\alpha _{i})}{\alpha _{0}^{2}(\alpha _{0}+1)}}.}
Furthermore, ifi≠j{\displaystyle i\neq j}
Cov[Xi,Xj]=−αiαjα02(α0+1).{\displaystyle \operatorname {Cov} [X_{i},X_{j}]={\frac {-\alpha _{i}\alpha _{j}}{\alpha _{0}^{2}(\alpha _{0}+1)}}.}
The covariance matrix issingular.
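The closed-form moments above are easy to tabulate; a small helper (the function name is hypothetical) also illustrates why the covariance matrix is singular: each of its rows sums to zero.

```python
def dirichlet_moments(alpha):
    # Mean, variance, and covariance of Dir(alpha), from the closed forms above.
    K = len(alpha)
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    var = [a * (a0 - a) / (a0**2 * (a0 + 1)) for a in alpha]
    cov = [[var[i] if i == j else -alpha[i] * alpha[j] / (a0**2 * (a0 + 1))
            for j in range(K)] for i in range(K)]
    return mean, var, cov

mean, var, cov = dirichlet_moments([1.0, 2.0, 3.0])
print(mean)          # [1/6, 1/3, 1/2]
print(sum(cov[0]))   # each row of the covariance matrix sums to 0
```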
More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. Fort=(t1,…,tK)∈RK{\displaystyle {\boldsymbol {t}}=(t_{1},\dotsc ,t_{K})\in \mathbb {R} ^{K}}, denote byt∘i=(t1i,…,tKi){\displaystyle {\boldsymbol {t}}^{\circ i}=(t_{1}^{i},\dotsc ,t_{K}^{i})}itsi-thHadamard power. Then,[6]
E[(t⋅X)n]=n!Γ(α0)Γ(α0+n)∑t1k1⋯tKkKk1!⋯kK!∏i=1KΓ(αi+ki)Γ(αi)=n!Γ(α0)Γ(α0+n)Zn(t∘1⋅α,⋯,t∘n⋅α),{\displaystyle \operatorname {E} \left[({\boldsymbol {t}}\cdot {\boldsymbol {X}})^{n}\right]={\frac {n!\,\Gamma (\alpha _{0})}{\Gamma (\alpha _{0}+n)}}\sum {\frac {{t_{1}}^{k_{1}}\cdots {t_{K}}^{k_{K}}}{k_{1}!\cdots k_{K}!}}\prod _{i=1}^{K}{\frac {\Gamma (\alpha _{i}+k_{i})}{\Gamma (\alpha _{i})}}={\frac {n!\,\Gamma (\alpha _{0})}{\Gamma (\alpha _{0}+n)}}Z_{n}({\boldsymbol {t}}^{\circ 1}\cdot {\boldsymbol {\alpha }},\cdots ,{\boldsymbol {t}}^{\circ n}\cdot {\boldsymbol {\alpha }}),}
where the sum is over non-negative integersk1,…,kK{\displaystyle k_{1},\ldots ,k_{K}}withn=k1+⋯+kK{\displaystyle n=k_{1}+\cdots +k_{K}}, andZn{\displaystyle Z_{n}}is thecycle index polynomialof theSymmetric groupof degreen.
We have the special caseE[t⋅X]=t⋅αα0.{\displaystyle \operatorname {E} \left[{\boldsymbol {t}}\cdot {\boldsymbol {X}}\right]={\frac {{\boldsymbol {t}}\cdot {\boldsymbol {\alpha }}}{\alpha _{0}}}.}
The multivariate analogueE[(t1⋅X)n1⋯(tq⋅X)nq]{\textstyle \operatorname {E} \left[({\boldsymbol {t}}_{1}\cdot {\boldsymbol {X}})^{n_{1}}\cdots ({\boldsymbol {t}}_{q}\cdot {\boldsymbol {X}})^{n_{q}}\right]}for vectorst1,…,tq∈RK{\displaystyle {\boldsymbol {t}}_{1},\dotsc ,{\boldsymbol {t}}_{q}\in \mathbb {R} ^{K}}can be expressed[7]in terms of a color pattern of the exponentsn1,…,nq{\displaystyle n_{1},\dotsc ,n_{q}}in the sense ofPólya enumeration theorem.
Particular cases include the simple computation[8]
E[∏i=1KXiβi]=B(α+β)B(α)=Γ(∑i=1Kαi)Γ[∑i=1K(αi+βi)]×∏i=1KΓ(αi+βi)Γ(αi).{\displaystyle \operatorname {E} \left[\prod _{i=1}^{K}X_{i}^{\beta _{i}}\right]={\frac {B\left({\boldsymbol {\alpha }}+{\boldsymbol {\beta }}\right)}{B\left({\boldsymbol {\alpha }}\right)}}={\frac {\Gamma \left(\sum \limits _{i=1}^{K}\alpha _{i}\right)}{\Gamma \left[\sum \limits _{i=1}^{K}(\alpha _{i}+\beta _{i})\right]}}\times \prod _{i=1}^{K}{\frac {\Gamma (\alpha _{i}+\beta _{i})}{\Gamma (\alpha _{i})}}.}
Themodeof the distribution is[9]the vector(x1, ...,xK)with
xi=αi−1α0−K,αi>1.{\displaystyle x_{i}={\frac {\alpha _{i}-1}{\alpha _{0}-K}},\qquad \alpha _{i}>1.}
Themarginal distributionsarebeta distributions:[10]
Xi∼Beta(αi,α0−αi).{\displaystyle X_{i}\sim \operatorname {Beta} (\alpha _{i},\alpha _{0}-\alpha _{i}).}
The Dirichlet distribution is theconjugate priordistribution of thecategorical distribution(a genericdiscrete probability distributionwith a given number of possible outcomes) andmultinomial distribution(the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and theprior distributionof the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then theposterior distributionof the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.
Formally, this can be expressed as follows. Given a model
α=(α1,…,αK)=concentration hyperparameterp∣α=(p1,…,pK)∼Dir(K,α)X∣p=(x1,…,xK)∼Cat(K,p){\displaystyle {\begin{array}{rcccl}{\boldsymbol {\alpha }}&=&\left(\alpha _{1},\ldots ,\alpha _{K}\right)&=&{\text{concentration hyperparameter}}\\\mathbf {p} \mid {\boldsymbol {\alpha }}&=&\left(p_{1},\ldots ,p_{K}\right)&\sim &\operatorname {Dir} (K,{\boldsymbol {\alpha }})\\\mathbb {X} \mid \mathbf {p} &=&\left(\mathbf {x} _{1},\ldots ,\mathbf {x} _{K}\right)&\sim &\operatorname {Cat} (K,\mathbf {p} )\end{array}}}
then the following holds:
c=(c1,…,cK)=number of occurrences of categoryip∣X,α∼Dir(K,c+α)=Dir(K,c1+α1,…,cK+αK){\displaystyle {\begin{array}{rcccl}\mathbf {c} &=&\left(c_{1},\ldots ,c_{K}\right)&=&{\text{number of occurrences of category }}i\\\mathbf {p} \mid \mathbb {X} ,{\boldsymbol {\alpha }}&\sim &\operatorname {Dir} (K,\mathbf {c} +{\boldsymbol {\alpha }})&=&\operatorname {Dir} \left(K,c_{1}+\alpha _{1},\ldots ,c_{K}+\alpha _{K}\right)\end{array}}}
This relationship is used inBayesian statisticsto estimate the underlying parameterpof acategorical distributiongiven a collection ofNsamples. Intuitively, we can view thehyperpriorvectorαaspseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vectorc) in order to derive the posterior distribution.
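Conjugacy makes the posterior update a one-liner: add the observed counts to the prior pseudocounts. A sketch (the prior and count values are made up):

```python
def dirichlet_posterior(alpha, counts):
    # Posterior concentration = prior pseudocounts + observed category counts.
    return [a + c for a, c in zip(alpha, counts)]

prior = [1.0, 1.0, 1.0]    # flat Dirichlet prior over 3 categories
counts = [10, 2, 0]        # observed counts c_1..c_3
posterior = dirichlet_posterior(prior, counts)
print(posterior)  # [11.0, 3.0, 1.0]
```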
In Bayesianmixture modelsand otherhierarchical Bayesian modelswith mixture components, Dirichlet distributions are commonly used as the prior distributions for thecategorical variablesappearing in the models. See the section onapplicationsbelow for more information.
In a model where a Dirichlet prior distribution is placed over a set ofcategorical-valuedobservations, themarginaljoint distributionof the observations (i.e. the joint distribution of the observations, with the prior parametermarginalized out) is aDirichlet-multinomial distribution. This distribution plays an important role inhierarchical Bayesian models, because when doinginferenceover such models using methods such asGibbs samplingorvariational Bayes, Dirichlet prior distributions are often marginalized out. See thearticle on this distributionfor more details.
IfXis aDir(α){\displaystyle \operatorname {Dir} ({\boldsymbol {\alpha }})}random variable, thedifferential entropyofX(innat units) is[11]
h(X)=E[−lnf(X)]=lnB(α)+(α0−K)ψ(α0)−∑j=1K(αj−1)ψ(αj){\displaystyle h({\boldsymbol {X}})=\operatorname {E} [-\ln f({\boldsymbol {X}})]=\ln \operatorname {B} ({\boldsymbol {\alpha }})+(\alpha _{0}-K)\psi (\alpha _{0})-\sum _{j=1}^{K}(\alpha _{j}-1)\psi (\alpha _{j})}
whereψ{\displaystyle \psi }is thedigamma function.
The following formula forE[ln(Xi)]{\displaystyle \operatorname {E} [\ln(X_{i})]}can be used to derive the differentialentropyabove. Since the functionsln(Xi){\displaystyle \ln(X_{i})}are the sufficient statistics of the Dirichlet distribution, theexponential family differential identitiescan be used to get an analytic expression for the expectation ofln(Xi){\displaystyle \ln(X_{i})}(see equation (2.62) in[12]) and its associated covariance matrix:
E[ln(Xi)]=ψ(αi)−ψ(α0){\displaystyle \operatorname {E} [\ln(X_{i})]=\psi (\alpha _{i})-\psi (\alpha _{0})}
and
Cov[ln(Xi),ln(Xj)]=ψ′(αi)δij−ψ′(α0){\displaystyle \operatorname {Cov} [\ln(X_{i}),\ln(X_{j})]=\psi '(\alpha _{i})\delta _{ij}-\psi '(\alpha _{0})}
whereψ{\displaystyle \psi }is thedigamma function,ψ′{\displaystyle \psi '}is thetrigamma function, andδij{\displaystyle \delta _{ij}}is theKronecker delta.
The spectrum ofRényi informationfor values other thanλ=1{\displaystyle \lambda =1}is given by[13]
FR(λ)=(1−λ)−1(−λlogB(α)+∑i=1KlogΓ(λ(αi−1)+1)−logΓ(λ(α0−K)+K)){\displaystyle F_{R}(\lambda )=(1-\lambda )^{-1}\left(-\lambda \log \mathrm {B} ({\boldsymbol {\alpha }})+\sum _{i=1}^{K}\log \Gamma (\lambda (\alpha _{i}-1)+1)-\log \Gamma (\lambda (\alpha _{0}-K)+K)\right)}
and the information entropy is the limit asλ{\displaystyle \lambda }goes to 1.
Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vectorZwith probability-mass distributionX, i.e.,P(Zi=1,Zj≠i=0|X)=Xi{\displaystyle P(Z_{i}=1,Z_{j\neq i}=0|{\boldsymbol {X}})=X_{i}}. The conditionalinformation entropyofZ, givenXis
S(X)=H(Z|X)=EZ[−logP(Z|X)]=∑i=1K−XilogXi{\displaystyle S({\boldsymbol {X}})=H({\boldsymbol {Z}}|{\boldsymbol {X}})=\operatorname {E} _{\boldsymbol {Z}}[-\log P({\boldsymbol {Z}}|{\boldsymbol {X}})]=\sum _{i=1}^{K}-X_{i}\log X_{i}}
This function ofXis a scalar random variable. IfXhas a symmetric Dirichlet distribution with allαi=α{\displaystyle \alpha _{i}=\alpha }, the expected value of the entropy (innat units) is[14]
E[S(X)]=∑i=1KE[−XilnXi]=ψ(Kα+1)−ψ(α+1){\displaystyle \operatorname {E} [S({\boldsymbol {X}})]=\sum _{i=1}^{K}\operatorname {E} [-X_{i}\ln X_{i}]=\psi (K\alpha +1)-\psi (\alpha +1)}
If
X=(X1,…,XK)∼Dir(α1,…,αK){\displaystyle X=(X_{1},\ldots ,X_{K})\sim \operatorname {Dir} (\alpha _{1},\ldots ,\alpha _{K})}
then, if the random variables with subscriptsiandjare dropped from the vector and replaced by their sum,
X′=(X1,…,Xi+Xj,…,XK)∼Dir(α1,…,αi+αj,…,αK).{\displaystyle X'=(X_{1},\ldots ,X_{i}+X_{j},\ldots ,X_{K})\sim \operatorname {Dir} (\alpha _{1},\ldots ,\alpha _{i}+\alpha _{j},\ldots ,\alpha _{K}).}
This aggregation property may be used to derive the marginal distribution ofXi{\displaystyle X_{i}}mentioned above.
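The aggregation property can be checked empirically: summing the first two components of Dir(1, 2, 3) samples should reproduce the corresponding component of Dir(1 + 2, 3), whose mean is 3/6 = 0.5. A Monte Carlo sketch (seed and sample size are arbitrary):

```python
import random

def sample_dirichlet(alpha, rng):
    # Normalized independent Gamma(alpha_i, 1) draws give a Dir(alpha) sample.
    ys = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(ys)
    return [y / total for y in ys]

rng = random.Random(42)
N = 20000
acc = 0.0
for _ in range(N):
    x = sample_dirichlet([1.0, 2.0, 3.0], rng)
    acc += x[0] + x[1]
mean = acc / N
print(mean)  # should be close to E[X_1 + X_2] = (1 + 2) / 6 = 0.5
```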
IfX=(X1,…,XK)∼Dir(α){\displaystyle X=(X_{1},\ldots ,X_{K})\sim \operatorname {Dir} ({\boldsymbol {\alpha }})}, then the vectorXis said to beneutral[15]in the sense thatXKis independent ofX(−K){\displaystyle X^{(-K)}}[3]where
X(−K)=(X11−XK,X21−XK,…,XK−11−XK),{\displaystyle X^{(-K)}=\left({\frac {X_{1}}{1-X_{K}}},{\frac {X_{2}}{1-X_{K}}},\ldots ,{\frac {X_{K-1}}{1-X_{K}}}\right),}
and similarly for removing any ofX2,…,XK−1{\displaystyle X_{2},\ldots ,X_{K-1}}. Observe that any permutation ofXis also neutral (a property not possessed by samples drawn from ageneralized Dirichlet distribution).[16]
Combining this with the property of aggregation it follows thatXj+ ... +XKis independent of(X1X1+⋯+Xj−1,X2X1+⋯+Xj−1,…,Xj−1X1+⋯+Xj−1){\displaystyle \left({\frac {X_{1}}{X_{1}+\cdots +X_{j-1}}},{\frac {X_{2}}{X_{1}+\cdots +X_{j-1}}},\ldots ,{\frac {X_{j-1}}{X_{1}+\cdots +X_{j-1}}}\right)}. In fact it is true, further, for the Dirichlet distribution, that for3≤j≤K−1{\displaystyle 3\leq j\leq K-1}, the pair(X1+⋯+Xj−1,Xj+⋯+XK){\displaystyle \left(X_{1}+\cdots +X_{j-1},X_{j}+\cdots +X_{K}\right)}, and the two vectors(X1X1+⋯+Xj−1,X2X1+⋯+Xj−1,…,Xj−1X1+⋯+Xj−1){\displaystyle \left({\frac {X_{1}}{X_{1}+\cdots +X_{j-1}}},{\frac {X_{2}}{X_{1}+\cdots +X_{j-1}}},\ldots ,{\frac {X_{j-1}}{X_{1}+\cdots +X_{j-1}}}\right)}and(XjXj+⋯+XK,Xj+1Xj+⋯+XK,…,XKXj+⋯+XK){\displaystyle \left({\frac {X_{j}}{X_{j}+\cdots +X_{K}}},{\frac {X_{j+1}}{X_{j}+\cdots +X_{K}}},\ldots ,{\frac {X_{K}}{X_{j}+\cdots +X_{K}}}\right)}, viewed as triple of normalised random vectors, aremutually independent. The analogous result is true for partition of the indices{1, 2, ...,K}into any other pair of non-singleton subsets.
The characteristic function of the Dirichlet distribution is aconfluentform of theLauricella hypergeometric series. It is given byPhillipsas[17]
CF(s1,…,sK−1)=E(ei(s1X1+⋯+sK−1XK−1))=Ψ[K−1](α1,…,αK−1;α0;is1,…,isK−1){\displaystyle CF\left(s_{1},\ldots ,s_{K-1}\right)=\operatorname {E} \left(e^{i\left(s_{1}X_{1}+\cdots +s_{K-1}X_{K-1}\right)}\right)=\Psi ^{\left[K-1\right]}(\alpha _{1},\ldots ,\alpha _{K-1};\alpha _{0};is_{1},\ldots ,is_{K-1})}
where
Ψ[m](a1,…,am;c;z1,…zm)=∑(a1)k1⋯(am)kmz1k1⋯zmkm(c)kk1!⋯km!.{\displaystyle \Psi ^{[m]}(a_{1},\ldots ,a_{m};c;z_{1},\ldots z_{m})=\sum {\frac {(a_{1})_{k_{1}}\cdots (a_{m})_{k_{m}}\,z_{1}^{k_{1}}\cdots z_{m}^{k_{m}}}{(c)_{k}\,k_{1}!\cdots k_{m}!}}.}
The sum is over non-negative integersk1,…,km{\displaystyle k_{1},\ldots ,k_{m}}andk=k1+⋯+km{\displaystyle k=k_{1}+\cdots +k_{m}}. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of acomplex path integral:
Ψ[m]=Γ(c)2πi∫Letta1+⋯+am−c∏j=1m(t−zj)−ajdt{\displaystyle \Psi ^{[m]}={\frac {\Gamma (c)}{2\pi i}}\int _{L}e^{t}\,t^{a_{1}+\cdots +a_{m}-c}\,\prod _{j=1}^{m}(t-z_{j})^{-a_{j}}\,dt}
whereLdenotes any path in the complex plane originating at−∞{\displaystyle -\infty }, encircling in the positive direction all the singularities of the integrand and returning to−∞{\displaystyle -\infty }.
Probability density functionf(x1,…,xK−1;α1,…,αK){\displaystyle f\left(x_{1},\ldots ,x_{K-1};\alpha _{1},\ldots ,\alpha _{K}\right)}plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.[18]
Another inequality relates the moment-generating function of the Dirichlet distribution to the convex conjugate of the scaled reversed Kullback-Leibler divergence:[19]
logE(exp∑i=1KsiXi)≤supp∑i=1K(pisi−αilog(αiα0pi)),{\displaystyle \log \operatorname {E} \left(\exp {\sum _{i=1}^{K}s_{i}X_{i}}\right)\leq \sup _{p}\sum _{i=1}^{K}\left(p_{i}s_{i}-\alpha _{i}\log \left({\frac {\alpha _{i}}{\alpha _{0}p_{i}}}\right)\right),}where the supremum is taken overpspanning the(K− 1)-simplex.
ForKindependently distributedGamma distributions:
Y1∼Gamma(α1,θ),…,YK∼Gamma(αK,θ){\displaystyle Y_{1}\sim \operatorname {Gamma} (\alpha _{1},\theta ),\ldots ,Y_{K}\sim \operatorname {Gamma} (\alpha _{K},\theta )}
we have:[20]: 402
V=∑i=1KYi∼Gamma(α0,θ),{\displaystyle V=\sum _{i=1}^{K}Y_{i}\sim \operatorname {Gamma} \left(\alpha _{0},\theta \right),}X=(X1,…,XK)=(Y1V,…,YKV)∼Dir(α1,…,αK).{\displaystyle X=(X_{1},\ldots ,X_{K})=\left({\frac {Y_{1}}{V}},\ldots ,{\frac {Y_{K}}{V}}\right)\sim \operatorname {Dir} \left(\alpha _{1},\ldots ,\alpha _{K}\right).}
Although theXis are not independent from one another, they can be seen to be generated from a set ofKindependentgammarandom variables.[20]: 594Unfortunately, since the sumVis lost in formingX(in fact it can be shown thatVis stochastically independent ofX), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.
Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior.
The conjugate prior is of the form:[21]
CD(α∣v,η)∝(1B(α))ηexp(−∑kvkαk).{\displaystyle \operatorname {CD} ({\boldsymbol {\alpha }}\mid {\boldsymbol {v}},\eta )\propto \left({\frac {1}{\operatorname {B} ({\boldsymbol {\alpha }})}}\right)^{\eta }\exp \left(-\sum _{k}v_{k}\alpha _{k}\right).}
Herev{\displaystyle {\boldsymbol {v}}}is aK-dimensional real vector andη{\displaystyle \eta }is a scalar parameter. The domain of(v,η){\displaystyle ({\boldsymbol {v}},\eta )}is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:[22]
∀kvk>0andη>−1and(η≤0or∑kexp−vkη<1){\displaystyle \forall k\;\;v_{k}>0\;\;\;\;{\text{ and }}\;\;\;\;\eta >-1\;\;\;\;{\text{ and }}\;\;\;\;(\eta \leq 0\;\;\;\;{\text{ or }}\;\;\;\;\sum _{k}\exp -{\frac {v_{k}}{\eta }}<1)}
The conjugation property can be expressed as
In the published literature there is no practical algorithm to efficiently generate samples fromCD(α∣v,η){\displaystyle \operatorname {CD} ({\boldsymbol {\alpha }}\mid {\boldsymbol {v}},\eta )}.
Dirichlet distributions are most commonly used as theprior distributionofcategorical variablesormultinomial variablesin Bayesianmixture modelsand otherhierarchical Bayesian models. (In many fields, such as innatural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as whenBernoulli distributionsandbinomial distributionsare commonly conflated.)
Inference over hierarchical Bayesian models is often done usingGibbs sampling, and in such a case, instances of the Dirichlet distribution are typicallymarginalized outof the model by integrating out the Dirichletrandom variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes aDirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (theconcentration parameters). One of the reasons for doing this is that Gibbs sampling of theDirichlet-multinomial distributionis extremely easy; see that article for more information.
Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value α to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of the Dirichlet distribution is around its center. With a value much less than 1, the mass will be highly concentrated in a few components and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.
One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) intoKpieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall thatα0=∑i=1Kαi.{\displaystyle \alpha _{0}=\sum _{i=1}^{K}\alpha _{i}.}Theαi/α0{\displaystyle \alpha _{i}/\alpha _{0}}values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely withα0{\displaystyle \alpha _{0}}.
Consider an urn containing balls ofKdifferent colors. Initially, the urn containsα1balls of color 1,α2balls of color 2, and so on. Now performNdraws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit asNapproaches infinity, the proportions of different colored balls in the urn will be distributed asDir(α1, ...,αK).[23]
For a formal proof, note that the proportions of the different colored balls form a bounded[0,1]K-valuedmartingale, hence by themartingale convergence theorem, these proportions convergealmost surelyandin meanto a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixedmomentsagree.
Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
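The urn scheme is straightforward to simulate; a sketch with integer ball counts (the initial counts, seed, and number of draws are arbitrary):

```python
import random

def polya_urn(initial_counts, draws, rng):
    # initial_counts: integer ball counts alpha_1..alpha_K.
    counts = list(initial_counts)
    for _ in range(draws):
        r = rng.randrange(sum(counts))  # pick a ball uniformly at random
        for k, c in enumerate(counts):
            if r < c:
                counts[k] += 1          # return it plus one more of the same color
                break
            r -= c
    total = sum(counts)
    return [c / total for c in counts]

rng = random.Random(1)
proportions = polya_urn([1, 1, 1], 10000, rng)
print(proportions)  # approaches a Dir(1, 1, 1) draw as the number of draws grows
```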
With a source of Gamma-distributed random variates, one can easily sample a random vectorx=(x1,…,xK){\displaystyle x=(x_{1},\ldots ,x_{K})}from theK-dimensional Dirichlet distribution with parameters(α1,…,αK){\displaystyle (\alpha _{1},\ldots ,\alpha _{K})}. First, drawKindependent random samplesy1,…,yK{\displaystyle y_{1},\ldots ,y_{K}}fromGamma distributionseach with density
Gamma(αi,1)=yiαi−1e−yiΓ(αi),{\displaystyle \operatorname {Gamma} (\alpha _{i},1)={\frac {y_{i}^{\alpha _{i}-1}\;e^{-y_{i}}}{\Gamma (\alpha _{i})}},\!}
and then set
xi=yi∑j=1Kyj.{\displaystyle x_{i}={\frac {y_{i}}{\sum _{j=1}^{K}y_{j}}}.}
The joint distribution of the independently sampled gamma variates,{yi}{\displaystyle \{y_{i}\}}, is given by the product:
e−∑iyi∏i=1Kyiαi−1Γ(αi){\displaystyle e^{-\sum _{i}y_{i}}\prod _{i=1}^{K}{\frac {y_{i}^{\alpha _{i}-1}}{\Gamma (\alpha _{i})}}}
Next, one uses a change of variables, parametrising{yi}{\displaystyle \{y_{i}\}}in terms ofy1,y2,…,yK−1{\displaystyle y_{1},y_{2},\ldots ,y_{K-1}}and∑i=1Kyi{\displaystyle \sum _{i=1}^{K}y_{i}}, and performs a change of variables fromy→x{\displaystyle y\to x}such thatx¯=∑i=1Kyi,x1=y1x¯,x2=y2x¯,…,xK−1=yK−1x¯{\displaystyle {\bar {x}}=\textstyle \sum _{i=1}^{K}y_{i},x_{1}={\frac {y_{1}}{\bar {x}}},x_{2}={\frac {y_{2}}{\bar {x}}},\ldots ,x_{K-1}={\frac {y_{K-1}}{\bar {x}}}}. Each of the variables0≤x1,x2,…,xk−1≤1{\displaystyle 0\leq x_{1},x_{2},\ldots ,x_{k-1}\leq 1}and likewise0≤∑i=1K−1xi≤1{\displaystyle 0\leq \textstyle \sum _{i=1}^{K-1}x_{i}\leq 1}. One must then use the change of variables formula,P(x)=P(y(x))|∂y∂x|{\displaystyle P(x)=P(y(x)){\bigg |}{\frac {\partial y}{\partial x}}{\bigg |}}in which|∂y∂x|{\displaystyle {\bigg |}{\frac {\partial y}{\partial x}}{\bigg |}}is the transformation Jacobian. Writing y explicitly as a function of x, one obtainsy1=x¯x1,y2=x¯x2…yK−1=x¯xK−1,yK=x¯(1−∑i=1K−1xi){\displaystyle y_{1}={\bar {x}}x_{1},y_{2}={\bar {x}}x_{2}\ldots y_{K-1}={\bar {x}}x_{K-1},y_{K}={\bar {x}}(1-\textstyle \sum _{i=1}^{K-1}x_{i})}The Jacobian now looks like|x¯0…x10x¯…x2⋮⋮⋱⋮−x¯−x¯…1−∑i=1K−1xi|{\displaystyle {\begin{vmatrix}{\bar {x}}&0&\ldots &x_{1}\\0&{\bar {x}}&\ldots &x_{2}\\\vdots &\vdots &\ddots &\vdots \\-{\bar {x}}&-{\bar {x}}&\ldots &1-\sum _{i=1}^{K-1}x_{i}\end{vmatrix}}}
The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain
|x¯0…x10x¯…x2⋮⋮⋱⋮00…1|{\displaystyle {\begin{vmatrix}{\bar {x}}&0&\ldots &x_{1}\\0&{\bar {x}}&\ldots &x_{2}\\\vdots &\vdots &\ddots &\vdots \\0&0&\ldots &1\end{vmatrix}}}
which can be expanded about the bottom row to obtain the determinant valuex¯K−1{\displaystyle {\bar {x}}^{K-1}}. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:
[∏i=1K−1(x¯xi)αi−1][x¯(1−∑i=1K−1xi)]αK−1∏i=1KΓ(αi)x¯K−1e−x¯=Γ(α¯)[∏i=1K−1(xi)αi−1][1−∑i=1K−1xi]αK−1∏i=1KΓ(αi)×x¯α¯−1e−x¯Γ(α¯){\displaystyle {\begin{aligned}&{\frac {\left[\prod _{i=1}^{K-1}({\bar {x}}x_{i})^{\alpha _{i}-1}\right]\left[{\bar {x}}(1-\sum _{i=1}^{K-1}x_{i})\right]^{\alpha _{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}{\bar {x}}^{K-1}e^{-{\bar {x}}}\\=&{\frac {\Gamma ({\bar {\alpha }})\left[\prod _{i=1}^{K-1}(x_{i})^{\alpha _{i}-1}\right]\left[1-\sum _{i=1}^{K-1}x_{i}\right]^{\alpha _{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\times {\frac {{\bar {x}}^{{\bar {\alpha }}-1}e^{-{\bar {x}}}}{\Gamma ({\bar {\alpha }})}}\end{aligned}}}whereα¯=∑i=1Kαi{\displaystyle {\bar {\alpha }}=\textstyle \sum _{i=1}^{K}\alpha _{i}}. The right-hand side can be recognized as the product of a Dirichlet pdf for thexi{\displaystyle x_{i}}and a gamma pdf forx¯{\displaystyle {\bar {x}}}. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:x1,x2,…,xK−1∼(1−∑i=1K−1xi)αK−1∏i=1K−1xiαi−1B(α){\displaystyle x_{1},x_{2},\ldots ,x_{K-1}\sim {\frac {(1-\sum _{i=1}^{K-1}x_{i})^{\alpha _{K}-1}\prod _{i=1}^{K-1}x_{i}^{\alpha _{i}-1}}{B({\boldsymbol {\alpha }})}}}
which is equivalent to
∏i=1Kxiαi−1B(α){\displaystyle {\frac {\prod _{i=1}^{K}x_{i}^{\alpha _{i}-1}}{B({\boldsymbol {\alpha }})}}}with support∑i=1Kxi=1{\displaystyle \sum _{i=1}^{K}x_{i}=1}
Below is example Python code to draw the sample:
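A minimal standard-library sketch of the method just described (draw K independent Gamma(αi, 1) variates and normalize by their sum; the function name `dirichlet_sample` is illustrative):

```python
import random

def dirichlet_sample(alphas):
    """Draw one sample from Dirichlet(alphas) via independent gamma variates."""
    # y_i ~ Gamma(alpha_i, 1); dividing by the sum yields a Dirichlet draw.
    ys = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(ys)
    return [y / total for y in ys]

sample = dirichlet_sample([0.5, 1.0, 2.5])
```

Because each gamma variate uses scale (equivalently rate) 1, the result does not depend on which gamma parameterization the library uses.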
This formulation is correct whether the gamma distributions are parameterized by shape/scale or by shape/rate, since the two parameterizations coincide when the scale (equivalently, the rate) equals 1.
A less efficient algorithm[24]relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulatex1{\displaystyle x_{1}}from
Beta(α1,∑i=2Kαi){\displaystyle {\textrm {Beta}}\left(\alpha _{1},\sum _{i=2}^{K}\alpha _{i}\right)}
Then simulatex2,…,xK−1{\displaystyle x_{2},\ldots ,x_{K-1}}in order, as follows. Forj=2,…,K−1{\displaystyle j=2,\ldots ,K-1}, simulateϕj{\displaystyle \phi _{j}}from
Beta(αj,∑i=j+1Kαi),{\displaystyle {\textrm {Beta}}\left(\alpha _{j},\sum _{i=j+1}^{K}\alpha _{i}\right),}
and let
xj=(1−∑i=1j−1xi)ϕj.{\displaystyle x_{j}=\left(1-\sum _{i=1}^{j-1}x_{i}\right)\phi _{j}.}
Finally, set
xK=1−∑i=1K−1xi.{\displaystyle x_{K}=1-\sum _{i=1}^{K-1}x_{i}.}
This iterative procedure corresponds closely to the "string cutting" intuition described above.
Below is example Python code to draw the sample:
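One possible standard-library implementation of this sequential beta ("string cutting") scheme (the function name is illustrative):

```python
import random

def dirichlet_sample_beta(alphas):
    """Draw one Dirichlet sample via the sequential beta-marginal method."""
    K = len(alphas)
    xs = []
    for j in range(K - 1):
        # phi_j ~ Beta(alpha_j, alpha_{j+1} + ... + alpha_K)
        phi = random.betavariate(alphas[j], sum(alphas[j + 1:]))
        # x_j is the fraction phi_j of the remaining length of the string.
        xs.append((1.0 - sum(xs)) * phi)
    xs.append(1.0 - sum(xs))  # x_K is the leftover piece
    return xs

sample = dirichlet_sample_beta([0.5, 1.0, 2.5])
```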
Whenα1= ... =αK= 1, a sample from the distribution can be found by randomly drawing a set ofK− 1values independently and uniformly from the interval[0, 1], adding the values0and1to the set to make it haveK+ 1values, sorting the set, and computing the difference between each pair of order-adjacent values, to givex1, ...,xK.
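This "spacings" construction for the all-ones case can be sketched as follows (function name illustrative):

```python
import random

def dirichlet_uniform(K):
    """Dirichlet(1, ..., 1): spacings of K - 1 sorted uniform draws on [0, 1]."""
    cuts = sorted([0.0, 1.0] + [random.random() for _ in range(K - 1)])
    # Differences between order-adjacent cut points give x_1, ..., x_K.
    return [b - a for a, b in zip(cuts, cuts[1:])]

sample = dirichlet_uniform(5)
```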
Whenα1= ... =αK= 1/2, a sample from the distribution can be found by randomly drawingKvalues independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to givex1, ...,xK.
A point(x1, ...,xK)can be drawn uniformly at random from the (K−1)-dimensional unit hypersphere (which is the surface of aK-dimensionalhyperball) via a similar procedure. Randomly drawKvalues independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares.
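A sketch of this normalization procedure (function name illustrative); note that squaring the coordinates of the returned point recovers the Dirichlet(1/2, ..., 1/2) construction of the previous paragraph:

```python
import math
import random

def uniform_on_sphere(K):
    """Uniform point on the unit (K-1)-sphere: normalize K standard normals."""
    zs = [random.gauss(0.0, 1.0) for _ in range(K)]
    norm = math.sqrt(sum(z * z for z in zs))
    return [z / norm for z in zs]

point = uniform_on_sphere(4)
```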
|
https://en.wikipedia.org/wiki/Dirichlet_distribution
|
Inphysics, apartition functiondescribes thestatisticalproperties of a system inthermodynamic equilibrium.[citation needed]Partition functions arefunctionsof the thermodynamicstate variables, such as thetemperatureandvolume. Most of the aggregatethermodynamicvariables of the system, such as thetotal energy,free energy,entropy, andpressure, can be expressed in terms of the partition function or itsderivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particularstatistical ensemble(which, in turn, corresponds to a particularfree energy). The most common statistical ensembles have named partition functions. Thecanonical partition functionapplies to acanonical ensemble, in which the system is allowed to exchangeheatwith theenvironmentat fixed temperature, volume, andnumber of particles. Thegrand canonical partition functionapplies to agrand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, andchemical potential. Other types of partition functions can be defined for different circumstances; seepartition function (mathematics)for generalizations. The partition function has many physical meanings, as discussed inMeaning and significance.
Initially, let us assume that a thermodynamically large system is inthermal contactwith the environment, with a temperatureT, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called acanonical ensemble. The appropriatemathematical expressionfor the canonical partition function depends on thedegrees of freedomof the system, whether the context isclassical mechanicsorquantum mechanics, and whether the spectrum of states isdiscreteorcontinuous.[citation needed]
For a canonical ensemble that is classical and discrete, the canonical partition function is defined asZ=∑ie−βEi,{\displaystyle Z=\sum _{i}e^{-\beta E_{i}},}where the indexi{\displaystyle i}ranges over the microstates of the system,Ei{\displaystyle E_{i}}is the total energy of the system in microstatei{\displaystyle i}, andβ≡1/(kBT){\displaystyle \beta \equiv 1/(k_{\text{B}}T)}is thethermodynamic beta.
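The discrete sum can be evaluated directly; here is a sketch for a hypothetical three-level system (function names are illustrative, not from any particular library):

```python
import math

def partition_function(energies, beta):
    """Canonical Z = sum_i exp(-beta * E_i) over discrete microstates."""
    return sum(math.exp(-beta * E) for E in energies)

def boltzmann_probs(energies, beta):
    """Microstate probabilities p_i = exp(-beta * E_i) / Z."""
    Z = partition_function(energies, beta)
    return [math.exp(-beta * E) / Z for E in energies]

# Hypothetical three-level system; energies in units where kB*T = 1 at beta = 1.
probs = boltzmann_probs([0.0, 1.0, 2.0], beta=1.0)
```

Lower-energy microstates receive larger Boltzmann factors, and the probabilities sum to one by construction.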
Theexponentialfactore−βEi{\displaystyle e^{-\beta E_{i}}}is otherwise known as theBoltzmann factor.
There are multiple approaches to deriving the partition function. The following derivation follows the more powerful and generalinformation-theoreticJaynesianmaximum entropyapproach.
According to thesecond law of thermodynamics, a system assumes a configuration ofmaximum entropyatthermodynamic equilibrium. We seek a probability distribution of statesρi{\displaystyle \rho _{i}}that maximizes the discreteGibbs entropyS=−kB∑iρilnρi{\displaystyle S=-k_{\text{B}}\sum _{i}\rho _{i}\ln \rho _{i}}subject to two physical constraints: the probabilities must sum to unity,∑iρi=1{\displaystyle \sum _{i}\rho _{i}=1}, and the ensemble average energy must equal a fixed valueU{\displaystyle U}, that is,∑iρiEi=U{\displaystyle \sum _{i}\rho _{i}E_{i}=U}.
Applyingvariational calculuswith constraints (analogous in some sense to the method ofLagrange multipliers), we write the Lagrangian (or Lagrange function)L{\displaystyle {\mathcal {L}}}asL=(−kB∑iρilnρi)+λ1(1−∑iρi)+λ2(U−∑iρiEi).{\displaystyle {\mathcal {L}}=\left(-k_{\text{B}}\sum _{i}\rho _{i}\ln \rho _{i}\right)+\lambda _{1}\left(1-\sum _{i}\rho _{i}\right)+\lambda _{2}\left(U-\sum _{i}\rho _{i}E_{i}\right).}
Varying and extremizingL{\displaystyle {\mathcal {L}}}with respect toρi{\displaystyle \rho _{i}}leads to0≡δL=δ(−∑ikBρilnρi)+δ(λ1−∑iλ1ρi)+δ(λ2U−∑iλ2ρiEi)=∑i[δ(−kBρilnρi)−δ(λ1ρi)−δ(λ2Eiρi)]=∑i[∂∂ρi(−kBρilnρi)δρi−∂∂ρi(λ1ρi)δρi−∂∂ρi(λ2Eiρi)δρi]=∑i[−kBlnρi−kB−λ1−λ2Ei]δρi.{\displaystyle {\begin{aligned}0&\equiv \delta {\mathcal {L}}\\&=\delta {\left(-\sum _{i}k_{\text{B}}\rho _{i}\ln \rho _{i}\right)}+\delta {\left(\lambda _{1}-\sum _{i}\lambda _{1}\rho _{i}\right)}+\delta {\left(\lambda _{2}U-\sum _{i}\lambda _{2}\rho _{i}E_{i}\right)}\\[1ex]&=\sum _{i}\left[\delta {\left(-k_{\text{B}}\rho _{i}\ln \rho _{i}\right)}-\delta {\left(\lambda _{1}\rho _{i}\right)}-\delta {\left(\lambda _{2}E_{i}\rho _{i}\right)}\right]\\&=\sum _{i}\left[{\frac {\partial }{\partial \rho _{i}}}\left(-k_{\text{B}}\rho _{i}\ln \rho _{i}\right)\delta \rho _{i}-{\frac {\partial }{\partial \rho _{i}}}\left(\lambda _{1}\rho _{i}\right)\delta \rho _{i}-{\frac {\partial }{\partial \rho _{i}}}\left(\lambda _{2}E_{i}\rho _{i}\right)\delta \rho _{i}\right]\\[1ex]&=\sum _{i}\left[-k_{\text{B}}\ln \rho _{i}-k_{\text{B}}-\lambda _{1}-\lambda _{2}E_{i}\right]\delta \rho _{i}.\end{aligned}}}
Since this equation should hold for any variationδ(ρi){\displaystyle \delta (\rho _{i})}, it implies that0≡−kBlnρi−kB−λ1−λ2Ei.{\displaystyle 0\equiv -k_{\text{B}}\ln \rho _{i}-k_{\text{B}}-\lambda _{1}-\lambda _{2}E_{i}.}
Isolating forρi{\displaystyle \rho _{i}}yieldsρi=exp(−kB−λ1−λ2EikB).{\displaystyle \rho _{i}=\exp \left({\frac {-k_{\text{B}}-\lambda _{1}-\lambda _{2}E_{i}}{k_{\text{B}}}}\right).}
To obtainλ1{\displaystyle \lambda _{1}}, one substitutes the probability into the first constraint:1=∑iρi=exp(−kB−λ1kB)Z,{\displaystyle {\begin{aligned}1&=\sum _{i}\rho _{i}\\&=\exp \left({\frac {-k_{\text{B}}-\lambda _{1}}{k_{\text{B}}}}\right)Z,\end{aligned}}}whereZ{\displaystyle Z}is a number defined as the canonical ensemble partition function:Z≡∑iexp(−λ2kBEi).{\displaystyle Z\equiv \sum _{i}\exp \left(-{\frac {\lambda _{2}}{k_{\text{B}}}}E_{i}\right).}
Isolating forλ1{\displaystyle \lambda _{1}}yieldsλ1=kBln(Z)−kB{\displaystyle \lambda _{1}=k_{\text{B}}\ln(Z)-k_{\text{B}}}.
Rewritingρi{\displaystyle \rho _{i}}in terms ofZ{\displaystyle Z}givesρi=1Zexp(−λ2kBEi).{\displaystyle \rho _{i}={\frac {1}{Z}}\exp \left(-{\frac {\lambda _{2}}{k_{\text{B}}}}E_{i}\right).}
RewritingS{\displaystyle S}in terms ofZ{\displaystyle Z}givesS=−kB∑iρilnρi=−kB∑iρi(−λ2kBEi−ln(Z))=λ2∑iρiEi+kBln(Z)∑iρi=λ2U+kBln(Z).{\displaystyle {\begin{aligned}S&=-k_{\text{B}}\sum _{i}\rho _{i}\ln \rho _{i}\\&=-k_{\text{B}}\sum _{i}\rho _{i}\left(-{\frac {\lambda _{2}}{k_{\text{B}}}}E_{i}-\ln(Z)\right)\\&=\lambda _{2}\sum _{i}\rho _{i}E_{i}+k_{\text{B}}\ln(Z)\sum _{i}\rho _{i}\\&=\lambda _{2}U+k_{\text{B}}\ln(Z).\end{aligned}}}
To obtainλ2{\displaystyle \lambda _{2}}, we differentiateS{\displaystyle S}with respect to the average energyU{\displaystyle U}and apply thefirst law of thermodynamics,dU=TdS−PdV{\displaystyle dU=TdS-PdV}:dSdU=λ2≡1T.{\displaystyle {\frac {dS}{dU}}=\lambda _{2}\equiv {\frac {1}{T}}.}
(Note thatλ2{\displaystyle \lambda _{2}}andZ{\displaystyle Z}vary withU{\displaystyle U}as well; however, using the chain rule andddλ2ln(Z)=−1kB∑iρiEi=−UkB,{\displaystyle {\frac {d}{d\lambda _{2}}}\ln(Z)=-{\frac {1}{k_{\text{B}}}}\sum _{i}\rho _{i}E_{i}=-{\frac {U}{k_{\text{B}}}},}one can show that the additional contributions to this derivative cancel each other.)
Thus the canonical partition functionZ{\displaystyle Z}becomesZ≡∑ie−βEi,{\displaystyle Z\equiv \sum _{i}e^{-\beta E_{i}},}whereβ≡1/(kBT){\displaystyle \beta \equiv 1/(k_{\text{B}}T)}is defined as thethermodynamic beta. Finally, the probability distributionρi{\displaystyle \rho _{i}}and entropyS{\displaystyle S}are respectivelyρi=1Ze−βEi,S=UT+kBlnZ.{\displaystyle {\begin{aligned}\rho _{i}&={\frac {1}{Z}}e^{-\beta E_{i}},\\S&={\frac {U}{T}}+k_{\text{B}}\ln Z.\end{aligned}}}
Inclassical mechanics, thepositionandmomentumvariables of a particle can vary continuously, so the set of microstates is actuallyuncountable. Inclassicalstatistical mechanics, it is rather inaccurate to express the partition function as asumof discrete terms. In this case we must describe the partition function using anintegralrather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined asZ=1h3∫e−βH(q,p)d3qd3p,{\displaystyle Z={\frac {1}{h^{3}}}\int e^{-\beta H(q,p)}\,d^{3}q\,d^{3}p,}whereH(q,p){\displaystyle H(q,p)}is theHamiltonianof the system (the total energy expressed as a function of the canonical coordinatesq{\displaystyle q}and momentap{\displaystyle p}), andh{\displaystyle h}is a constant with units of action, discussed next.
To make it into a dimensionless quantity, we must divide it byh, which is some quantity with units ofaction(usually taken to be thePlanck constant).
For generalized cases, the partition function ofN{\displaystyle N}particles ind{\displaystyle d}-dimensions is given by
Z=1hNd∫∏i=1Ne−βH(qi,pi)ddqiddpi,{\displaystyle Z={\frac {1}{h^{Nd}}}\int \prod _{i=1}^{N}e^{-\beta {\mathcal {H}}({\textbf {q}}_{i},{\textbf {p}}_{i})}\,d^{d}{\textbf {q}}_{i}\,d^{d}{\textbf {p}}_{i},}
For a gas ofN{\displaystyle N}identical classical non-interacting particles in three dimensions, the partition function isZ=1N!h3N∫exp(−β∑i=1NH(qi,pi))d3q1⋯d3qNd3p1⋯d3pN=ZsingleNN!{\displaystyle Z={\frac {1}{N!h^{3N}}}\int \,\exp \left(-\beta \sum _{i=1}^{N}H({\textbf {q}}_{i},{\textbf {p}}_{i})\right)\;d^{3}q_{1}\cdots d^{3}q_{N}\,d^{3}p_{1}\cdots d^{3}p_{N}={\frac {Z_{\text{single}}^{N}}{N!}}}whereqi{\displaystyle {\textbf {q}}_{i}}andpi{\displaystyle {\textbf {p}}_{i}}denote the position and momentum of thei{\displaystyle i}-th particle, andZsingle=1h3∫e−βH(q,p)d3qd3p{\displaystyle Z_{\text{single}}={\frac {1}{h^{3}}}\int e^{-\beta H({\textbf {q}},{\textbf {p}})}\,d^{3}q\,d^{3}p}is the partition function of a single particle.
The reason for thefactorialfactorN! is discussedbelow. The extra constant factor in the denominator is needed because, unlike the discrete form, the continuous form shown above is notdimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it byh3N(wherehis usually taken to be the Planck constant).
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as thetraceof the Boltzmann factor:Z=tr(e−βH^),{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}wheretr{\displaystyle \operatorname {tr} }denotes the trace over the system's state space andH^{\displaystyle {\hat {H}}}is thequantum Hamiltonian operator.
Thedimensionofe−βH^{\displaystyle e^{-\beta {\hat {H}}}}is the number ofenergy eigenstatesof the system.
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined asZ=1h∫⟨q,p|e−βH^|q,p⟩dqdp,{\displaystyle Z={\frac {1}{h}}\int \left\langle q,p\right\vert e^{-\beta {\hat {H}}}\left\vert q,p\right\rangle \,dq\,dp,}whereh{\displaystyle h}is thePlanck constantand|q,p⟩{\displaystyle \left\vert q,p\right\rangle }is anormalisedGaussian wavepacket(coherent state) centered at positionq{\displaystyle q}and momentump{\displaystyle p}, as elaborated below.
In systems with multiplequantum statesssharing the same energyEs, it is said that theenergy levelsof the system aredegenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed byj) as follows:Z=∑jgje−βEj,{\displaystyle Z=\sum _{j}g_{j}\,e^{-\beta E_{j}},}wheregjis the degeneracy factor, or number of quantum statessthat have the same energy level defined byEj=Es.
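The equivalence of summing over individual microstates and summing over degenerate energy levels can be checked numerically; the spectrum below is a made-up example:

```python
import math

beta = 0.7
# Hypothetical spectrum: microstates listed one by one ...
state_energies = [0.0, 1.0, 1.0, 1.0, 2.0, 2.0]
# ... versus the same spectrum grouped into levels (E_j, degeneracy g_j).
levels = [(0.0, 1), (1.0, 3), (2.0, 2)]

Z_states = sum(math.exp(-beta * E) for E in state_energies)
Z_levels = sum(g * math.exp(-beta * E) for E, g in levels)
```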
The above treatment applies toquantumstatistical mechanics, where a physical system inside afinite-sized boxwill typically have a discrete set of energy eigenstates, which we can use as the statessabove. In quantum mechanics, the partition function can be more formally written as a trace over thestate space(which is independent of the choice ofbasis):Z=tr(e−βH^),{\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),}whereĤis thequantum Hamiltonian operator. The exponential of an operator can be defined using theexponential power series.
The classical form ofZis recovered when the trace is expressed in terms ofcoherent states[1]and when quantum-mechanicaluncertaintiesin the position and momentum of a particle are regarded as negligible. Formally, usingbra–ket notation, one inserts under the trace for each degree of freedom the identity:1=∫|x,p⟩⟨x,p|dxdph,{\displaystyle {\boldsymbol {1}}=\int |x,p\rangle \langle x,p|{\frac {dx\,dp}{h}},}where|x,p⟩is anormalisedGaussian wavepacketcentered at positionxand momentump. ThusZ=∫tr(e−βH^|x,p⟩⟨x,p|)dxdph=∫⟨x,p|e−βH^|x,p⟩dxdph.{\displaystyle Z=\int \operatorname {tr} \left(e^{-\beta {\hat {H}}}|x,p\rangle \langle x,p|\right){\frac {dx\,dp}{h}}=\int \langle x,p|e^{-\beta {\hat {H}}}|x,p\rangle {\frac {dx\,dp}{h}}.}A coherent state is an approximate eigenstate of both operatorsx^{\displaystyle {\hat {x}}}andp^{\displaystyle {\hat {p}}}, hence also of the HamiltonianĤ, with errors of the size of the uncertainties. IfΔxandΔpcan be regarded as zero, the action ofĤreduces to multiplication by the classical Hamiltonian, andZreduces to the classical configuration integral.
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a systemSembedded into aheat bathB. Let the totalenergyof both systems beE. Letpidenote theprobabilitythat the systemSis in a particularmicrostate,i, with energyEi. According to thefundamental postulate of statistical mechanics(which states that all attainable microstates of a system are equally probable), the probabilitypiwill be inversely proportional to the number of microstates of the totalclosed system(S,B) in whichSis in microstateiwith energyEi. Equivalently,piwill be proportional to the number of microstates of the heat bathBwith energyE−Ei:pi=ΩB(E−Ei)Ω(S,B)(E).{\displaystyle p_{i}={\frac {\Omega _{B}(E-E_{i})}{\Omega _{(S,B)}(E)}}.}
Assuming that the heat bath's internal energy is much larger than the energy ofS(E≫Ei), we canTaylor-expandΩB{\displaystyle \Omega _{B}}to first order inEiand use the thermodynamic relation∂SB/∂E=1/T{\displaystyle \partial S_{B}/\partial E=1/T}, where hereSB{\displaystyle S_{B}},T{\displaystyle T}are the entropy and temperature of the bath respectively:klnpi=klnΩB(E−Ei)−klnΩ(S,B)(E)≈−∂(klnΩB(E))∂EEi+klnΩB(E)−klnΩ(S,B)(E)≈−∂SB∂EEi+klnΩB(E)Ω(S,B)(E)≈−EiT+klnΩB(E)Ω(S,B)(E){\displaystyle {\begin{aligned}k\ln p_{i}&=k\ln \Omega _{B}(E-E_{i})-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial {\big (}k\ln \Omega _{B}(E){\big )}}{\partial E}}E_{i}+k\ln \Omega _{B}(E)-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial S_{B}}{\partial E}}E_{i}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\\[5pt]&\approx -{\frac {E_{i}}{T}}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\end{aligned}}}
Thuspi∝e−Ei/(kT)=e−βEi.{\displaystyle p_{i}\propto e^{-E_{i}/(kT)}=e^{-\beta E_{i}}.}
Since the total probability to find the system insomemicrostate (the sum of allpi) must be equal to 1, we know that the constant of proportionality must be thenormalization constant, and so, we can define the partition function to be this constant:Z=∑ie−βEi=Ω(S,B)(E)ΩB(E).{\displaystyle Z=\sum _{i}e^{-\beta E_{i}}={\frac {\Omega _{(S,B)}(E)}{\Omega _{B}(E)}}.}
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply theexpected value, orensemble averagefor the energy, which is the sum of the microstate energies weighted by their probabilities:⟨E⟩=∑sEsPs=1Z∑sEse−βEs=−1Z∂∂βZ(β,E1,E2,…)=−∂lnZ∂β{\displaystyle {\begin{aligned}\langle E\rangle =\sum _{s}E_{s}P_{s}&={\frac {1}{Z}}\sum _{s}E_{s}e^{-\beta E_{s}}\\[1ex]&=-{\frac {1}{Z}}{\frac {\partial }{\partial \beta }}Z(\beta ,E_{1},E_{2},\dots )\\[1ex]&=-{\frac {\partial \ln Z}{\partial \beta }}\end{aligned}}}or, equivalently,⟨E⟩=kBT2∂lnZ∂T.{\displaystyle \langle E\rangle =k_{\text{B}}T^{2}{\frac {\partial \ln Z}{\partial T}}.}
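The identity ⟨E⟩ = −∂lnZ/∂β can be verified numerically with a central finite difference; the energy spectrum below is a made-up example:

```python
import math

def lnZ(beta, energies):
    """Log of the canonical partition function for a discrete spectrum."""
    return math.log(sum(math.exp(-beta * E) for E in energies))

energies = [0.0, 0.5, 1.3, 2.0]  # hypothetical spectrum
beta = 1.2

# Direct ensemble average: sum_s E_s * p_s.
Z = math.exp(lnZ(beta, energies))
E_avg = sum(E * math.exp(-beta * E) for E in energies) / Z

# Central finite difference of -d(ln Z)/d(beta).
h = 1e-6
E_fd = -(lnZ(beta + h, energies) - lnZ(beta - h, energies)) / (2 * h)
```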
Incidentally, one should note that if the microstate energies depend on a parameter λ in the mannerEs=Es(0)+λAsfor alls{\displaystyle E_{s}=E_{s}^{(0)}+\lambda A_{s}\qquad {\text{for all}}\;s}then the expected value ofAis⟨A⟩=∑sAsPs=−1β∂∂λlnZ(β,λ).{\displaystyle \langle A\rangle =\sum _{s}A_{s}P_{s}=-{\frac {1}{\beta }}{\frac {\partial }{\partial \lambda }}\ln Z(\beta ,\lambda ).}
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then setλto zero in the final expression. This is analogous to thesource fieldmethod used in thepath integral formulationofquantum field theory.[citation needed]
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is⟨E⟩=−∂lnZ∂β.{\displaystyle \langle E\rangle =-{\frac {\partial \ln Z}{\partial \beta }}.}
Thevariancein the energy (or "energy fluctuation") is⟨(ΔE)2⟩≡⟨(E−⟨E⟩)2⟩=⟨E2⟩−⟨E⟩2=∂2lnZ∂β2.{\displaystyle \left\langle (\Delta E)^{2}\right\rangle \equiv \left\langle (E-\langle E\rangle )^{2}\right\rangle =\left\langle E^{2}\right\rangle -{\left\langle E\right\rangle }^{2}={\frac {\partial ^{2}\ln Z}{\partial \beta ^{2}}}.}
Theheat capacityisCv=∂⟨E⟩∂T=1kBT2⟨(ΔE)2⟩.{\displaystyle C_{v}={\frac {\partial \langle E\rangle }{\partial T}}={\frac {1}{k_{\text{B}}T^{2}}}\left\langle (\Delta E)^{2}\right\rangle .}
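The fluctuation relation for the heat capacity can likewise be checked numerically (working in units where Boltzmann's constant is 1; the spectrum is a made-up example):

```python
import math

kB = 1.0  # units with Boltzmann's constant set to 1
energies = [0.0, 1.0, 3.0]  # hypothetical spectrum

def averages(T):
    """Return (<E>, <(Delta E)^2>) for the canonical ensemble at temperature T."""
    beta = 1.0 / (kB * T)
    ws = [math.exp(-beta * E) for E in energies]
    Z = sum(ws)
    ps = [w / Z for w in ws]
    E1 = sum(p * E for p, E in zip(ps, energies))
    E2 = sum(p * E * E for p, E in zip(ps, energies))
    return E1, E2 - E1 ** 2

T = 2.0
E_avg, varE = averages(T)
h = 1e-5
Cv_direct = (averages(T + h)[0] - averages(T - h)[0]) / (2 * h)  # d<E>/dT
Cv_fluct = varE / (kB * T ** 2)                                  # fluctuation formula
```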
In general, consider theextensive variableXandintensive variableYwhereXandYform a pair ofconjugate variables. In ensembles whereYis fixed (andXis allowed to fluctuate), then the average value ofXwill be:⟨X⟩=±∂lnZ∂(βY).{\displaystyle \langle X\rangle =\pm {\frac {\partial \ln Z}{\partial (\beta Y)}}.}
The sign will depend on the specific definitions of the variablesXandY. An example would beX= volume andY= pressure. Additionally, the variance inXwill be⟨(ΔX)2⟩≡⟨(X−⟨X⟩)2⟩=∂⟨X⟩∂βY=∂2lnZ∂(βY)2.{\displaystyle \left\langle (\Delta X)^{2}\right\rangle \equiv \left\langle (X-\langle X\rangle )^{2}\right\rangle ={\frac {\partial \langle X\rangle }{\partial \beta Y}}={\frac {\partial ^{2}\ln Z}{\partial (\beta Y)^{2}}}.}
In the special case ofentropy, entropy is given byS≡−kB∑sPslnPs=kB(lnZ+β⟨E⟩)=∂∂T(kBTlnZ)=−∂A∂T{\displaystyle S\equiv -k_{\text{B}}\sum _{s}P_{s}\ln P_{s}=k_{\text{B}}(\ln Z+\beta \langle E\rangle )={\frac {\partial }{\partial T}}(k_{\text{B}}T\ln Z)=-{\frac {\partial A}{\partial T}}}whereAis theHelmholtz free energydefined asA=U−TS, whereU= ⟨E⟩is the total energy andSis theentropy, so thatA=⟨E⟩−TS=−kBTlnZ.{\displaystyle A=\langle E\rangle -TS=-k_{\text{B}}T\ln Z.}
Furthermore, the heat capacity can be expressed asCv=T∂S∂T=−T∂2A∂T2.{\displaystyle C_{\text{v}}=T{\frac {\partial S}{\partial T}}=-T{\frac {\partial ^{2}A}{\partial T^{2}}}.}
Suppose a system is subdivided intoNsub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems areζ1,ζ2, ...,ζN, then the partition function of the entire system is theproductof the individual partition functions:Z=∏j=1Nζj.{\displaystyle Z=\prod _{j=1}^{N}\zeta _{j}.}
If the sub-systems have the same physical properties, then their partition functions are equal,ζ1=ζ2= ... =ζ, in which caseZ=ζN.{\displaystyle Z=\zeta ^{N}.}
However, there is a well-known exception to this rule. If the sub-systems are actuallyidentical particles, in thequantum mechanicalsense that they are impossible to distinguish even in principle, the total partition function must be divided by aN! (Nfactorial):Z=ζNN!.{\displaystyle Z={\frac {\zeta ^{N}}{N!}}.}
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as theGibbs paradox.
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperatureTand the microstate energiesE1,E2,E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probabilityPsthat the system occupies microstatesisPs=1Ze−βEs.{\displaystyle P_{s}={\frac {1}{Z}}e^{-\beta E_{s}}.}
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it doesnotdepend ons), ensuring that the probabilities sum up to one:∑sPs=1Z∑se−βEs=1ZZ=1.{\displaystyle \sum _{s}P_{s}={\frac {1}{Z}}\sum _{s}e^{-\beta E_{s}}={\frac {1}{Z}}Z=1.}
This is the reason for callingZthe "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example: the partition function for theisothermal-isobaric ensemble, thegeneralized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, theGibbs Free Energy. The letterZstands for theGermanwordZustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopicthermodynamic quantitiesof a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing aLaplace transformof the density of states function from the energy domain to theβdomain, and theinverse Laplace transformof the partition function reclaims the state density function of energies.
We can define agrand canonical partition functionfor agrand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperatureT, and achemical potentialμ.
The grand canonical partition function, denoted byZ{\displaystyle {\mathcal {Z}}}, is the following sum overmicrostatesZ(μ,V,T)=∑iexp(Niμ−EikBT).{\displaystyle {\mathcal {Z}}(\mu ,V,T)=\sum _{i}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}Here, each microstate is labelled byi{\displaystyle i}, and has total particle numberNi{\displaystyle N_{i}}and total energyEi{\displaystyle E_{i}}. This partition function is closely related to thegrand potential,ΦG{\displaystyle \Phi _{\rm {G}}}, by the relation−kBTlnZ=ΦG=⟨E⟩−TS−μ⟨N⟩.{\displaystyle -k_{\text{B}}T\ln {\mathcal {Z}}=\Phi _{\rm {G}}=\langle E\rangle -TS-\mu \langle N\rangle .}This can be contrasted to the canonical partition function above, which is related instead to theHelmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in statei{\displaystyle i}:pi=1Zexp(Niμ−EikBT).{\displaystyle p_{i}={\frac {1}{\mathcal {Z}}}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).}
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statisticsfor fermions,Bose–Einstein statisticsfor bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
The grand partition function is sometimes written (equivalently) in terms of alternate variables as[2]Z(z,V,T)=∑NizNiZ(Ni,V,T),{\displaystyle {\mathcal {Z}}(z,V,T)=\sum _{N_{i}}z^{N_{i}}Z(N_{i},V,T),}wherez≡exp(μ/kBT){\displaystyle z\equiv \exp(\mu /k_{\text{B}}T)}is known as the absoluteactivity(orfugacity) andZ(Ni,V,T){\displaystyle Z(N_{i},V,T)}is the canonical partition function.
|
https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)
|
Exponential Tilting(ET),Exponential Twisting, orExponential Change of Measure(ECM) is a distribution shifting technique used in many parts of mathematics.
The different exponential tiltings of a random variableX{\displaystyle X}is known as thenatural exponential familyofX{\displaystyle X}.
Exponential Tilting is used inMonte Carlo Estimationfor rare-event simulation, andrejectionandimportance samplingin particular.
In mathematical finance[1]Exponential Tilting is also known asEsscher tilting(or theEsscher transform), and often combined with indirectEdgeworth approximationand is used in such contexts as insurance futures pricing.[2]
The earliest formalization of Exponential Tilting is often attributed toEsscher[3]with its use in importance sampling being attributed toDavid Siegmund.[4]
Given a random variableX{\displaystyle X}with probability distributionP{\displaystyle \mathbb {P} }, densityf{\displaystyle f}, andmoment generating function(MGF)MX(θ)=E[eθX]<∞{\displaystyle M_{X}(\theta )=\mathbb {E} [e^{\theta X}]<\infty }, the exponentially tilted measurePθ{\displaystyle \mathbb {P} _{\theta }}is defined as follows:Pθ(X∈dx)=eθx−κ(θ)P(X∈dx),{\displaystyle \mathbb {P} _{\theta }(X\in dx)=e^{\theta x-\kappa (\theta )}\,\mathbb {P} (X\in dx),}
whereκ(θ){\displaystyle \kappa (\theta )}is thecumulant generating function(CGF) defined asκ(θ)=log⁡E[eθX]=log⁡MX(θ).{\displaystyle \kappa (\theta )=\log \mathbb {E} [e^{\theta X}]=\log M_{X}(\theta ).}
We callfθ(x)=eθx−κ(θ)f(x){\displaystyle f_{\theta }(x)=e^{\theta x-\kappa (\theta )}f(x)}theθ{\displaystyle \theta }-tilteddensityofX{\displaystyle X}. It satisfiesfθ(x)∝eθxf(x){\displaystyle f_{\theta }(x)\propto e^{\theta x}f(x)}.
The exponential tilting of a random vectorX{\displaystyle X}has an analogous definition:Pθ(X∈dx)=eθTx−κ(θ)P(X∈dx),{\displaystyle \mathbb {P} _{\theta }(X\in dx)=e^{\theta ^{T}x-\kappa (\theta )}\,\mathbb {P} (X\in dx),}
whereκ(θ)=logE[exp{θTX}]{\displaystyle \kappa (\theta )=\log \mathbb {E} [\exp\{\theta ^{T}X\}]}.
The exponentially tilted measure in many cases has the same parametric form as that ofX{\displaystyle X}. One-dimensional examples include the normal distribution, the exponential distribution, the binomial distribution and the Poisson distribution.
For example, in the case of the normal distribution,N(μ,σ2){\displaystyle N(\mu ,\sigma ^{2})}the tilted densityfθ(x){\displaystyle f_{\theta }(x)}is theN(μ+θσ2,σ2){\displaystyle N(\mu +\theta \sigma ^{2},\sigma ^{2})}density. The table below provides more examples of tilted densities.
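The normal-distribution example can be checked pointwise: the tilted density e^{θx−κ(θ)}f(x) should coincide with the N(μ+θσ², σ²) density. A sketch with arbitrary parameter values:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma, theta = 1.0, 2.0, 0.3  # arbitrary parameters
kappa = mu * theta + 0.5 * sigma ** 2 * theta ** 2  # CGF of N(mu, sigma^2) at theta

x = 0.7
tilted = math.exp(theta * x - kappa) * normal_pdf(x, mu, sigma)
shifted = normal_pdf(x, mu + theta * sigma ** 2, sigma)
```

The two quantities agree up to floating-point error, confirming that tilting a normal merely shifts its mean by θσ².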
For some distributions, however, the exponentially tilted distribution does not belong to the same parametric family asf{\displaystyle f}. An example of this is thePareto distributionwithf(x)=α/(1+x)α,x>0{\displaystyle f(x)=\alpha /(1+x)^{\alpha },x>0}, wherefθ(x){\displaystyle f_{\theta }(x)}is well defined forθ<0{\displaystyle \theta <0}but is not a standard distribution. In such examples, the random variable generation may not always be straightforward.[7]
In statistical mechanics, the energy of a system in equilibrium with a heat bath has theBoltzmann distribution:P(E∈dE)∝e−βEdE{\displaystyle \mathbb {P} (E\in dE)\propto e^{-\beta E}dE}, whereβ{\displaystyle \beta }is theinverse temperature. Exponential tilting then corresponds to changing the temperature:Pθ(E∈dE)∝e−(β−θ)EdE{\displaystyle \mathbb {P} _{\theta }(E\in dE)\propto e^{-(\beta -\theta )E}dE}.
Similarly, the energy and particle number of a system in equilibrium with a heat and particle bath has thegrand canonical distribution:P((N,E)∈(dN,dE))∝eβμN−βEdNdE{\displaystyle \mathbb {P} ((N,E)\in (dN,dE))\propto e^{\beta \mu N-\beta E}dNdE}, whereμ{\displaystyle \mu }is thechemical potential. Exponential tilting then corresponds to changing both the temperature and the chemical potential.
In many cases, the tilted distribution belongs to the same parametric family as the original. This is particularly true when the original density belongs to theexponential familyof distribution. This simplifies random variable generation during Monte-Carlo simulations. Exponential tilting may still be useful if this is not the case, though normalization must be possible and additional sampling algorithms may be needed.
In addition, there exists a simple relationship between the original and tilted CGF:κθ(η)=κ(θ+η)−κ(θ).{\displaystyle \kappa _{\theta }(\eta )=\kappa (\theta +\eta )-\kappa (\theta ).}
We can see this by observing thatEθ[eηX]=∫eηxeθx−κ(θ)f(x)dx=MX(θ+η)MX(θ).{\displaystyle \mathbb {E} _{\theta }\left[e^{\eta X}\right]=\int e^{\eta x}e^{\theta x-\kappa (\theta )}f(x)\,dx={\frac {M_{X}(\theta +\eta )}{M_{X}(\theta )}}.}
Thus,κθ(η)=log⁡Eθ[eηX]=κ(θ+η)−κ(θ).{\displaystyle \kappa _{\theta }(\eta )=\log \mathbb {E} _{\theta }\left[e^{\eta X}\right]=\kappa (\theta +\eta )-\kappa (\theta ).}
Clearly, this relationship allows for easy calculation of the CGF of the tilted distribution and thus the distribution's moments. Moreover, it results in a simple form of the likelihood ratio. Specifically,dPdPθ(x)=f(x)fθ(x)=e−θx+κ(θ).{\displaystyle {\frac {d\mathbb {P} }{d\mathbb {P} _{\theta }}}(x)={\frac {f(x)}{f_{\theta }(x)}}=e^{-\theta x+\kappa (\theta )}.}
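For the normal distribution, where κ(t) = μt + σ²t²/2 and the θ-tilted law is N(μ+θσ², σ²), the relation κθ(η) = κ(θ+η) − κ(θ) can be checked directly (parameter values are arbitrary):

```python
# CGF of a N(mu, sigma^2) variable: kappa(t) = mu*t + sigma^2 * t^2 / 2.
# Its theta-tilted version is N(mu + theta*sigma^2, sigma^2); check that the
# tilted CGF equals kappa(theta + eta) - kappa(theta).
mu, sigma = 0.5, 1.5  # arbitrary parameters

def kappa(t):
    return mu * t + 0.5 * sigma ** 2 * t ** 2

theta, eta = 0.4, 0.9
mu_tilted = mu + theta * sigma ** 2
kappa_tilted = mu_tilted * eta + 0.5 * sigma ** 2 * eta ** 2
```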
The exponential tilting ofX{\displaystyle X}, assuming it exists, supplies a family of distributions that can be used as proposal distributions foracceptance-rejection samplingor importance distributions forimportance sampling. One common application is sampling from a distribution conditional on a sub-region of the domain, i.e.X|X∈A{\displaystyle X|X\in A}. With an appropriate choice ofθ{\displaystyle \theta }, sampling fromPθ{\displaystyle \mathbb {P} _{\theta }}can meaningfully reduce the required amount of sampling or the variance of an estimator.
Thesaddlepoint approximation methodis a density approximation methodology often used for the distribution of sums and averages of independent, identically distributed random variables that employsEdgeworth series, but which generally performs better at extreme values. From the definition of the natural exponential family, it follows that the densityf(x¯){\displaystyle f({\bar {x}})}of the meanx¯{\displaystyle {\bar {x}}}ofn{\displaystyle n}independent, identically distributed observations satisfiesf(x¯)=fθ(x¯)exp⁡[n(κ(θ)−θx¯)].{\displaystyle f({\bar {x}})=f_{\theta }({\bar {x}})\exp \left[n(\kappa (\theta )-\theta {\bar {x}})\right].}
Applying theEdgeworth expansionforfθ(x¯){\displaystyle f_{\theta }({\bar {x}})}, we havefθ(x¯)=ψ(z)nκ″(θ)[1+ρ3(θ)h3(z)6n+ρ4(θ)h4(z)24n+⋯],{\displaystyle f_{\theta }({\bar {x}})=\psi (z){\sqrt {\frac {n}{\kappa ''(\theta )}}}\left[1+{\frac {\rho _{3}(\theta )h_{3}(z)}{6{\sqrt {n}}}}+{\frac {\rho _{4}(\theta )h_{4}(z)}{24n}}+\cdots \right],}whereψ(z){\displaystyle \psi (z)}is the standard normal density ofz=x¯−κ′(θ)κ″(θ)/n,{\displaystyle z={\frac {{\bar {x}}-\kappa '(\theta )}{\sqrt {\kappa ''(\theta )/n}}},}ρk(θ)=κ(k)(θ)κ″(θ)k/2{\displaystyle \rho _{k}(\theta )={\frac {\kappa ^{(k)}(\theta )}{\kappa ''(\theta )^{k/2}}}}are the standardized cumulants of the tilted distribution, andhn{\displaystyle h_{n}}are theHermite polynomials.
When considering values ofx¯{\displaystyle {\bar {x}}}progressively farther from the center of the distribution,|z|→∞{\displaystyle |z|\rightarrow \infty }and thehn(z){\displaystyle h_{n}(z)}terms become unbounded. However, for each value ofx¯{\displaystyle {\bar {x}}}, we can chooseθ{\displaystyle \theta }such thatκ′(θ)=x¯,{\displaystyle \kappa '(\theta )={\bar {x}},}so thatx¯{\displaystyle {\bar {x}}}sits at the mean of the tilted distribution andz=0{\displaystyle z=0}.
This value ofθ{\displaystyle \theta }is referred to as the saddle-point, and the above expansion is always evaluated at the expectation of the tilted distribution. This choice ofθ{\displaystyle \theta }leads to the final representation of the approximation given byf(x¯)≈(n2πκ″(θ))1/2exp⁡[n(κ(θ)−θx¯)].{\displaystyle f({\bar {x}})\approx \left({\frac {n}{2\pi \kappa ''(\theta )}}\right)^{1/2}\exp \left[n(\kappa (\theta )-\theta {\bar {x}})\right].}
Using the tilted distributionPθ{\displaystyle \mathbb {P} _{\theta }}as the proposal, therejection samplingalgorithm prescribes sampling fromfθ(x){\displaystyle f_{\theta }(x)}and accepting with probabilityf(x)cfθ(x)=e−θx+κ(θ)c,{\displaystyle {\frac {f(x)}{c\,f_{\theta }(x)}}={\frac {e^{-\theta x+\kappa (\theta )}}{c}},}wherec=supxf(x)fθ(x).{\displaystyle c=\sup _{x}{\frac {f(x)}{f_{\theta }(x)}}.}That is, a uniformly distributed random variablep∼Unif(0,1){\displaystyle p\sim {\mbox{Unif}}(0,1)}is generated, and the sample fromfθ(x){\displaystyle f_{\theta }(x)}is accepted ifp≤f(x)cfθ(x).{\displaystyle p\leq {\frac {f(x)}{c\,f_{\theta }(x)}}.}
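A sketch of this acceptance-rejection scheme for one concrete case: sampling Exp(1) from the θ-tilted proposal Exp(1−θ) with 0 < θ < 1, where f(x)/fθ(x) = e^{−θx}/(1−θ) is bounded by c = 1/(1−θ), so the acceptance probability reduces to e^{−θx} (parameter values are arbitrary):

```python
import math
import random

random.seed(2)
theta = 0.5  # arbitrary tilt in (0, 1)

def sample_exp1():
    """Rejection-sample Exp(1) using the tilted proposal Exp(1 - theta)."""
    while True:
        x = random.expovariate(1.0 - theta)       # draw from the tilted proposal
        if random.random() <= math.exp(-theta * x):
            return x                              # accept with prob f/(c*f_theta)

draws = [sample_exp1() for _ in range(20000)]
mean = sum(draws) / len(draws)  # should be near E[Exp(1)] = 1
```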
Applying the exponentially tilted distribution as the importance distribution yields the equationP(X∈A)=Eθ[L(X)1{X∈A}],{\displaystyle \mathbb {P} (X\in A)=\mathbb {E} _{\theta }\left[L(X)\,\mathbb {1} \{X\in A\}\right],}whereL(x)=dPdPθ(x)=e−θx+κ(θ){\displaystyle L(x)={\frac {d\mathbb {P} }{d\mathbb {P} _{\theta }}}(x)=e^{-\theta x+\kappa (\theta )}}is the likelihood function. So, one samples fromfθ{\displaystyle f_{\theta }}and reweights each draw by the likelihood ratio to estimate the probability underP{\displaystyle \mathbb {P} }. Moreover, the variance of a single reweighted sample is given byVarθ⁡(L(X)1{X∈A})=Eθ[L(X)21{X∈A}]−P(X∈A)2.{\displaystyle \operatorname {Var} _{\theta }\left(L(X)\,\mathbb {1} \{X\in A\}\right)=\mathbb {E} _{\theta }\left[L(X)^{2}\,\mathbb {1} \{X\in A\}\right]-\mathbb {P} (X\in A)^{2}.}
Assume independent and identically distributed{Xi}{\displaystyle \{X_{i}\}}such thatκ(θ)<∞{\displaystyle \kappa (\theta )<\infty }. In order to estimateP(X1+⋯+Xn>c){\displaystyle \mathbb {P} (X_{1}+\cdots +X_{n}>c)}, we can employ importance sampling by taking
The constantc{\displaystyle c}can be rewritten asna{\displaystyle na}for some other constanta{\displaystyle a}. Then,
whereθa{\displaystyle \theta _{a}}denotes theθ{\displaystyle \theta }defined by the saddle-point equation
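A minimal sketch of this scheme for standard normal increments (the values of n, a, the seed, and the replication count are illustrative choices): here κ(θ) = θ²/2, so the saddle-point equation κ′(θ_a) = a gives θ_a = a, and each X_i is sampled from the tilted law N(a, 1).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Importance-sampling estimate of P(X_1 + ... + X_n > c) for i.i.d.
# standard normal X_i, with c = n*a and the saddle-point tilt theta_a = a.
n, a = 20, 0.5
c = n * a
m = 50_000                                         # Monte Carlo replications

x = rng.normal(loc=a, scale=1.0, size=(m, n))      # samples under P_theta
s = x.sum(axis=1)
weights = np.exp(-a * s + n * a ** 2 / 2)          # dP/dP_theta on each path
estimate = np.mean((s > c) * weights)

exact = 0.5 * math.erfc(c / math.sqrt(2 * n))      # S_n ~ N(0, n) under P
print(estimate, exact)
```

Because the tilted mean of S_n sits exactly at the threshold c, roughly half of the tilted samples land in the rare event, which is what drives the variance reduction.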
Given the tilting of a normal r.v., it is intuitive that the exponential tilting of X_t, a Brownian motion with drift μ and variance σ², is a Brownian motion with drift μ + θσ² and variance σ². Thus, any Brownian motion with drift under P can be thought of as a Brownian motion without drift under P_{θ*}. To observe this, consider the process X_t = B_t + μt. Then f(X_t) = f_{θ*}(X_t) dP/dP_{θ*} = f(B_t) exp{μB_T − ½μ²T}. The likelihood ratio term, exp{μB_T − ½μ²T}, is a martingale and commonly denoted M_T. Thus, a Brownian motion with drift process (as well as many other continuous processes adapted to the Brownian filtration) is a P_{θ*}-martingale.[10][11]
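The martingale reweighting above can be checked numerically: an expectation under the drifted law equals the corresponding driftless expectation weighted by M_T. The sketch below (drift, horizon, and sample size are illustrative choices) verifies E[M_T] = 1 and E[X_T] = μT using only driftless endpoints.

```python
import numpy as np

rng = np.random.default_rng(2)

# A driftless Brownian endpoint B_T, reweighted by the exponential martingale
# M_T = exp(mu*B_T - mu^2*T/2), reproduces expectations under the drifted
# process X_t = B_t + mu*t at time T.
mu, T, m = 0.7, 1.0, 200_000

b_T = rng.normal(0.0, np.sqrt(T), size=m)          # driftless endpoints
m_T = np.exp(mu * b_T - 0.5 * mu ** 2 * T)         # likelihood ratio M_T

print(np.mean(m_T))          # ~ 1, since E[M_T] = 1
print(np.mean(b_T * m_T))    # ~ mu*T = E[X_T] under the drifted law
```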
The above leads to the alternate representation of the stochastic differential equation dX(t) = μ(t)dt + σ(t)dB(t): dX_θ(t) = μ_θ(t)dt + σ(t)dB(t), where μ_θ(t) = μ(t) + θσ(t). Girsanov's theorem gives the likelihood ratio dP/dP_θ = exp{−∫₀ᵀ ((μ_θ(t) − μ(t))/σ(t)) dB(t) − ½∫₀ᵀ ((μ_θ(t) − μ(t))/σ(t))² dt}. Therefore, Girsanov's theorem can be used to implement importance sampling for certain SDEs.
Tilting can also be useful for simulating a process X(t) via rejection sampling of the SDE dX(t) = μ(X(t))dt + dB(t). We may focus on the SDE since we know that X(t) can be written as ∫₀ᵗ dX(s) + X(0). As previously stated, a Brownian motion with drift can be tilted to a Brownian motion without drift, so we choose P_proposal = P_{θ*}. The likelihood ratio is dP_{θ*}/dP(dX(s) : 0 ≤ s ≤ t) = exp{∫₀ᵗ μ(X(s)) dX(s) − ½∫₀ᵗ μ(X(s))² ds}, and will be denoted M(t). To ensure this is a true likelihood ratio, it must be shown that E[M(t)] = 1. Assuming this condition holds, it can be shown that f_{X(t)}(y) = f_{X(t)}^{θ*}(y) E_{θ*}[M(t) | X(t) = y]. So, rejection sampling prescribes that one sample from a standard Brownian motion and accept with probability (f_{X(t)}(y)/f_{X(t)}^{θ*}(y))(1/c) = (1/c) E_{θ*}[M(t) | X(t) = y].
Assume i.i.d. X's with a light-tailed distribution and E[X] < 0. In order to estimate ψ(c) = P(τ(c) < ∞), where τ(c) = inf{t : Σ_{i=1}^t X_i > c}, when c is large and hence ψ(c) small, the algorithm uses exponential tilting to derive the importance distribution. The algorithm has many applications, such as sequential tests[12] and G/G/1 queue waiting times; ψ is also the probability of ultimate ruin in ruin theory. In this context, it is logical to ensure that P_θ(τ(c) < ∞) = 1. The criterion θ > θ₀, where θ₀ satisfies κ′(θ₀) = 0, achieves this. Siegmund's algorithm uses θ = θ*, if it exists, where θ* is defined by κ(θ*) = 0.
It has been shown thatθ∗{\displaystyle \theta ^{*}}is the only tilting parameter producing bounded relative error (limsupx→∞VarIA(x)PA(x)2<∞{\displaystyle {\underset {x\rightarrow \infty }{\lim \sup }}{\frac {Var\mathbb {I} _{A(x)}}{\mathbb {P} A(x)^{2}}}<\infty }).[13]
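A sketch of Siegmund's algorithm for Gaussian increments (the choice X_i ~ N(−μ, 1) and all constants are illustrative, not from the source): here κ(θ) = −μθ + θ²/2, so κ(θ*) = 0 gives θ* = 2μ, and under the tilted measure the increments have mean +μ, making the crossing of level c certain.

```python
import numpy as np

rng = np.random.default_rng(3)

# Siegmund's algorithm for the level-crossing probability
# psi(c) = P(tau(c) < infinity) with X_i ~ N(-mu, 1), so E[X] < 0.
# kappa(theta) = -mu*theta + theta^2/2 and kappa(theta*) = 0 give theta* = 2*mu.
mu, c, m = 0.5, 5.0, 10_000
theta_star = 2 * mu

weights = np.empty(m)
for i in range(m):
    s = 0.0
    while s <= c:
        s += rng.normal(-mu + theta_star, 1.0)   # step under the tilted law
    weights[i] = np.exp(-theta_star * s)         # e^{-theta* S_tau}

estimate = weights.mean()
print(estimate)
```

Since S_τ > c at the crossing, every weight is below e^{−θ*c}, so the estimate respects the Lundberg bound ψ(c) < e^{−θ*c}; the overshoot beyond c accounts for the gap.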
In a black-box setting we can observe only the input and output, not the internal structure, so the algorithm must use only minimal information about the distribution. The generated random numbers may not fall within a familiar parametric class, such as the normal or exponential distributions; an automated procedure can nevertheless perform an exponential change of measure (ECM). Let X₁, X₂, ... be i.i.d. r.v.'s with distribution G; for simplicity we assume X ≥ 0. Define 𝔉_n = σ(X₁, ..., X_n, U₁, ..., U_n), where U₁, U₂, ... are independent (0, 1) uniforms. A randomized stopping time for X₁, X₂, ... is then a stopping time w.r.t. the filtration {𝔉_n}. Let further 𝔊 be a class of distributions G on [0, ∞) with k_G = log ∫₀^∞ e^{θx} G(dx) < ∞ and define G_θ by dG_θ/dG(x) = e^{θx − k_G}. We define a black-box algorithm for ECM, for the given θ and the given class 𝔊 of distributions, as a pair of a randomized stopping time τ and an 𝔉_τ-measurable r.v. Z such that Z is distributed according to G_θ for any G ∈ 𝔊. Formally, we write this as P_G(Z < x) = G_θ(x) for all x. In other words, the rules of the game are that the algorithm may use simulated values from G and additional uniforms to produce an r.v. from G_θ.[14]
https://en.wikipedia.org/wiki/Exponential_tilting
ADALINE(Adaptive Linear Neuronor laterAdaptive Linear Element) is an early single-layerartificial neural networkand the name of the physical device that implemented it.[2][3][1][4][5]It was developed by professorBernard Widrowand his doctoral studentMarcian HoffatStanford Universityin 1960. It is based on theperceptronand consists of weights, a bias, and a summation function. The weights and biases were implemented byrheostats(as seen in the "knobby ADALINE"), and later,memistors.
The difference between Adaline and the standard (Rosenblatt) perceptron is in how they learn. Adaline unit weights are adjusted to match a teacher signal, before applying the Heaviside function (see figure), but the standard perceptron unit weights are adjusted to match the correct output, after applying the Heaviside function.
Amultilayer network ofADALINEunits is known as aMADALINE.
Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given the following variables:
the output is:
If we further assume thatx0=1{\displaystyle x_{0}=1}andw0=θ{\displaystyle w_{0}=\theta }, then the output further reduces to:
Thelearning ruleused by ADALINE is the LMS ("least mean squares") algorithm, a special case ofgradient descent.
Given the following:
the LMS algorithm updates the weights as follows:
This update rule minimizesE{\displaystyle E}, the square of the error,[6]and is in fact thestochastic gradient descentupdate forlinear regression.[7]
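A minimal sketch of LMS training for a single Adaline unit on hypothetical, linearly separable data (the data, learning rate, and epoch count are illustrative choices, not from Widrow and Hoff). Note that the update uses the raw linear output, before the sign/Heaviside function, which is what distinguishes it from the perceptron rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Generate a toy linearly separable task; x_0 = 1 absorbs the bias term.
n_features, n_samples = 2, 200
X = rng.normal(size=(n_samples, n_features))
X = np.hstack([np.ones((n_samples, 1)), X])
true_w = np.array([0.5, 2.0, -1.0])
y = np.sign(X @ true_w)                          # teacher labels in {-1, +1}

w = np.zeros(3)
eta = 0.01                                       # learning rate
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        o = x_i @ w                              # linear output, no threshold
        w += eta * (y_i - o) * x_i               # LMS / delta rule

accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)
```

Because LMS regresses the linear output toward the ±1 targets, the learned hyperplane closely approximates the separating boundary on this task.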
MADALINE (Many ADALINE[8]) is a three-layer (input, hidden, output), fully connected,feedforward neural networkarchitecture forclassificationthat uses ADALINE units in its hidden and output layers. I.e., itsactivation functionis thesign function.[9]The three-layer network usesmemistors. As the sign function is non-differentiable,backpropagationcannot be used to train MADALINE networks. Hence, three different training algorithms have been suggested, called Rule I, Rule II and Rule III.
Despite many attempts, Widrow and his colleagues never succeeded in training more than a single layer of weights in a MADALINE model, until Widrow saw the backpropagation algorithm at a 1985 conference in Snowbird, Utah.[10]
MADALINE Rule 1 (MRI) - The first of these dates back to 1962.[11]It consists of two layers: the first is made of ADALINE units (let the output of thei{\displaystyle i}th ADALINE unit beoi{\displaystyle o_{i}}); the second layer has two units. One is a majority-voting unit that takes in alloi{\displaystyle o_{i}}, and if there are more positives than negatives, outputs +1, and vice versa. Another is a "job assigner": suppose the desired output is -1, and different from the majority-voted output, then the job assigner calculates the minimal number of ADALINE units that must change their outputs from positive to negative, and picks those ADALINE units that areclosestto being negative, and makes them update their weights according to the ADALINE learning rule. It was thought of as a form of "minimal disturbance principle".[12]
The largest MADALINE machine built had 1000 weights, each implemented by a memistor. It was built in 1963 and used MRI for learning.[12][13]
Some MADALINE machines were demonstrated to perform tasks includinginverted pendulumbalancing,weather forecasting, andspeech recognition.[3]
MADALINE Rule 2 (MRII) - The second training algorithm, described in 1988, improved on Rule I.[8]The Rule II training algorithm is based on a principle called "minimal disturbance". It proceeds by looping over training examples; for each example on which the network errs, it tentatively flips the outputs of the hidden units whose linear sums are closest to zero, keeping only those flips that reduce the error. Additionally, when flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc.[8]

MADALINE Rule 3 - The third "Rule" applied to a modified network withsigmoidactivations instead of sign; it was later found to be equivalent to backpropagation.[12]
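The minimal-disturbance loop of MRII can be sketched as follows. This is a simplified single pass over a toy XOR task, not Widrow's original implementation; the network size, learning rate, and seed are illustrative assumptions, and only single-unit trial flips are attempted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy MADALINE: 3 hidden Adalines with sign activation feeding a fixed
# majority-vote output unit. Inputs include a bias component x_0 = 1;
# targets are XOR of the last two inputs.
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)

W = rng.normal(scale=0.5, size=(3, 3))           # hidden-unit weight rows

def total_errors(W):
    hidden = np.sign(X @ W.T)
    out = np.sign(hidden.sum(axis=1))            # majority vote
    return int(np.sum(out != y))

errors_before = total_errors(W)
eta = 0.3
for x_i, y_i in zip(X, y):
    nets = W @ x_i
    if np.sign(np.sign(nets).sum()) == y_i:
        continue                                  # example already correct
    for j in np.argsort(np.abs(nets)):            # least-confident unit first
        W_trial = W.copy()
        target = -np.sign(nets[j])                # try flipping this unit
        W_trial[j] += eta * (target - nets[j]) * x_i   # LMS step toward flip
        if total_errors(W_trial) < total_errors(W):
            W = W_trial                           # keep only helpful flips
            break

errors_after = total_errors(W)
print(errors_before, errors_after)
```

Because a trial flip is kept only when it strictly reduces the total error count, a pass of this loop can never make the network worse overall.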
https://en.wikipedia.org/wiki/ADALINE
Bio-inspired computing, short forbiologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates toconnectionism,social behavior, andemergence. Withincomputer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset ofnatural computation.
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, now known as aTuring machine.Turingfirst described the abstract construct using a biological specimen. He imagined a mathematician with three important attributes.[1]The mathematician always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas, Turing was able to describe an abstraction of the modern digital computer. He noted, however, that anything that can perform these functions can be considered such a machine, and even that electricity should not be required to describe digital computation and machine thinking in general.[2]
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms.[3]They first mathematically described how a system of simplistic neurons can produce simplelogical operationssuch aslogical conjunction,disjunctionandnegation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970, research on neural networks slowed down, and many consider a 1969bookby Marvin Minsky and Seymour Papert the main cause.[4][5]Their book showed that neural network models could only model systems based on Boolean functions that are true only above a certain threshold value, also known asthreshold functions, and that a large number of systems cannot be represented in this way and therefore cannot be modeled by neural networks. A 1986 book by David Rumelhart and James McClelland brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.[6]
Ant Colonies
Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent calculations even though the individuals comprising the system might not be intelligent.[7]More specifically, he gave the example of an ant colony that can carry out intelligent tasks together even though each individual ant cannot, exhibiting what is called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm able to output the number of clusters, produces final clusters highly competitive with those of traditional algorithms.[8]Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony.[9]This is an important result, since it suggests that group-selectionevolutionary algorithms, coupled with algorithms similar to the "ant colony" algorithm, can potentially be used to develop more powerful algorithms.
Some areas of study in biologically inspired computing, and their biological counterparts:
Bio-inspired algorithms that work on a population of possible solutions, in the context ofevolutionary algorithmsor ofswarm intelligencealgorithms, are subdivided intoPopulation-Based Bio-Inspired Algorithms(PBBIA).[10]They includeEvolutionary Algorithms,Particle Swarm Optimization,Ant colony optimization algorithmsandArtificial bee colony algorithms.
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in an unknown terrain for finding food equipped with six simple rules:
The virtual insect controlled by the trainedspiking neural networkcan find food after training in any unknown terrain.[11]After several generations of rule application it is usually the case that some forms of complex behaviouremerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (seecomplex systems). For this reason, when modeling theneural network, it is necessary to accurately model anin vivonetwork, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.[12]
Natural evolution is a good analogy to this method–the rules of evolution (selection,recombination/reproduction,mutationand more recentlytransposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used ingenetic algorithms.
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize the various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed the level of human intelligence.
Artificial intelligenceresearchers are now aware of the benefits of learning from the brain's information-processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for doing so. Brain and neuroscience researchers are likewise trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology; in turn, brain science and neuroscience will inspire the next generation of information technology.
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, support researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and are trying to reveal the structure of bio-intelligence from different aspects and functional basis. From the microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopicnetwork connection model, to the links in the macroscopic brain interval and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building a future brain-inspired computing model.[13]
Broadly speaking, a brain-inspired chip is a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphicchip", whose structure is designed with reference to the human brain neuron model and its tissue structure, represents a major direction of brain-inspired chip research. Along with the rise of "brain plans" in various countries, a large number of research results on neuromorphic chips have emerged, receiving extensive international attention from both academia and industry. Examples include the EU-backedSpiNNakerand BrainScaleS, Stanford'sNeurogrid, IBM'sTrueNorth, and Qualcomm'sZeroth.
TrueNorth is a brain-inspired chip that IBM developed over nearly 10 years. The US DARPA program funded IBM to develop pulsed neural network chips for intelligent processing beginning in 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain, with each neuron of the chip cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, "TrueNorth". Compared with the first generation, its performance increased dramatically: the number of neurons increased from 256 to 1 million and the number of programmable synapses from 262,144 to 256 million, with a total power consumption of 70 mW (20 mW per square centimeter), while each core occupies only 1/15 the volume of the first-generation chip. IBM has since developed a prototype neuron computer that uses 16 TrueNorth chips, with real-time video processing capabilities.[14]The chip's exceptional specifications caused a great stir in the academic world upon its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French Inria collaborated to develop "Cambrian", the world's first chip supporting a deep neural network processor architecture.[15]The work won best-paper awards at leading international conferences in the field of computer architecture, ASPLOS and MICRO, and its design method and performance have been recognized internationally. The chip is an outstanding representative of brain-inspired chip research.
The human brain is a product of evolution. Although its structure and information processing mechanism are constantly optimized, compromises in the evolution process are inevitable. The cranial nervous system is a multi-scale structure. There are still several important problems in the mechanism of information processing at each scale, such as the fine connection structure of neuron scales and the mechanism of brain-scale feedback. Therefore, even a comprehensive calculation of the number of neurons and synapses is only 1/1000 of the size of the human brain, and it is still very difficult to study at the current level of scientific research.[16]Recent advances in brain simulation linked individual variability in human cognitiveprocessing speedandfluid intelligenceto thebalance of excitation and inhibitioninstructural brain networks,functional connectivity,winner-take-all decision-makingandattractorworking memory.[17]
Future research on cognitive brain computing models will need to model the brain's information-processing system based on the results of multi-scale brain neural system data analysis, construct a brain-inspired multi-scale neural network computing model, and simulate, at multiple scales, the brain's multi-modal intelligent behavioural abilities such as perception, self-learning, memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data manually labeled on a large scale, and training models incurs a large computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Most existing brain-inspired chips are still based on von Neumann architecture research, and most chip-manufacturing materials are still traditional semiconductors. Current neural chips borrow only the most basic unit of brain information processing; mechanisms such as the fusion of storage and computation, the pulse discharge mechanism, and the connections between neurons, as well as the interplay between information-processing units at different scales, have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as brain-inspired memristors, memory capacitors, and sensory sensors based on new materials such as nanomaterials, thus supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
https://en.wikipedia.org/wiki/Bio-inspired_computing
TheBlue Brain Projectwas a Swiss brain research initiative that aimed to create adigital reconstructionof the mouse brain. The project was founded in May 2005 by the Brain Mind Institute ofÉcole Polytechnique Fédérale de Lausanne(EPFL) in Switzerland. The project ended in December 2024. Its mission was to use biologically-detailed digital reconstructions andsimulations of the mammalian brainto identify the fundamental principles of brain structure and function.
The project was headed by the founding directorHenry Markram—who also launched the EuropeanHuman Brain Project—and was co-directed by Felix Schürmann, Adriana Salvatore andSean Hill. Using aBlue Genesupercomputerrunning Michael Hines'sNEURON, the simulation involved a biologically realistic model ofneurons[1][2][3]and an empirically reconstructed modelconnectome.
There were a number of collaborations, including theCajal Blue Brain, which is coordinated by theSupercomputing and Visualization Center of Madrid(CeSViMa), and others run by universities and independent laboratories.
In 2006, the project made its first model of aneocortical columnwith simplified neurons.[4]In November 2007, it completed an initial model of the rat neocortical column. This marked the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.[5][4][6]
Neocortical columns are considered by some researchers to be the smallest functional units of theneocortex,[7][8]and they are thought to be responsible for higher functions such asconscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons.Ratneocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸synapses.
In 2009, Henry Markram claimed that a "detailed, functional artificial human brain can be built within the next 10 years".[9]He conceived theHuman Brain Project, to which the Blue Brain Project contributed,[4]and which became funded in 2013 by the European Union with up to $1.3 billion.[10]
In 2015, the project simulated part of a rat brain with 30,000 neurons.[11]Also in 2015, scientists atÉcole Polytechnique Fédérale de Lausanne(EPFL) developed a quantitative model of the previously unknown relationship between the neurons and theastrocytes. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuron andglial cellsis being added to Blue Brain Project models to improve functionality of the system.[12]
In 2017, Blue Brain Project discovered thatneural cliquesconnected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studyingneural networkscannot detect that many dimensions. The Blue Brain Project was able to model these networks usingalgebraic topology.[13]
In 2018, Blue Brain Project released its first digital 3D brain cell atlas[14]which, according toScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.[15]
In 2019, Idan Segev, one of thecomputational neuroscientistsworking on the Blue Brain Project, gave a talk titled: "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtualEEGexperiments would begin soon. He also mentioned that the model had become too heavy on the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as anartificial neural network(see citation for details).[16]
In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.[17]
The Blue Brain Project has developed a number of software to reconstruct and to simulate the mouse brain. All software tools mentioned below areopen source softwareand available for everyone onGitHub.[18][19][20][21][22][23]
Blue Brain Nexus[24][25][26]is a data integration platform which uses aknowledge graphto enable users to search, deposit, and organise data. It stands on theFAIR dataprinciples to provide flexible data management solutions beyond neuroscience studies.
BluePyOpt[27]is a tool that is used to build electrical models of single neurons. For this, it usesevolutionary algorithmsto constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt are reported by Rosanna Migliore,[28]and Stefano Masori.[29]
CoreNEURON[30]is a supplemental tool toNEURON, which allows large scale simulation by boosting memory usage and computational speed.
NeuroMorphoVis[31]is a visualisation tool for morphologies of neurons.
SONATA[32]is a joint effort between Blue Brain Project andAllen Institute for Brain Science, to develop a standard for data format, which realises a multiple platform working environment with greater computational memory and efficiency.
The project was funded primarily by theSwiss governmentand theFuture and Emerging Technologies(FET) Flagship grant from theEuropean Commission,[33]and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of theBlue Genesupercomputer concept.[34]
Although the Blue Brain Project is often associated with theHuman Brain Project(HBP), it is important to distinguish between the two. While the Blue Brain Project was a key participant of the HBP, much of the criticism regarding targets and management issues actually pertains to theHuman Brain Projectrather than the Blue Brain Project itself.[35][36]
Voices raised as early as September 2014 highlighted concerns over the trajectory of the Human Brain Project, noting challenges in meeting its high-level goals and questioning its organizational structure and the project's key promoter, Professor Henry Markram.[37][38]In 2016, the HBP underwent a restructuring with resources originally earmarked for brain simulation redistributed to support a wider array of neuroscience research groups. Since then, scientists and engineers from the Blue Brain Project have contributed to various aspects of the HBP, including the Neuroinformatics, EBRAINS, Neurorobotics, and High-Performance Computing Platforms.[39]This distinction is important because some of the criticism directed at the initial incarnation of HBP may have been misattributed to the Blue Brain Project due to their shared leadership and early involvement in the initiative.
The Cajal Blue Brain Project is coordinated by theTechnical University of Madridled byJavier de Felipeand uses the facilities of theSupercomputing and Visualization Center of Madridand its supercomputerMagerit.[40]TheCajal Institutealso participates in this collaboration. The main lines of research currently being pursued atCajal Blue Braininclude neurological experimentation and computer simulations.[41]Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.[42]
Noah Huttoncreated the documentary filmIn Silicoover a 10-year period. The film was released in April 2021.[43]The film covers the "shifting goals and landmarks"[44]of the Blue Brain Project as well as the drama, "In the end, this isn’t about science. It’s about the universals of power, greed, ego, and fame."[45][46]
https://en.wikipedia.org/wiki/Blue_Brain_Project
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information.[1][2]
Neural networks are an important part of the connectionist approach to cognitive science. The issue of catastrophic interference when modeling human memory with connectionist models was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989)[1] and Ratcliff (1990).[2] It is a radical manifestation of the 'sensitivity-stability' dilemma[3] or the 'stability-plasticity' dilemma.[4] Specifically, these problems refer to the challenge of making an artificial neural network that is sensitive to, but not disrupted by, new information.
Lookup tables and connectionist networks lie on opposite sides of the stability-plasticity spectrum.[5] The former remain completely stable in the presence of new information but lack the ability to generalize, i.e. infer general principles, from new inputs. On the other hand, connectionist networks like the standard backpropagation network can generalize to unseen inputs, but they are sensitive to new information. Backpropagation models can be analogized to human memory insofar as they have a similar ability to generalize[citation needed], but these networks often exhibit less stability than human memory. Notably, these backpropagation networks are susceptible to catastrophic interference. This is an issue when modelling human memory, because unlike these networks, humans typically do not show catastrophic forgetting.[6]
The term catastrophic interference was originally coined by McCloskey and Cohen (1989) but was also brought to the attention of the scientific community by research from Ratcliff (1990).[2]
McCloskey and Cohen (1989) noted the problem of catastrophic interference during two different experiments with backpropagation neural network modelling.
In their first experiment they trained a standard backpropagation neural network on a single training set consisting of 17 single-digit ones problems (i.e., 1 + 1 through 9 + 1, and 1 + 2 through 1 + 9) until the network could represent and respond properly to all of them. The error between the actual output and the desired output steadily declined across training sessions, which reflected that the network learned to represent the target outputs better across trials. Next, they trained the network on a single training set consisting of 17 single-digit twos problems (i.e., 2 + 1 through 2 + 9, and 1 + 2 through 9 + 2) until the network could represent and respond properly to all of them. They noted that their procedure was similar to how a child would learn their addition facts. Following each learning trial on the twos facts, the network was tested for its knowledge on both the ones and twos addition facts. Like the ones facts, the twos facts were readily learned by the network. However, McCloskey and Cohen noted that the network was no longer able to properly answer the ones addition problems even after one learning trial of the twos addition problems. The output pattern produced in response to the ones facts often resembled an output pattern for an incorrect number more closely than the output pattern for the correct number. This is considered to be a drastic amount of error. Furthermore, the problems 2 + 1 and 1 + 2, which were included in both training sets, showed dramatic disruption even during the first learning trials of the twos facts.
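The effect McCloskey and Cohen observed is straightforward to reproduce. The sketch below is a hypothetical reconstruction, not their original model: a small backpropagation network is trained on the ones facts, then sequentially on the twos facts only, and the error on the ones facts is measured before and after.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(n, size):
    v = np.zeros(size)
    v[n] = 1.0
    return v

def encode(problems):
    """Encode 'a + b' as two concatenated one-hot digits; target is the sum."""
    X = np.array([np.concatenate([one_hot(a, 10), one_hot(b, 10)]) for a, b in problems])
    Y = np.array([one_hot(a + b, 19) for a, b in problems])
    return X, Y

ones = sorted({(a, 1) for a in range(1, 10)} | {(1, b) for b in range(2, 10)})
twos = sorted({(2, b) for b in range(1, 10)} | {(a, 2) for a in range(1, 10)})
X1, Y1 = encode(ones)
X2, Y2 = encode(twos)

# One hidden layer, sigmoid activations, plain batch backpropagation on MSE.
W1 = rng.normal(0, 0.1, (20, 30))
W2 = rng.normal(0, 0.1, (30, 19))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, epochs=3000, lr=0.5):
    global W1, W2
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        P = sigmoid(H @ W2)
        dP = (P - Y) * P * (1 - P)          # output-layer error signal
        dH = (dP @ W2.T) * H * (1 - H)      # backpropagated hidden error
        W2 -= lr * H.T @ dP / len(X)
        W1 -= lr * X.T @ dH / len(X)

def error(X, Y):
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2)

train(X1, Y1)                # learn the ones facts
err_before = error(X1, Y1)
train(X2, Y2)                # then train only on the twos facts
err_after = error(X1, Y1)    # error on the ones facts rises
```

Because the twos training updates the same shared weights, performance on the ones facts degrades even though they were never explicitly unlearned.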
In their second connectionist model, McCloskey and Cohen attempted to replicate the study on retroactive interference in humans by Barnes and Underwood (1959). They trained the model on A-B and A-C lists and used a context pattern in the input vector (input pattern) to differentiate between the lists. Specifically, the network was trained to respond with the right B response when shown the A stimulus and the A-B context pattern, and to respond with the correct C response when shown the A stimulus and the A-C context pattern. When the model was trained concurrently on the A-B and A-C items, the network readily learned all of the associations correctly. In sequential training the A-B list was trained first, followed by the A-C list. After each presentation of the A-C list, performance was measured for both the A-B and A-C lists. They found that the amount of training on the A-C list that led to 50% correct responses in the Barnes and Underwood study led to nearly 0% correct responses by the backpropagation network. Furthermore, they found that the network tended to produce responses that looked like the C response pattern when prompted to give the B response pattern. This indicated that the A-C list had apparently overwritten the A-B list. This could be likened to learning the word dog, followed by learning the word stool, and then finding that you think of the word stool when presented with the word dog.
McCloskey and Cohen tried to reduce interference through a number of manipulations, including changing the number of hidden units, changing the value of the learning rate parameter, overtraining on the A-B list, freezing certain connection weights, and changing the target values from 0 and 1 to 0.1 and 0.9. However, none of these manipulations satisfactorily reduced the catastrophic interference exhibited by the networks.
Overall, McCloskey and Cohen (1989) concluded that sequentially trained backpropagation networks exhibit interference far more severe than that observed in humans, calling into question their adequacy as models of human memory.
Ratcliff (1990) used multiple sets of backpropagation models applied to standard recognition memory procedures, in which the items were sequentially learned.[2] After inspecting the recognition performance of these models he found two major problems:
Even one learning trial with new information resulted in a significant loss of the old information, paralleling the findings of McCloskey and Cohen (1989).[1] Ratcliff also found that the resulting outputs were often a blend of the previous input and the new input. In larger networks, items learned in groups (e.g. AB then CD) were more resistant to forgetting than items learned singly (e.g. A then B then C...). However, the forgetting for items learned in groups was still large. Adding new hidden units to the network did not reduce interference.
This finding contradicts studies on human memory, which indicate that discrimination increases with learning. Ratcliff attempted to alleviate this problem by adding 'response nodes' that would selectively respond to old and new inputs. However, this method did not work, as these response nodes would become active for all inputs. A model which used a context pattern also failed to increase discrimination between new and old items.
The main cause of catastrophic interference seems to be overlap in the representations at the hidden layer of distributed neural networks.[8][9][10] In a distributed representation, each input tends to create changes in the weights of many of the nodes. Catastrophic forgetting occurs because when many of the weights where "knowledge is stored" are changed, it is unlikely for prior knowledge to be kept intact. During sequential learning, the inputs become mixed, with the new inputs being superimposed on top of the old ones.[9] Another way to conceptualize this is by visualizing learning as a movement through a weight space.[11] This weight space can be likened to a spatial representation of all of the possible combinations of weights that the network could possess. When a network first learns to represent a set of patterns, it finds a point in the weight space that allows it to recognize all of those patterns.[10] However, when the network then learns a new set of patterns, it will move to a place in the weight space for which the only concern is the recognition of the new patterns.[10] To recognize both sets of patterns, the network must find a place in the weight space suitable for recognizing both the new and the old patterns.
Below are a number of techniques which have empirical support in successfully reducing catastrophic interference in backpropagation neural networks:
Many of the early techniques for reducing representational overlap involved making either the input vectors or the hidden unit activation patterns orthogonal to one another. Lewandowsky and Li (1995)[12] noted that interference between sequentially learned patterns is minimized if the input vectors are orthogonal to each other. Two input vectors are said to be orthogonal if the pairwise products of their elements sum to zero. For example, the patterns [0,0,1,0] and [0,1,0,0] are orthogonal because (0×0 + 0×1 + 1×0 + 0×0) = 0. One technique which can create orthogonal representations at the hidden layers involves bipolar feature coding (i.e., coding using -1 and 1 rather than 0 and 1).[10] Orthogonal patterns tend to produce less interference with each other. However, not all learning problems can be represented using these types of vectors, and some studies report that the degree of interference is still problematic with orthogonal vectors.[2]
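For illustration, the orthogonality check from the text, together with the bipolar recoding, can be written as:

```python
import numpy as np

# The two patterns from the text: their pairwise products sum to zero.
a = np.array([0, 0, 1, 0])
b = np.array([0, 1, 0, 0])
dot_binary = np.dot(a, b)            # 0, so the patterns are orthogonal

# Bipolar feature coding maps {0, 1} to {-1, +1}.
def bipolar(v):
    return 2 * np.asarray(v) - 1

dot_bipolar = np.dot(bipolar(a), bipolar(b))   # also 0 for this pair
```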
According to French (1991),[8] catastrophic interference arises in feedforward backpropagation networks due to the interaction of node activations, or activation overlap, that occurs in distributed representations at the hidden layer. Neural networks that employ very localized representations do not show catastrophic interference because of the lack of overlap at the hidden layer. French therefore suggested that reducing the value of activation overlap at the hidden layer would reduce catastrophic interference in distributed networks. Specifically, he proposed that this could be done through changing the distributed representations at the hidden layer to 'semi-distributed' representations. A 'semi-distributed' representation has fewer hidden nodes that are active, and/or a lower activation value for these nodes, for each representation, which makes the representations of different inputs overlap less at the hidden layer. French recommended that this could be done through 'activation sharpening', a technique which slightly increases the activation of a certain number of the most active nodes in the hidden layer, slightly reduces the activation of all the other units, and then changes the input-to-hidden layer weights to reflect these activation changes (similar to error backpropagation).
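A minimal sketch of the sharpening step is shown below; the exact update in French (1991) differs in detail, and the node count k and rate alpha here are illustrative choices.

```python
import numpy as np

def sharpen(activations, k=2, alpha=0.1):
    """Boost the k most active hidden units and damp all the others."""
    sharpened = activations * (1 - alpha)        # slightly reduce every unit
    top = np.argsort(activations)[-k:]           # indices of the k most active units
    sharpened[top] = activations[top] + alpha * (1 - activations[top])
    return sharpened

hidden = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
sharpened = sharpen(hidden)
# The two most active units (0.9 and 0.7) move up; the rest move down,
# yielding a 'semi-distributed' hidden representation with less overlap.
```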
Kortge (1990)[13] proposed a learning rule for training neural networks, called the 'novelty rule', to help alleviate catastrophic interference. As its name suggests, this rule helps the neural network learn only the components of a new input that differ from an old input. Consequently, the novelty rule changes only the weights that were not previously dedicated to storing information, thereby reducing the overlap in representations at the hidden units. To apply the novelty rule, during learning the input pattern is replaced by a novelty vector that represents the components that differ. When the novelty rule is used in a standard backpropagation network, forgetting of old items is eliminated or reduced when new items are presented sequentially.[13] However, a limitation is that this rule can only be used with auto-encoder or auto-associative networks, in which the target response for the output layer is identical to the input pattern.
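One way to read the rule, sketched here for a linear auto-associator, is that the novelty vector is the part of the input the network cannot yet reproduce, and only that part drives the weight change. This is a simplification; Kortge's formulation for backpropagation networks differs in detail.

```python
import numpy as np

def novelty_update(W, x, lr=0.1):
    novelty = x - W @ x                  # components of x not already stored
    return W + lr * np.outer(novelty, novelty)

d = 8
W = np.zeros((d, d))
a = np.zeros(d); a[:4] = 1.0             # old pattern
b = np.zeros(d); b[2:6] = 1.0            # new pattern overlapping the old one

for _ in range(200):
    W = novelty_update(W, a)             # store the old pattern
recall_a_before = W @ a

for _ in range(200):
    W = novelty_update(W, b)             # learn b via its novel components only
recall_a_after = W @ a                   # recall of a is almost untouched
```

Because the update is driven by the novelty vector rather than the full input, learning the overlapping pattern b barely disturbs recall of a.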
McRae and Hetherington (1993)[9] argued that humans, unlike most neural networks, do not take on new learning tasks with a random set of weights. Rather, people tend to bring a wealth of prior knowledge to a task, and this helps to avoid the problem of interference. They showed that when a network is pre-trained on a random sample of data before starting a sequential learning task, this prior knowledge naturally constrains how new information can be incorporated. This occurs because training on a random sample of data from a domain with a high degree of internal structure, such as the English language, captures the regularities, or recurring patterns, found within that domain. Since the domain is based on regularities, a newly learned item will tend to be similar to the previously learned information, which allows the network to incorporate new data with little interference with existing data. Specifically, an input vector that follows the same pattern of regularities as the previously trained data should not cause a drastically different pattern of activation at the hidden layer or drastically alter weights.
Robins (1995)[14] described how catastrophic forgetting can be prevented by rehearsal mechanisms: when new information is added, the neural network is retrained on some of the previously learned information. In general, however, previously learned information may not be available for such retraining. A solution to this is "pseudo-rehearsal", in which the network is not retrained on the actual previous data but on representations of them. Several methods are based upon this general mechanism.
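A minimal pseudo-rehearsal sketch with a toy linear "network" follows; the linear model and all names are illustrative, not from Robins's paper. Before training on a new task, the current network is probed with random inputs and its own responses are stored as pseudo-items, which are then interleaved with the new data.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(W, X, Y, epochs=2000, lr=0.1):
    """Gradient descent on MSE for the linear map y = W x."""
    for _ in range(epochs):
        W -= lr * (W @ X.T - Y.T) @ X / len(X)
    return W

# Two incompatible tasks: different random linear mappings.
XA = rng.normal(size=(20, 5)); YA = XA @ rng.normal(size=(5, 3))
XB = rng.normal(size=(20, 5)); YB = XB @ rng.normal(size=(5, 3))

W = train(np.zeros((3, 5)), XA, YA)          # learn task A first

# Pseudo-rehearsal: random probes plus the network's own responses to them.
X_pseudo = rng.normal(size=(20, 5))
Y_pseudo = X_pseudo @ W.T

W_plain = train(W.copy(), XB, YB)            # sequential training, no rehearsal
W_rehearse = train(W.copy(),
                   np.vstack([XB, X_pseudo]),
                   np.vstack([YB, Y_pseudo]))  # new items + pseudo-items

err = lambda W_, X, Y: np.mean((X @ W_.T - Y) ** 2)
err_plain = err(W_plain, XA, YA)             # task A largely overwritten
err_rehearse = err(W_rehearse, XA, YA)       # task A much better retained
```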
French (1997) proposed a pseudo-recurrent backpropagation network (see Figure 2).[5] In this model the network is separated into two functionally distinct but interacting sub-networks. This model is biologically inspired and is based on research from McClelland et al. (1995).[15] McClelland and colleagues suggested that the hippocampus and neocortex act as separable but complementary memory systems, with the hippocampus for short-term memory storage and the neocortex for long-term memory storage. Information initially stored in the hippocampus can be "transferred" to the neocortex by means of reactivation or replay. In the pseudo-recurrent network, one of the sub-networks acts as an early processing area, akin to the hippocampus, and functions to learn new input patterns. The other sub-network acts as a final-storage area, akin to the neocortex. However, unlike in the McClelland et al. (1995) model, the final-storage area sends internally generated representations back to the early processing area. This creates a recurrent network. French proposed that this interleaving of old representations with new representations is the only way to reduce radical forgetting. Since the brain would most likely not have access to the original input patterns, the patterns fed back to the neocortex would be internally generated representations called pseudo-patterns. These pseudo-patterns are approximations of previous inputs[14] and they can be interleaved with the learning of new inputs.
Inspired by[14] and independently of[5], Ans and Rousset (1997)[16] also proposed a two-network artificial neural architecture with memory self-refreshing that overcomes catastrophic interference when sequential learning tasks are carried out in distributed networks trained by backpropagation. The principle is to learn new external patterns concurrently with internally generated pseudopatterns, or 'pseudo-memories', that reflect the previously learned information. What mainly distinguishes this model from those that use classical pseudo-rehearsal[14][5] in feedforward multilayer networks is a reverberating process[further explanation needed] that is used for generating pseudopatterns. After a number of activity re-injections from a single random seed, this process tends to settle into nonlinear network attractors that are more suitable for optimally capturing the deep structure of knowledge distributed within connection weights than the single feedforward pass of activity used in pseudo-rehearsal. The memory self-refreshing procedure turned out to be very efficient in transfer processes[17] and in serial learning of temporal sequences of patterns without catastrophic forgetting.[18]
In recent years, pseudo-rehearsal has regained popularity thanks to progress in the capabilities of deep generative models. When such deep generative models are used to generate the "pseudo-data" to be rehearsed, this method is typically referred to as generative replay.[19] Such generative replay can effectively prevent catastrophic forgetting, especially when the replay is performed in the hidden layers rather than at the input level.[20][21]
Insights into the mechanisms of memory consolidation during sleep in the human and animal brain led to other biologically inspired approaches. While declarative memories are, in the classical picture, consolidated by hippocampo-neocortical dialogue during the NREM phase of sleep (see above), some types of procedural memories have been suggested not to rely on the hippocampus and to involve the REM phase of sleep (e.g.,[22] but see[23] for the complexity of the topic). This inspired models in which internal representations (memories) created by previous learning are spontaneously replayed during sleep-like periods in the network itself[24][25] (i.e. without the help of the secondary network used by the generative replay approaches mentioned above).
Latent learning is a technique used by Gutstein & Stump (2015)[26] to mitigate catastrophic interference by taking advantage of transfer learning. This approach tries to find optimal encodings for any new classes to be learned, so that they are least likely to catastrophically interfere with existing responses. Given a network that has learned to discriminate among one set of classes using Error Correcting Output Codes (ECOC)[27] (as opposed to one-hot codes), optimal encodings for new classes are chosen by observing the network's average responses to them. Since these average responses arose while learning the original set of classes without any exposure to the new classes, they are referred to as 'Latently Learned Encodings'. This terminology borrows from the concept of latent learning, as introduced by Tolman in 1930.[28] In effect, this technique uses transfer learning to avoid catastrophic interference, by making a network's responses to new classes as consistent as possible with existing responses to classes already learned.
Kirkpatrick et al. (2017)[29]proposed elastic weight consolidation (EWC), a method to sequentially train a single artificial neural network on multiple tasks. This technique supposes that some weights of the trained neural network are more important for previously learned tasks than others. During training of the neural network on a new task, changes to the weights of the network are made less likely the greater their importance. To estimate the importance of the network weights, EWC uses probabilistic mechanisms, in particular the Fisher information matrix, but this can be done in other ways as well.[30][31][32]
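The core of EWC is a quadratic penalty anchoring important weights near their previous values. The sketch below applies it to two toy linear regression "tasks" with a diagonal Fisher approximation (for a linear-Gaussian model the Fisher diagonal is proportional to the mean squared inputs); the setup is illustrative, not the network of Kirkpatrick et al.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad(w, X, y):
    """MSE gradient for linear regression y ≈ X w."""
    return X.T @ (X @ w - y) / len(X)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

# Two tasks with conflicting true weight vectors.
XA = rng.normal(size=(50, 4)); wA = np.array([1.0, -2.0, 0.5, 3.0]); yA = XA @ wA
XB = rng.normal(size=(50, 4)); wB = np.array([-1.0, 0.0, 2.0, -0.5]); yB = XB @ wB

# Train on task A.
w = np.zeros(4)
for _ in range(500):
    w -= 0.1 * grad(w, XA, yA)
w_star = w.copy()                        # anchor: weights after task A

# Diagonal Fisher approximation for a linear-Gaussian model.
fisher = np.mean(XA ** 2, axis=0)

# Task B with the EWC penalty lam/2 * sum_i F_i (w_i - w*_i)^2 ...
lam = 50.0
w_ewc = w_star.copy()
for _ in range(500):
    w_ewc -= 0.01 * (grad(w_ewc, XB, yB) + lam * fisher * (w_ewc - w_star))

# ... versus plain sequential training on task B.
w_plain = w_star.copy()
for _ in range(500):
    w_plain -= 0.1 * grad(w_plain, XB, yB)
# mse on task A stays small for w_ewc, but grows large for w_plain
```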
Catastrophic remembering, also referred to as overgeneralization or extreme déjà vu,[33] refers to the tendency of artificial neural networks to abruptly lose the ability to discriminate between old and new data.[34] The essence of this problem is that when a large number of patterns are involved, the network is no longer learning to reproduce a specific population of patterns but is simply learning to "pass through" any input that it is given. The distinction between these two conditions is that in the first case the network will be able to distinguish between the learned population and any novel inputs ("recognize" the learned population), while in the latter case it will not.[35] Catastrophic remembering may occur as an outcome of eliminating catastrophic interference by using a large representative training set or enough sequential memory sets (memory replay or data rehearsal), leading to a breakdown in discrimination between input patterns that have been learned and those that have not.[33] The problem was initially investigated by Sharkey and Sharkey (1995),[33] Robins (1993),[35] Ratcliff (1990),[2] and French (1999).[10] Kaushik et al. (2021)[34] reintroduced the problem in the context of modern neural networks and proposed a solution.
|
https://en.wikipedia.org/wiki/Catastrophic_interference
|
A cognitive architecture is both a theory about the structure of the human mind and a computational instantiation of such a theory, used in the fields of artificial intelligence (AI) and computational cognitive science.[1] These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R (Adaptive Control of Thought – Rational) and Soar.
Research on cognitive architectures as software instantiations of cognitive theories was initiated by Allen Newell in 1990.[2]
A theory for a cognitive architecture is an "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together — in conjunction with knowledge and skills embodied within the architecture — to yield intelligent behavior in a diversity of complex environments."[3]
Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that the 1960 thesis by his student Ed Feigenbaum, EPAM, provided a possible "architecture for cognition" because it included some commitments for how more than one fundamental aspect of the human mind worked (in EPAM's case,[4] human memory and human learning).
John R. Anderson started research on human memory in the early 1970s, and his 1973 thesis with Gordon H. Bower provided a theory of human associative memory.[5] He included more aspects of his research on long-term memory and thinking processes into this research and eventually designed a cognitive architecture he called ACT. He and his students were influenced by Allen Newell's use of the term "cognitive architecture". Anderson's lab used the term to refer to the ACT theory as embodied in a collection of papers and designs. (There was not a complete implementation of ACT at the time.)
In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture of Cognition.[6] One can distinguish between the theory of cognition and the implementation of the theory. The theory of cognition outlined the structure of the various parts of the mind and made commitments to the use of rules, associative networks, and other aspects. The cognitive architecture implements the theory on computers. The software used to implement the cognitive architectures was also called "cognitive architectures". Thus, a cognitive architecture can also refer to a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems. Most often, these processes are based on human cognition, but other intelligent systems may also be suitable. Cognitive architectures form a subset of general agent architectures. The term 'architecture' implies an approach that attempts to model not only behavior, but also structural properties of the modelled system.
Cognitive architectures can be symbolic, connectionist, or hybrid.[7] Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition, or similarly ACT-R). Many of these architectures are based on the principle that cognition is computational (see computationalism). In contrast, subsymbolic processing specifies no such a priori assumptions, relying only on emergent properties of processing units (e.g., nodes[clarification needed]). Hybrid architectures such as CLARION combine both types of processing. A further distinction is whether the architecture is centralized, with a neural correlate of a processor at its core, or decentralized (distributed). Decentralization became popular in the mid-1980s under the names parallel distributed processing and connectionism, a prime example being the neural network. A further design issue is the decision between holistic and atomistic, or (more concretely) modular, structure.
In traditional AI, intelligence is programmed in a top-down fashion. Although such a system may be designed to learn, the programmer ultimately must imbue it with their own intelligence. Biologically inspired computing, on the other hand, takes a more bottom-up, decentralized approach; bio-inspired techniques often involve specifying a set of simple generic rules or a set of simple nodes, from the interaction of which the overall behavior emerges. The hope is to build up complexity until the end result is something markedly complex (see complex systems). However, it is also arguable that systems designed top-down on the basis of observations of what humans and other animals can do, rather than on observations of brain mechanisms, are also biologically inspired, though in a different way.[citation needed]
Some well-known cognitive architectures, in alphabetical order:
|
https://en.wikipedia.org/wiki/Cognitive_architecture
|
Connectionist expert systems are artificial neural network (ANN) based expert systems in which the ANN generates the inferencing rules, e.g., a fuzzy multilayer perceptron where linguistic and natural forms of input are used. In addition, rough set theory may be used to better encode knowledge in the weights, and genetic algorithms may be used to better optimize the search for solutions. Symbolic reasoning methods may also be incorporated (see hybrid intelligent system). (Also see expert system, neural network, clinical decision support system.)
|
https://en.wikipedia.org/wiki/Connectionist_expert_system
|
Connectomics is the production and study of connectomes, which are comprehensive maps of connections within an organism's nervous system. The study of neuronal wiring diagrams looks at how they contribute to the health and behavior of an organism.
There are two very different types of connectomes: microscale and macroscale. Microscale connectomics maps every neuron and synapse in an organism or a chunk of tissue, using electron microscopy and histology. This level of detail is only possible for small animals (flies and worms) or for tiny portions (less than 1 mm on a side) of large animal brains. Macroscale connectomics, on the other hand, refers to mapping large fiber tracts and functional gray matter areas within a much larger brain (typically human), typically using forms of MRI to map out structure and function. Somewhat confusingly, both fields simply refer to their maps as "connectomes".
Macroscale connectomics typically concentrates on the human nervous system, a network made up of billions of connections and responsible for our thoughts, emotions, actions, memories, function and dysfunction. Because these structures are physically large and experiments on humans must be non-invasive, typical methods are functional and structural MRI, which measure blood flow (functional) and water diffusivity (structural). Examples include the Human Connectome Project and others.[1][2] Connectomics in this regime aims to advance our understanding of mental health and cognition by understanding how cells in the nervous system are connected and communicate.
In contrast, microscale connectomics looks in much greater detail at much smaller circuits, such as the worm C. elegans, the fruit fly Drosophila,[3] and portions of mammalian brains such as the retina[4] and cortex. Connectomics at these scales searches for mechanistic explanations of how the nervous system operates.
Macroscale connectomes are commonly collected using diffusion-weighted magnetic resonance imaging (DW-MRI) and functional magnetic resonance imaging (fMRI). DW-MRI datasets can span the entire brain, imaging white matter between the cortex and subcortex; by providing information about the diffusion of water molecules in brain tissue, they allow researchers to infer the orientation and integrity of white matter pathways.[5] DW-MRI can be used in conjunction with tractography, which enables the reconstruction of white matter tracts in the brain.[5] By measuring the diffusion of water molecules in multiple directions, DW-MRI can estimate the local fiber orientations and generate a model of the brain's fiber pathways.[5] Tractography algorithms then trace the likely trajectories of these pathways, providing a representation of the brain's anatomical connectivity.[5] Metrics such as fractional anisotropy (FA), mean diffusivity (MD), or connectivity strength can be computed from DW-MRI data to assess the microstructural properties of white matter and quantify the strength of (long-range) connections between brain regions.[6]
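As an illustration, FA and MD are simple functions of the diffusion tensor's eigenvalues (standard formulas; the eigenvalues below are typical example magnitudes in mm²/s, not measured data):

```python
import numpy as np

def fa_md(eigenvalues):
    """Fractional anisotropy and mean diffusivity from diffusion tensor eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

# Anisotropic tensor: one dominant direction, as in a coherent white matter tract.
fa_tract, md_tract = fa_md([1.7e-3, 0.2e-3, 0.2e-3])
# Isotropic tensor: equal diffusion in all directions, as in CSF; FA is 0.
fa_csf, md_csf = fa_md([1.0e-3, 1.0e-3, 1.0e-3])
```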
In contrast to DW-MRI, fMRI measures the blood oxygenation level-dependent (BOLD) signal, which reflects changes in cerebral blood flow and oxygenation associated with neural activity, as regulated by the neurovascular unit.[7] When used together, a resting-state fMRI dataset and a DW-MRI dataset provide a comprehensive view of how regions of the brain are structurally connected and how closely they are communicating.[8][9] Resting-state functional connectivity (RSFC) analysis is a common method for measuring connectomes with fMRI that involves acquiring data while the subject is at rest, not performing any specific task.[10] RSFC examines the temporal correlation of the BOLD signals between different brain regions (after accounting for the confounding effect of other regions), providing insights into functional connectivity.[7]
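At its core, the RSFC computation reduces to a correlation matrix over regional time courses. A toy sketch with synthetic "BOLD" data (the signal model is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD time series: 200 timepoints for 4 regions; regions 0 and 1
# share a common fluctuation, regions 2 and 3 are independent noise.
T, R = 200, 4
shared = rng.normal(size=T)
bold = 0.5 * rng.normal(size=(T, R))
bold[:, 0] += shared
bold[:, 1] += shared

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(bold.T)        # R x R connectivity matrix
# fc[0, 1] is strongly positive; correlations involving regions 2, 3 stay near 0
```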
Techniques that actively manipulate the brain, often called neuromodulation, can provide insights into the connectome.[11] For example, transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation technique that delivers strong magnetic pulses through a coil held over the scalp, inducing electrical currents in targeted brain regions.[12] This can temporarily disrupt or enhance the activity of specific brain areas, allowing researchers to observe changes in connectivity.[12] Transcranial direct current stimulation (tDCS) is another non-invasive neuromodulation technique that applies a constant but relatively weak electrical current between scalp electrodes for a few minutes, modulating neuronal excitability.[13] It allows researchers to investigate the causal relationship between targeted brain regions and changes in connectivity.[13] tDCS increases functional connectivity within the brain, with a bias towards specific networks (e.g., cortical processing), and may even cause structural changes to take place in the white matter via myelination and in the gray matter via synaptic plasticity.[13] Another neuromodulation technique is deep brain stimulation (DBS), an invasive technique that involves surgically implanting electrodes into specific brain regions in order to apply localized, high-frequency electrical impulses.[14] This technique modulates brain networks and is often used to alleviate motor symptoms of disorders such as Parkinson's disease, essential tremor, and dystonia.[15] The functional and structural connectivity between electrodes can be used to predict patient outcomes and estimate optimal connectivity profiles.[14]
Electrophysiological methods measure differences in signals from different parts of the brain to estimate the connectivity between them, a process that requires a high signal-to-noise ratio to maintain the accuracy of the measurements and sufficient spatial resolution to attribute connectivity to specific regions of the brain.[16] These methods offer insights into real-time neural dynamics and functional connectivity between brain regions. Electroencephalography (EEG) measures differences in the electrical potential generated by oscillating currents at the surface of the scalp, owing to the non-invasive, external placement of its electrodes.[17] Meanwhile, magnetoencephalography (MEG) relies on the magnetic fields generated by the electrical activity of the brain to collect information.[17]
Microscale connectomics focuses on resolving individual cell-to-cell connectivity within much smaller volumes of nervous system tissue. The most common method for neural circuit reconstruction is chemical brain preservation followed by 3D electron microscopy,[18] which offers single-synapse resolution. The first microscale connectome encompassing an entire nervous system was produced for the nematode C. elegans in 1986.[19] This was done by manually annotating printouts of the EM scans.[19] Advances in EM acquisition, image alignment and segmentation, and the manipulation of large datasets have since allowed larger volumes to be imaged and segmented more easily. EM has been used to produce connectomes from a variety of nervous system samples, including publicly available datasets that encompass the entire brain[20] and ventral nerve cord[21][22] of adult Drosophila melanogaster, the full central nervous system (connected brain and ventral nerve cord) of larval Drosophila melanogaster,[23] and volumes from mouse[24] and human cortex.[25][26] The National Institutes of Health (NIH) has now invested in creating an EM connectome of an entire mouse brain.[27] EM can be combined with fluorescence in correlative microscopy, which can generate more interpretable data as it is able to automatically detect specific neuron types and can trace them in their entirety using fluorescent markers.[28]
Other imaging modalities are approaching the nanometer-scale resolution necessary for microscale connectomics. X-raynanotomographyusing asynchrotron sourcecan now reach <100 nm resolution, and can theoretically continue to improve.[29]Unlike EM, this technique does not require the tissue being imaged to be stained with heavy metals or to be physically sectioned.[29]Conventional light microscopy is constrained by light diffraction. Researchers have recently used stimulated emission depletion (STED) microscopy, asuper-resolution light microscopytechnique, to image theextracellular spaceof living human brain organoids and mouse hippocampal slice cultures, allowing for reconstruction of allneuriteswithin this volume by implementing a two-stage machine learning approach.[30]They combined this with fluorescently-tagged synaptic markers to find synaptic connectivity in the sample, as well as with calcium imaging to monitor neuronal activity.[30]However, this live-imaging approach was limited to ~130 nm resolution, and was therefore not able to reliably reconstruct thin axons over long distances.[30]In 2024, a new technique called LICONN combinedhydrogel expansionwith light microscopy (as opposed to electron microscopy) to generate neuron-level connectomes.[31]Potential advantages include cheaper equipment (optical vs EM microscopes), faster data acquisition, and multi-color labelling.
In addition to advanced microscopy techniques, connectomics heavily relies on software analysis tools and machine learning pipelines for reconstructing and analyzing neural networks. These tools are designed to process and interpret the vast amounts of data generated by volume electron microscopy and other imaging methods. Key steps in connectomic reconstruction includeimage segmentation, where individual neurons and their components are identified and annotated, and network mapping, where the connections between these neurons are established.[32]
Several software platforms facilitate these tasks.CATMAID(Collaborative Annotation Toolkit for Massive Amounts of Image Data) is a decentralized web interface allowing seamless navigation of large image stacks. It is designed to facilitate collaborative exploration, annotation, and efficient sharing of regions of interest via bookmarking.[33]Another example isWEBKNOSSOS, an online platform used for viewing, annotating, and sharing large 3D images, aiding in the detailed analysis of neural structures by allowing efficient navigation and annotation of 3D datasets.[34]Neuroglancer, a web-based tool designed for visualizing and navigating large-scale neuroscience data, offers features like 3D rendering and interactive exploration of brain datasets.
To see one of the first micro-connectomes at full-resolution, visit theOpen Connectome Project, which is hosting several connectome datasets, including the 12TB dataset from Bock et al. (2011).
Comparative connectomics is a subfield in neuroscience that focuses on comparing the connectomes, or neural network maps, across different species, developmental stages, or pathological states.[35]This comparative approach aims to uncover fundamental principles of brain organization and function by identifying conserved and divergent patterns in neural circuitry. By analyzing similarities and differences in the wiring diagrams of various organisms, researchers can gain insights into the evolutionary processes shaping the nervous system, as well as into the neural basis of behavior and cognition. For example, a 2022 study comparing synaptic connectivity in the mouse and human/macaque cortex revealed that, even though the human cortex contains three times more interneurons than the mouse cortex, the excitation-to-inhibition ratio is similar between the species.[26]
At the beginning of the connectome project, it was thought that the connections between neurons were unchangeable once established and that only individual synapses could be altered.[36]However, recent evidence suggests that connectivity is also subject to change, termedneuroplasticity. The brain can rewire in two ways: by forming or removing synapses within an established connection, or by forming or removing entire connections between neurons.[37]Both mechanisms of rewiring are useful for learning completely novel tasks that may require entirely new connections between regions of the brain.[38]However, the ability of the brain to gain or lose entire connections poses an issue for mapping a universal species connectome. Although rewiring happens on different scales, from microscale to macroscale, each scale does not occur in isolation. For example, in theC. elegansconnectome, the total number of synapses increases 5-fold from birth to adulthood, changing both local and global network properties.[39]Other developmental connectomes, such as the muscle connectome, retain some global network properties even though the number of synapses decreases 10-fold in early postnatal life.[40]
Evidence for macroscale rewiring mostly comes from research on grey and white matter density, which could indicate new connections or changes in axon density. Direct evidence for this level of rewiring comes from primate studies, using viral tracing to map the formation of connections. Primates that were taught to use novel tools developed new connections between the interparietal cortex and higher visual areas of the brain.[41]Further viral tracing studies have provided evidence that macroscale rewiring occurs in adult animals duringassociativelearning.[42]However, it is not likely that long-distance neural connections undergo extensive rewiring in adults. Small changes in an already establishednerve tractare likely what is observed in macroscale rewiring.
Rewiring at the mesoscale involves studying the presence or absence of entire connections between neurons.[38]Evidence for this level of rewiring comes from observations that local circuits form new connections as a result ofexperience-dependent plasticityin the visual cortex. Additionally, the number of local connections between pyramidal neurons in the primarysomatosensory cortexincreases following altered whisker sensory experience in rodents.[43]
Microscale rewiring is the formation or removal of synaptic connections between two neurons and can be studied with longitudinal two-photon imaging. Dendritic spines onpyramidal neuronshave been shown to form within days following sensory experience and learning.[44][45][46]Changes can be seen within as little as five hours onapical tuftsof layer-five pyramidal neurons in the primary motor cortex after a seed-reaching task in primates.[46]
For macroscale connectomes, the most common subject is the human. For microscale connectomes, some of the model systems are themouse,[47]thefruit fly,[48][49]thenematodeC. elegans,[50][51]and thebarn owl.[52]
TheHuman Connectome Project(HCP) was an initiative launched in 2009 by the National Institutes of Health (NIH) to map the neural pathways that underlie human brain function.[53]Additional programs within the Connectome Initiative, such as the Lifespan Connectome and Disease Connectome, focus on mapping brain connections across different age groups and studying connectome variations in individuals with specific clinical diagnoses.[53]The Connectome Coordination Facility serves as a centralized repository for HCP data and provides support to researchers.[53]
TheC. elegansroundworm has a simple nervous system of 302 neurons and 5000 synaptic connections (compared to the human brain, which has 100 billion neurons and more than 100 trillion chemical synapses).[54]It was the first animal, and remains one of very few, for which a full connectome has been mapped, using various imaging techniques, mainly serial electron microscopy.[55]This has made it a natural target for connectomics.
One project studied the aging process of the C. elegans brain by comparing worms at different ages from birth to adulthood.[56]Researchers found that the biggest change with age is in the wiring of its circuits, and that connectivity between and within brain regions increases with age.[56]Additional findings are likely through comparative connectomics, comparing and contrasting different species' brain networks to pinpoint relations to behavior.[56]
Another study analyzed in detail the connections among sensory neurons, interneurons, and neck motor neurons, along with their relation to behavior and environmental influences.[57]
Within the last decade, largely owing to technological advancements in EM data collection and image processing, multiple synapse-scale connectome datasets have been generated for the fruit flyDrosophila melanogasterin its adult and larval forms. The full fly connectome contains on the order of 100 thousand neurons and 100 million synapses.
The largest current dataset is the FlyWire segmentation and annotation of the female adult fly brain (FAFB) volume,[20]which encompasses the entire brain of an adult. Another adult brain dataset available is the Hemibrain, generated as a collaboration between the Janelia FlyEM team andGoogle.[58][59]This dataset is an incomplete but large section of the fly central brain.
There are also currently two publicly available datasets of the adult flyventral nerve cord(VNC). The female adult nerve cord (FANC) was collected using high-throughput SEM by Dr. Wei-Chung Allen Lee’s lab atHarvard Medical School.[3]The male adult nerve cord (MANC) was collected at Janelia using FIB-SEM.[22]The connectome of a completecentral nervous system(connected brain and VNC) of a first-instarD. melanogasterlarvahas been collected as a single volume. This dataset of 3016 neurons was segmented and annotated manually using CATMAID by a team led mainly by researchers at Janelia, Cambridge, and the MRC LMB.[23]
An online database known asMouseLightdisplays over 1000 neurons mapped in the mouse brain based on a collective database of sub-micron resolution images of these brains. This platform illustrates the thalamus, hippocampus, cerebral cortex, and hypothalamus based on single-cell projections.[60]The imaging technology used to produce this mouse brain map does not allow an in-depth look at synapses but can show axonal arborizations, which contain many synapses.[61]A limiting factor in studying mouse connectomes, much as with humans, is the complexity of labeling all the cell types of the mouse brain, a process that would require the reconstruction of more than 100,000 neurons, even though the imaging technology is advanced enough to do so.[61]
Mouse models in the lab have provided insight into genetic brain disorders. One study examined mice with a deletion of 22q11.2 (a region of chromosome 22 and a known genetic risk factor for schizophrenia).[62]The findings showed that this deletion impaired neural activity underlying working memory in mice, similar to its effect in humans.[62]
Macroscale and microscale connectomics have very different applications. Macroscale connectomics has furthered our understanding of variousbrain networksincluding visual,[63][64]brainstem,[65][66]and language networks,[67][68]among others. Microscale connectomics, on the other hand, concentrates on mechanistic explanations of how the neural circuits of the brain perform specific functions. Examples include motion vision,[69]olfactory learning,[70]navigation,[71]and escape responses,[72]all inDrosophila.
By comparing diseased and healthy connectomes, we can gain insight into certain psychopathologies, such asneuropathic pain, and potential therapies for them. Generally, the field ofneurosciencewould benefit from standardization and openly shared raw data. For example, connectome maps can be used to inform computational models of whole-brain dynamics.[73][self-published source?]Current neural networks mostly rely on probabilistic representations of connectivity patterns.[74]Connectivity matrices (checkerboard diagrams of connectomics) have been used in stroke recovery to evaluate the response to treatment viaTranscranial Magnetic Stimulation.[75]Similarly,connectograms(circular diagrams of connectomics) have been used intraumatic brain injurycases to document the extent of damage to neural networks.[76][77]
These methods of research can reveal information about different mental illnesses and brain disorders. The tracking of brain networks in alignment with diseases would be enhanced by advanced technologies that can produce complex images of neural networks.[78]With this in mind, diseases can not only be tracked but also predicted based on the behavior of previous cases, a process that would otherwise take an extensive period of time to collect and record.[78]Studies of brain disorders such as schizophrenia and bipolar disorder that focus on the connectomics involved are particularly informative. Both of these disorders have a similar genetic origin,[78][79]and research found that those with higher polygenic scores for schizophrenia and bipolar disorder show lower connectivity in neuroimaging.[80]This line of research addresses real-world applications of connectomics, combining imaging methods with genetics to probe the origins and outcomes of genetically related disorders.[78]Another study supports the relation between connectivity and likelihood of disease: researchers found that those diagnosed with schizophrenia have less structurally complete brain networks.[81]The main drawback in this area of connectomics is the inability to image whole-brain networks, which makes it hard to draw complete and accurate conclusions about the cause and effect of diseases' neural pathways.[81]Connectomics has been used to study stroke patients with MRI imaging; however, because so little research has been done in this specific area, conclusions cannot yet be drawn regarding the relation between strokes and connectivity.[82]The research did highlight an association between poor connectivity in the language system and poor motor coordination, but the results were not substantial enough to support a strong claim.[82]For behavioral disorders, it can be difficult to
diagnose and treat, because clinical practice largely revolves around a symptoms-based approach and many disorders have overlapping symptoms. Connectomics has been used to find neuromarkers associated with social anxiety disorder (SAD) at a high precision rate, with the aim of improving related symptoms.[83]This is an expanding field with room for greater application to mental health disorders and brain malfunction, as current research builds on neural networks and the psychopathology involved.[84]
Human connectomes show individual variability, which can be measured with thecumulative distribution function.[85]By analyzing the individual variability of human connectomes in distinct cerebral areas, it was found that the frontal and limbic lobes are more conservative, while the edges in the temporal and occipital lobes are more diverse. A "hybrid" conservative/diverse distribution was detected in the paracentral lobule and the fusiform gyrus. Smaller cortical areas were also evaluated: the precentral gyri were found to be more conservative, and the postcentral and superior temporal gyri to be very diverse.
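The measurement idea above can be sketched with an empirical cumulative distribution function over per-edge variability scores. Everything below is illustrative: the edge groupings, the score values, and the 0.2 threshold are invented for the sketch, not taken from the cited study.

```python
# Sketch: empirical cumulative distribution of edge "variability" scores
# across subjects. Edge groupings and values are illustrative only.

def ecdf(values):
    """Return a function giving the fraction of values <= x."""
    xs = sorted(values)
    n = len(xs)
    def cdf(x):
        return sum(1 for v in xs if v <= x) / n
    return cdf

# Hypothetical per-edge variability scores (across-subject weight spread)
frontal_edges = [0.10, 0.12, 0.15, 0.11]   # "conservative": low spread
temporal_edges = [0.30, 0.45, 0.50, 0.40]  # "diverse": high spread

cdf_frontal = ecdf(frontal_edges)
cdf_temporal = ecdf(temporal_edges)

# At a threshold of 0.2, all frontal edges fall below it while no
# temporal edge does, so the two CDFs separate the lobes cleanly.
print(cdf_frontal(0.2), cdf_temporal(0.2))
```

A conservative region's CDF rises early (most edges have low variability), while a diverse region's CDF rises late; comparing the curves region by region is what distinguishes the lobes in the text.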
The recent advancements in the field of connectomics have sparked conversation around its relation to the field of genomics. Recently, scientists in the field have highlighted the parallels between this project and large-scale genomics initiatives.[86]Additionally, they have referenced the need for integration with other scientific disciplines, particularly genetics. While genomics focuses on the genetic blueprint of an organism, connectomics provides insights into the structural and functional connectivity of the brain. By integrating these two fields, researchers can explore how genetic variations and gene expression patterns influence the wiring and organization of neural circuits.[87]This interdisciplinary approach helps uncover the relationship between genes, neural connectivity, and brain function. Additionally, connectomics can benefit from genomics by leveraging genetic tools and techniques to manipulate specific genes or neuronal populations to study their impact on neural circuitry and behavior.[86]Understanding the genetic basis of neural connectivity can enhance our understanding of brain development, neural plasticity, and the mechanisms underlying various neurological disorders.
Thehuman genome projectinitially faced many of the above criticisms, but was nevertheless completed ahead of schedule and has led to many advances in genetics. Some have argued that analogies can be made between genomics and connectomics, and therefore we should be at least slightly more optimistic about the prospects in connectomics.[88]Others have criticized attempts towards a microscale connectome, arguing that we do not have enough knowledge about where to look for insights, or that it cannot be completed within a realistic time frame.[89]
Using fMRI in theresting stateand during tasks, functions of the connectome circuits are being studied.[90]Just as detailed road maps of the Earth's surface do not tell us much about the kind of vehicles that travel those roads or what cargo they are hauling, to understand how neural structures result in specific functional behavior such asconsciousness, it is necessary to build theories that relate functions to anatomical connectivity.[91]However, the bond between structural and functional connectivity is not straightforward. Computational models of whole-brain network dynamics are valuable tools to investigate the role of the anatomical network in shaping functional connectivity.[92][93]In particular, computational models can be used to predict the dynamic effect oflesionsin the connectome.[94][95]
A connectome can be viewed as agraph, and the rich tools, definitions and algorithms ofgraph theoryandnetwork sciencecan be applied to these graphs. In the case of a micro-scale connectome, the nodes of this network (or graph) are the neurons, and the edges correspond to thesynapsesbetween those neurons. For the macro-scale connectome, the nodes correspond to ROIs (regions of interest), while the edges of the graph are derived from the axons interconnecting those areas. Thus connectomes are sometimes referred to asbrain graphs, as they are indeed graphs in a mathematical sense which describe the connections in the brain (or, in a broader sense, the whole nervous system).
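The graph view above can be made concrete with a minimal sketch: neurons as nodes, synapses as weighted directed edges. The neuron names and synapse counts below are illustrative, not data from any real connectome.

```python
# Sketch: a micro-scale connectome as a directed, weighted graph.
# Nodes are neurons; each edge weight is the synapse count between
# a pre- and postsynaptic pair. All names and counts are made up.

connectome = {
    ("sensory_1", "inter_1"): 12,   # 12 synapses sensory_1 -> inter_1
    ("sensory_2", "inter_1"): 7,
    ("inter_1", "motor_1"): 20,
    ("inter_1", "motor_2"): 5,
}

def out_degree(graph, neuron):
    """Number of distinct postsynaptic partners of `neuron`."""
    return sum(1 for (pre, _post) in graph if pre == neuron)

def total_synapses(graph, neuron):
    """Total outgoing synapse count of `neuron`."""
    return sum(w for (pre, _post), w in graph.items() if pre == neuron)

print(out_degree(connectome, "inter_1"))      # 2 postsynaptic partners
print(total_synapses(connectome, "inter_1"))  # 25 outgoing synapses
```

For a macro-scale brain graph the same structure applies with ROI names as nodes and fiber counts or connection probabilities as weights.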
One group of researchers (Iturria-Medina et al., 2008)[96]has constructed connectome data sets usingdiffusion tensor imaging(DTI)[97][98]followed by the derivation of average connection probabilities between 70 and 90 cortical andbasalbrain gray matter areas. All networks were found to have small-world attributes and "broad-scale" degree distributions. An analysis ofbetweenness centralityin these networks demonstrated high centrality for theprecuneus, theinsula, thesuperior parietaland the superiorfrontal cortex. Another group (Gong et al. 2008)[99]has applied DTI to map a network of anatomical connections between 78 cortical regions. This study also identified several hub regions in the human brain, including the precuneus and thesuperior frontal gyrus.
Hagmann et al. (2007)[100]constructed a connection matrix from fiber densities measured between homogeneously distributed and equal-sized ROIs numbering between 500 and 4000. A quantitative analysis of connection matrices obtained for approximately 1,000 ROIs and approximately 50,000 fiber pathways from two subjects demonstrated an exponential (one-scale) degree distribution as well as robust small-world attributes for the network. The data sets were derived from diffusion spectrum imaging (DSI) (Wedeen, 2005),[101]a variant of diffusion-weighted imaging[102][103]that is sensitive to intra-voxel heterogeneities in diffusion directions caused by crossing fiber tracts and thus allows more accurate mapping of axonal trajectories than other diffusion imaging approaches (Wedeen, 2008).[104]The combination of whole-head DSI datasets acquired and processed according to the approach developed by Hagmann et al. (2007)[100]with the graph analysis tools conceived initially for animal tracing studies (Sporns, 2006; Sporns, 2007)[105][106]allow a detailed study of the network structure of human cortical connectivity (Hagmann et al., 2008).[107]The human brain network was characterized using a broad array of network analysis methods including core decomposition, modularity analysis, hub classification andcentrality. Hagmannet al. presented evidence for the existence of a structural core of highly and mutually interconnected brain regions, located primarily in posterior medial and parietal cortex. The core comprises portions of theposterior cingulate cortex, the precuneus, thecuneus, theparacentral lobule, theisthmus of the cingulate, the banks of thesuperior temporal sulcus, and theinferiorandsuperior parietal cortex, all located in bothcerebral hemispheres.
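Two of the graph measures behind the "small-world" claims above can be sketched directly: the local clustering coefficient and shortest path length. The toy five-node ROI network below is illustrative; the cited studies operate on hundreds to thousands of ROIs.

```python
from collections import deque

# Sketch: clustering coefficient and shortest path length on a toy
# undirected ROI graph (node names and edges are illustrative).
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C", "E"},
    "E": {"D"},
}

def clustering(adj, node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

def shortest_path(adj, src, dst):
    """Hop count between two nodes via breadth-first search."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None

print(clustering(adj, "A"))         # neighbours B,C,D: B-C and C-D linked
print(shortest_path(adj, "B", "E"))
```

A small-world network combines high average clustering (like a lattice) with short average path length (like a random graph); computing both over all nodes is the standard check.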
A subfield of connectomics deals with the comparison of the brain graphs of multiple subjects. It is possible to build a consensus graph such as theBudapest Reference Connectomeby allowing only edges that are present in at leastk{\displaystyle k}connectomes, for a selectablek{\displaystyle k}parameter. The Budapest Reference Connectome led researchers to the discovery of the Consensus Connectome Dynamics of the human brain graphs. The edges that appear in all of the brain graphs form a connected subgraph around thebrainstem. By allowing gradually less frequent edges, this core subgraph grows continuously, like ashrub. The growth dynamics may reflect individualbrain developmentand provide an opportunity to direct some edges of the human consensus brain graph.[108]
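The consensus construction just described reduces to counting how many subject graphs contain each edge and keeping those above the threshold k. The subject edge sets below are invented for illustration; a real reference connectome aggregates hundreds of subjects.

```python
# Sketch of the consensus idea behind a reference connectome:
# keep an edge only if it occurs in at least k of the individual
# brain graphs. The subject graphs below are illustrative edge sets.

subject_graphs = [
    {("brainstem", "thalamus"), ("thalamus", "cortex"), ("cortex", "cuneus")},
    {("brainstem", "thalamus"), ("thalamus", "cortex")},
    {("brainstem", "thalamus"), ("cortex", "cuneus")},
]

def consensus(graphs, k):
    """Edges present in at least k of the given graphs."""
    counts = {}
    for g in graphs:
        for edge in g:
            counts[edge] = counts.get(edge, 0) + 1
    return {e for e, c in counts.items() if c >= k}

# With k equal to the number of subjects, only universally shared
# edges survive; lowering k lets the consensus graph grow outward.
print(sorted(consensus(subject_graphs, 3)))
print(sorted(consensus(subject_graphs, 2)))
```

Sweeping k from the number of subjects down to 1 and watching the surviving subgraph grow is exactly the "consensus connectome dynamics" experiment described in the text.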
Alternatively, local differences that are statistically significant among groups have attracted more attention, since they highlight specific connections and therefore shed more light on specific brain traits or pathology. Hence, algorithms to find local differences between graph populations have also been introduced (e.g. to compare case versus control groups).[109]Those can be found by using either an adjustedt-test[110]or a sparsity model,[109]with the aim of finding statistically significant connections which differ among those groups.
Comparisons between the connectomes (or braingraphs) of healthy women and men[111][112][113]have shown that in several deep graph-theoretical parameters, the structural connectome of women is significantly better connected than that of men. For example, women's connectomes have more edges, higher minimum bipartition width, largereigengap, and greater minimumvertex coverthan those of men. The minimum bipartition width (or, in other words, the minimum balancedcut) is a well-known quality measure of computermultistage interconnection networks; it describes the possible bottlenecks in network communication: the higher this value, the better the network. The larger eigengap shows that the female connectome is a betterexpander graphthan the connectome of males. The better expansion property, the higher minimum bipartition width and the greater minimum vertex cover show deep advantages in network connectivity in the female braingraph.
Connectomes generally exhibit asmall-worldcharacter, with overall cortical connectivity decreasing with age.[114]The aim of theHCP Lifespan Pilot Project, ongoing as of 2015, is to identify connectome differences between six age groups (4–6, 8–9, 14–15, 25–35, 45–55, 65–75).
More recently,connectogramshave been used to visualize full-brain data by placing cortical areas around a circle, organized by lobe.[115][116]Inner circles then depict cortical metrics on a color scale. White matter fiber connections in DTI data are then drawn between these cortical regions and weighted byfractional anisotropyand strength of the connection. Such graphs have even been used to analyze the damage done to the famous traumatic brain injury patientPhineas Gage.[117]
Statistical graph theory is an emerging discipline which is developing sophisticated pattern recognition and inference tools to parse these brain graphs (Goldenberg et al., 2009).
In 2005, Dr.Olaf SpornsatIndiana Universityand Dr. Patric Hagmann atLausanne University Hospitalindependently and simultaneously suggested the term "connectome" to refer to a map of the neural connections within the brain. This term was directly inspired by the ongoing effort to sequence the humangenetic code—to build agenome.
"Connectomics"has been defined as the science concerned with assembling and analyzing connectome data sets.[118]
In their 2005 paper, "The Human Connectome, a structural description of the human brain", Sporns et al. wrote:
To understand the functioning of a network, one must know its elements and their interconnections. The purpose of this article is to discuss research strategies aimed at a comprehensive structural description of the network of elements and connections forming the human brain. We propose to call this dataset the human "connectome," and we argue that it is fundamentally important incognitive neuroscienceandneuropsychology. The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate, and will provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted.[36]
In his 2005 Ph.D. thesis,Fromdiffusion MRIto brain connectomics, Hagmann wrote:
It is clear that, like the genome, which is much more than just a juxtaposition ofgenes, the set of all neuronal connections in the brain is much more than the sum of their individual components. The genome is an entity itself, as it is from the subtle gene interaction that [life] emerges. In a similar manner, one could consider the brain connectome, set of all neuronal connections, as one single entity, thus emphasizing the fact that the huge brainneuronal communicationcapacity and computational power critically relies on this subtle and incredibly complex connectivity architecture.[118]
The term "connectome" was more recently popularized bySebastian Seung'sI am my Connectomespeech given at the 2010TED conference, which discusses the high-level goals of mapping the human connectome, as well as ongoing efforts to build a three-dimensional neural map of brain tissue at the microscale.[119]In 2012, Seung published the bookConnectome: How the Brain's Wiring Makes Us Who We Are.
Websites to explore publicly available connectomics datasets:
Macroscale Connectomics (Healthy Young Adult Datasets)
For a more comprehensive list of open macroscale datasets, check out this article
Microscale Connectomics
|
https://en.wikipedia.org/wiki/Connectomics
|
Deep image prioris a type ofconvolutional neural networkused to enhance a given image with no prior training data other than the image itself.
A neural network is randomly initialized and used as a prior to solveinverse problemssuch asnoise reduction,super-resolution, andinpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.
Inverse problemssuch asnoise reduction,super-resolution, andinpaintingcan be formulated as theoptimization taskx∗=minxE(x;x0)+R(x){\displaystyle x^{*}=min_{x}E(x;x_{0})+R(x)}, wherex{\displaystyle x}is an image,x0{\displaystyle x_{0}}a corrupted representation of that image,E(x;x0){\displaystyle E(x;x_{0})}is a task-dependent data term, and R(x) is theregularizer. This forms an energy minimization problem.
Deep neural networkslearn a generator/decoderx=fθ(z){\displaystyle x=f_{\theta }(z)}which maps a randomcode vectorz{\displaystyle z}to an imagex{\displaystyle x}.
The image corruption method used to generatex0{\displaystyle x_{0}}is selected for the specific application.
In this approach, theR(x){\displaystyle R(x)}prior is replaced with the implicit prior captured by the neural network (whereR(x)=0{\displaystyle R(x)=0}for images that can be produced by adeep neural networkandR(x)=+∞{\displaystyle R(x)=+\infty }otherwise). This yields the equation for the minimizerθ∗=argminθE(fθ(z);x0){\displaystyle \theta ^{*}=argmin_{\theta }E(f_{\theta }(z);x_{0})}and the result of the optimization processx∗=fθ∗(z){\displaystyle x^{*}=f_{\theta ^{*}}(z)}.
The minimization overθ{\displaystyle \theta }(typically bygradient descent) starts from randomly initialized parameters and descends to a local optimumθ∗{\displaystyle \theta ^{*}}, yielding the restorationx∗{\displaystyle x^{*}}.
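The optimization loop above can be sketched with a deliberately tiny stand-in for the generator. The real method uses a convolutional network for f_theta; here a single scalar parameter plays that role, purely to show gradient descent on theta against the data term. The code vector, observation, and learning rate are all invented for the sketch.

```python
# Toy sketch of the DIP optimization loop. The real f_theta is a
# convolutional generator (a U-Net); here f_theta(z) = theta * z is a
# one-parameter "generator", just to show the descent on theta.

z = [0.5, 1.0, 1.5, 2.0]    # fixed code vector (illustrative)
x0 = [1.0, 2.0, 3.0, 4.0]   # observation (here exactly 2 * z)

def f(theta, z):
    return [theta * zi for zi in z]

def energy(theta):
    """Data term E(f_theta(z); x0) = ||f_theta(z) - x0||^2."""
    return sum((a - b) ** 2 for a, b in zip(f(theta, z), x0))

theta, lr = 0.0, 0.05
for _ in range(200):        # gradient descent on theta
    grad = sum(2 * (theta * zi - x0i) * zi for zi, x0i in zip(z, x0))
    theta -= lr * grad

x_star = f(theta, z)        # restored image f_{theta*}(z)
print(round(theta, 4))      # converges to 2.0 for this toy problem
```

With a real image and a deep generator, the same loop runs over network weights, and the early-stopping behaviour discussed below is what prevents the fit from absorbing the noise.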
Parametersθ{\displaystyle \theta }with enough capacity may be used to recover any image, including its noise. However, the network is reluctant to pick up noise, because noise presents high impedance to the generator while useful signal offers low impedance. As a result, the parameters approach a good-lookinglocal optimumso long as the number of iterations in the optimization process remains low enough not tooverfitthe data.
Typically, the deep neural network model for deep image prior uses aU-Netlike model without the skip connections that connect the encoder blocks with the decoder blocks. The authors in their paper mention that "Our findings here (and in other similar comparisons) seem to suggest that having deeper architecture is beneficial, and that having skip-connections that work so well for recognition tasks (such as semantic segmentation) is highly detrimental."[1]
The principle ofdenoisingis to recover an imagex{\displaystyle x}from a noisy observationx0{\displaystyle x_{0}}, wherex0=x+ϵ{\displaystyle x_{0}=x+\epsilon }. The distributionϵ{\displaystyle \epsilon }is sometimes known (e.g.: profiling sensor and photon noise[2]) and may optionally be incorporated into the model, though this process works well in blind denoising.
The quadratic energy functionE(x,x0)=||x−x0||2{\displaystyle E(x,x_{0})=||x-x_{0}||^{2}}is used as the data term, plugging it into the equation forθ∗{\displaystyle \theta ^{*}}yields the optimization problemminθ||fθ(z)−x0||2{\displaystyle min_{\theta }||f_{\theta }(z)-x_{0}||^{2}}.
Super-resolutionis used to generate a higher resolution version of image x. The data term is set toE(x;x0)=||d(x)−x0||2{\displaystyle E(x;x_{0})=||d(x)-x_{0}||^{2}}where d(·) is adownsampling operatorsuch asLanczosthat decimates the image by a factor t.
Inpaintingis used to reconstruct a missing area in an imagex0{\displaystyle x_{0}}. These missing pixels are defined as the binary maskm∈{0,1}H×V{\displaystyle m\in \{0,1\}^{H\times V}}. The data term is defined asE(x;x0)=||(x−x0)⊙m||2{\displaystyle E(x;x_{0})=||(x-x_{0})\odot m||^{2}}(where⊙{\displaystyle \odot }is theHadamard product).
The intuition behind this is that the loss is computed only on the known pixels in the image, and the network learns enough about the image to fill in its unknown parts even though the computed loss does not include those pixels. This strategy is used to remove image watermarks by treating the watermark as missing pixels in the image.
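The masked data term defined above is easy to make concrete: multiply the residual elementwise by the binary mask before squaring, so missing pixels contribute nothing. The 2×3 "images" below are invented for illustration.

```python
# Sketch of the inpainting data term: the residual (x - x0) is masked
# elementwise (Hadamard product) by m, so pixels with m = 0 never
# contribute to the loss. The tiny 2x3 images are illustrative.

x  = [[1.0, 2.0, 9.0],
      [4.0, 5.0, 6.0]]   # candidate reconstruction
x0 = [[1.0, 2.0, 0.0],
      [4.0, 5.0, 6.0]]   # observation with a hole at position (0, 2)
m  = [[1, 1, 0],
      [1, 1, 1]]         # binary mask: 0 marks the missing pixel

def masked_loss(x, x0, m):
    """E(x; x0) = || (x - x0) * m ||^2 with an elementwise mask."""
    return sum(((xi - x0i) * mi) ** 2
               for rx, r0, rm in zip(x, x0, m)
               for xi, x0i, mi in zip(rx, r0, rm))

# The large disagreement at the masked pixel (9.0 vs 0.0) is ignored:
print(masked_loss(x, x0, m))   # 0.0
```

Watermark removal works the same way: set m to zero over the watermark region, and the generator is free to paint anything plausible there.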
This approach may be extended to multiple images. A straightforward example mentioned by the author is the reconstruction of an image to obtain natural light and clarity from a flash–no-flash pair. Video reconstruction is possible but it requires optimizations to take into account the spatial differences.
See Astronomy Picture of the Day (APOD) of 2024-02-18[4]
|
https://en.wikipedia.org/wiki/Deep_image_prior
|
Digital morphogenesisis a type of generative art in which complex shape development, ormorphogenesis, is enabled by computation. This concept is applicable in many areas of design, art, architecture, and modeling. The concept was originally developed in the field ofbiology, later ingeology,geomorphology, andarchitecture.
Inarchitecture, it describes tools and methods for creating forms and adapting them to a known environment.[1][2][3]
Developments in digital morphogenesis have allowed construction and analysis of structures in more detail than could have been put into a blueprint or model by hand, with structure at all levels defined by iterative algorithms. As fabrication techniques advance, it is becoming possible to produce objects with fractal or other elaborate structures.
|
https://en.wikipedia.org/wiki/Digital_morphogenesis
|
In computer strategy games, for example inshogiandchess, anefficiently updatable neural network(NNUE, a Japanese wordplay onNue, sometimes stylised asƎUИИ) is aneural network-basedevaluation functionwhose inputs arepiece-square tables, or variants thereof like the king-piece-square table.[1]NNUE is used primarily for the leaf nodes of thealpha–betatree.[2]
NNUE was invented byYu Nasuand introduced tocomputer shogiin 2018.[3][4]On 6 August 2020, NNUE was first ported to a chess engine,Stockfish12.[5][6]Since 2021, many of the top-rated classical chess engines, such asKomodo Dragon, have an NNUE implementation to remain competitive.
NNUE runs efficiently oncentral processing units(CPU) without a requirement for agraphics processing unit(GPU).[7][8]In contrast,deep neural network-based chess engines such asLeela Chess Zerorequire a GPU.[9][10]
The neural network used for the original 2018 computer shogi implementation consists of four fully connected weight layers: W1 (16-bit integers) and W2, W3 and W4 (8-bit integers). It usesReLUactivation functions and outputs a single number, the score of the board.
W1 encoded the king's position and therefore this layer needed only to be re-evaluated once the king moved. It usedincremental computationandsingle instruction multiple data(SIMD) techniques along with appropriateintrinsic instructions.[3]
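The incremental computation idea can be illustrated with a toy first layer. The feature count, hidden size and integer weights below are made-up; a real NNUE uses tens of thousands of king–piece–square features and quantized int16/int8 arithmetic with SIMD intrinsics:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, HIDDEN = 64, 8          # toy sizes, far smaller than real NNUE
W1 = rng.integers(-3, 4, size=(N_FEATURES, HIDDEN)).astype(np.int32)

def accumulator(active):
    """Full first-layer evaluation: sum of the W1 rows of all active features."""
    acc = np.zeros(HIDDEN, dtype=np.int32)
    for f in active:
        acc += W1[f]
    return acc

def update(acc, removed, added):
    """Incremental update: when a move toggles a few features, subtract the
    row that became inactive and add the row that became active, instead of
    recomputing the whole sum."""
    return acc - W1[removed] + W1[added]

active = [3, 17, 42]
acc = accumulator(active)
# A "move": feature 17 turns off, feature 20 turns on.
acc2 = update(acc, removed=17, added=20)
assert np.array_equal(acc2, accumulator([3, 42, 20]))
```

The saving comes from a move touching only a handful of features, so the O(features) resummation collapses to a couple of vector adds per move.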
|
https://en.wikipedia.org/wiki/Efficiently_updatable_neural_network
|
Evolutionary algorithms(EA) reproduce essential elements of thebiological evolutionin acomputer algorithmin order to solve “difficult” problems, at leastapproximately, for which no exact or satisfactory solution methods are known. They belong to the class ofmetaheuristicsand are asubsetofpopulation-based bio-inspired algorithms[1]andevolutionary computation, which themselves are part of the field ofcomputational intelligence.[2]The mechanisms of biological evolution that an EA mainly imitates arereproduction,mutation,recombinationandselection.Candidate solutionsto theoptimization problemplay the role of individuals in a population, and thefitness functiondetermines the quality of the solutions (see alsoloss function).Evolutionof the population then takes place after the repeated application of the above operators.
Evolutionary algorithms often perform well in approximating solutions to all types of problems because they ideally do not make any assumption about the underlyingfitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations ofmicroevolutionary processesand planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor.[3]In fact, this computational complexity is due to fitness function evaluation.Fitness approximationis one of the solutions to overcome this difficulty. However, seemingly simple EAs can often solve complex problems;[4][5][6]therefore, there may be no direct link between algorithm complexity and problem complexity.
The following is an example of a generic evolutionary algorithm:[7][8][9]
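A minimal sketch of such a generic EA for maximisation; the real-vector encoding, truncation selection, intermediate recombination, Gaussian mutation and sphere fitness below are illustrative choices, not prescribed by the generic scheme:

```python
import random

random.seed(0)

def evolve(fitness, dim=5, pop_size=30, generations=100, sigma=0.3):
    """Generic EA cycle: initialise, evaluate, select, recombine,
    mutate, form the next generation (with elitism), repeat."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - 1:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # intermediate recombination
            child = [x + random.gauss(0, sigma) for x in child]  # Gaussian mutation
            children.append(child)
        pop = [scored[0]] + children                           # elitism: keep the best
    return max(pop, key=fitness)

# Maximise the negated sphere function; the optimum is the origin.
best = evolve(lambda x: -sum(v * v for v in x))
print(sum(v * v for v in best))
```

Concrete EAs differ mainly in which representation and which selection, recombination and mutation operators fill these slots.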
Similar techniques differ ingenetic representationand other implementation details, and the nature of the particular applied problem.
The following theoretical principles apply to all or almost all EAs.
Theno free lunch theoremof optimization states that all optimization strategies are equally effective when the set of all optimization problems is considered. Under the same condition, no evolutionary algorithm is fundamentally better than another. This can only be the case if the set of all problems is restricted. This is exactly what is inevitably done in practice. Therefore, to improve an EA, it must exploit problem knowledge in some form (e.g. by choosing a certain mutation strength or aproblem-adapted coding). Thus, if two EAs are compared, this constraint is implied. In addition, an EA can use problem specific knowledge by, for example, not randomly generating the entire start population, but creating some individuals throughheuristicsor other procedures.[18][19]Another possibility to tailor an EA to a given problem domain is to involve suitable heuristics,local search proceduresor other problem-related procedures in the process of generating the offspring. This form of extension of an EA is also known as amemetic algorithm. Both extensions play a major role in practical applications, as they can speed up the search process and make it more robust.[18][20]
For EAs in which, in addition to the offspring, at least the best individual of the parent generation is used to form the subsequent generation (so-called elitist EAs), there is a general proof ofconvergenceunder the condition that anoptimumexists.Without loss of generality, a maximum search is assumed for the proof:
From the property of elitist offspring acceptance and the existence of the optimum it follows that per generationk{\displaystyle k}an improvement of the fitnessF{\displaystyle F}of the respective best individualx′{\displaystyle x'}will occur with a probabilityP>0{\displaystyle P>0}. Thus:F(xk+1′)≥F(xk′){\displaystyle F(x'_{k+1})\geq F(x'_{k})}
I.e., the fitness values represent amonotonicallynon-decreasingsequence, which isboundeddue to the existence of the optimum. From this follows the convergence of the sequence against the optimum.
Since the proof makes no statement about the speed of convergence, it is of little help in practical applications of EAs. But it does justify the recommendation to use elitist EAs. However, when using the usualpanmicticpopulation model, elitist EAs tend toconverge prematurelymore than non-elitist ones.[21]In a panmictic population model, mate selection (see step 4 of thegeneric definition) is such that every individual in the entire population is eligible as a mate. Innon-panmictic populations, selection is suitably restricted, so that the dispersal speed of better individuals is reduced compared to panmictic ones. Thus, the general risk of premature convergence of elitist EAs can be significantly reduced by suitable population models that restrict mate selection.[22][23]
With the theory of virtual alphabets,David E. Goldbergshowed in 1990 that by using a representation with real numbers, an EA that uses classicalrecombination operators(e.g. uniform or n-point crossover) cannot reach certain areas of the search space, in contrast to a coding with binary numbers.[24]This results in the recommendation for EAs with real representation to use arithmetic operators for recombination (e.g. arithmetic mean or intermediate recombination). With suitable operators, real-valued representations are more effective than binary ones, contrary to earlier opinion.[25][26]
A possible limitation[according to whom?]of many evolutionary algorithms is their lack of a cleargenotype–phenotype distinction. In nature, the fertilized egg cell undergoes a complex process known asembryogenesisto become a maturephenotype. This indirectencodingis believed to make the genetic search more robust (i.e. reduce the probability of fatal mutations), and also may improve theevolvabilityof the organism.[27][28]Such indirect (also known as generative or developmental) encodings also enable evolution to exploit the regularity in the environment.[29]Recent work in the field ofartificial embryogeny, or artificial developmental systems, seeks to address these concerns. Andgene expression programmingsuccessfully explores a genotype–phenotype system, where the genotype consists of linear multigenic chromosomes of fixed length and the phenotype consists of multiple expression trees or computer programs of different sizes and shapes.[30][improper synthesis?]
Both method classes have in common that their individual search steps are determined by chance. The main difference, however, is that EAs, like many other metaheuristics, learn from past search steps and incorporate this experience into the execution of the next search steps in a method-specific form. With EAs, this is done firstly through the fitness-based selection operators for partner choice and the formation of the next generation. And secondly, in the type of search steps: in EAs, they start from a current solution and change it, or they mix the information of two solutions. In contrast, when new solutions are generated at random inMonte-Carlo methods, there is usually no connection to existing solutions.[31][32]
If, on the other hand, the search space of a task is such that there is nothing to learn, Monte-Carlo methods are an appropriate tool, as they do not contain any algorithmic overhead that attempts to draw suitable conclusions from the previous search. An example of such tasks is the proverbialsearch for a needle in a haystack, e.g. in the form of a flat (hyper)plane with a single narrow peak.
The areas in which evolutionary algorithms are practically used are almost unlimited[6]and range from industry,[33][34]engineering,[3][4][35]complex scheduling,[5][36][37]agriculture,[38]robot movement planning[39]and finance[40][41]to research[42][43]andart. The application of an evolutionary algorithm requires some rethinking from the inexperienced user, as the approach to a task using an EA differs from conventional exact methods and is usually not part of the curriculum of engineers or other disciplines. For example, the fitness calculation must not only formulate the goal but also support the evolutionary search process towards it, e.g. by rewarding improvements that do not yet lead to a better evaluation of the original quality criteria. For example, if peak utilisation of resources such as personnel deployment or energy consumption is to be avoided in a scheduling task, it is not sufficient to assess the maximum utilisation. Rather, the number and duration of exceedances of a still acceptable level should also be recorded in order to reward reductions below the actual maximum peak value.[44]There are therefore some publications addressed to beginners that help avoid common mistakes and lead an application project to success.[44][45][46]This includes clarifying the fundamental question of when an EA should be used to solve a problem and when it is better not to.
There are other proven and widely used nature-inspired global search techniques, such as
In addition, many new nature-inspired or metaphor-guided algorithms have been proposed since the beginning of this century. For criticism of most publications on these, see the remarks at the end of the introduction to the article onmetaheuristics.
In 2020,Googlestated that their AutoML-Zero can successfully rediscover classic algorithms such as the concept of neural networks.[47]
The computer simulationsTierraandAvidaattempt to modelmacroevolutionarydynamics.[48][49]
|
https://en.wikipedia.org/wiki/Evolutionary_algorithm
|
Ingeometry, afamily of curvesis asetofcurves, each of which is given by afunctionorparametrizationin which one or more of theparametersis variable. In general, the parameter(s) influence the shape of the curve in a way that is more complicated than a simplelinear transformation. Sets of curves given by animplicit relationmay also represent families of curves.
Families of curves appear frequently in solutions ofdifferential equations; when an additiveconstant of integrationis introduced, it will usually be manipulated algebraically until it no longer represents a simple linear transformation.
Families of curves may also arise in other areas. For example, all non-degenerateconic sectionscan be represented using a singlepolar equationwith one parameter, theeccentricityof the curve:
r(θ)=ℓ1+ecos⁡θ{\displaystyle r(\theta )={\frac {\ell }{1+e\cos \theta }}}whereℓ{\displaystyle \ell }is the semi-latus rectum; as the value ofechanges, the appearance of the curve varies in a relatively complicated way.
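In the standard polar form with one focus at the origin, the conic family is r(θ) = ℓ/(1 + e·cos θ) with semi-latus rectum ℓ; a small sketch of how the single parameter e controls the curve:

```python
import math

def conic_r(theta, e, l=1.0):
    """Polar form of a non-degenerate conic with one focus at the origin:
    r(theta) = l / (1 + e*cos(theta)); l = semi-latus rectum, e = eccentricity."""
    return l / (1 + e * math.cos(theta))

# e = 0: circle -> r is the same for every angle.
rs = [conic_r(t, e=0.0) for t in (0.0, 1.0, 2.0, 3.0)]
assert all(abs(r - 1.0) < 1e-12 for r in rs)

# e = 0.5 (ellipse): r stays finite everywhere;
# e = 1 (parabola): r diverges as theta -> pi.
print(conic_r(math.pi, 0.5))  # -> 2.0, the ellipse point farthest from the focus
```

One formula thus sweeps through circles (e = 0), ellipses (0 < e < 1), parabolas (e = 1) and hyperbolas (e > 1).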
Families of curves may arise in various topics in geometry, including theenvelopeof a set of curves and thecausticof a given curve.
Inmachine learning,neural networksare families of curves with parameters chosen by anoptimization algorithme.g. to minimize the value of a loss function on a given training dataset.
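A toy illustration of this view, with the one-parameter family f_w(x) = tanh(w·x) and gradient descent on a squared loss selecting the member that best fits a small dataset (the data, learning rate and iteration count here are arbitrary):

```python
import math

# Four (x, y) training points roughly on the curve y = tanh(0.7 * x).
data = [(-2.0, -0.9), (-1.0, -0.6), (1.0, 0.6), (2.0, 0.9)]

def loss(w):
    """Squared loss of the family member f_w over the dataset."""
    return sum((math.tanh(w * x) - y) ** 2 for x, y in data)

w, lr = 0.0, 0.1
for _ in range(200):
    # Numeric gradient; a real framework would use backpropagation.
    g = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6
    w -= lr * g

print(round(loss(w), 3))
```

A neural network does the same thing with millions of parameters instead of one, but the picture is identical: optimisation picks one curve out of the family.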
Inalgebraic geometry, an algebraic generalization is given by the notion of alinear system of divisors.
|
https://en.wikipedia.org/wiki/Family_of_curves
|
Incomputer scienceandoperations research, agenetic algorithm(GA) is ametaheuristicinspired by the process ofnatural selectionthat belongs to the larger class ofevolutionary algorithms(EA).[1]Genetic algorithms are commonly used to generate high-quality solutions tooptimizationandsearch problemsvia biologically inspired operators such asselection,crossover, andmutation.[2]Some examples of GA applications include optimizingdecision treesfor better performance, solvingsudoku puzzles,[3]hyperparameter optimization, andcausal inference.[4]
In a genetic algorithm, apopulationofcandidate solutions(called individuals, creatures, organisms, orphenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (itschromosomesorgenotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[5]
The evolution usually starts from a population of randomly generated individuals, and is aniterative process, with the population in each iteration called ageneration. In each generation, thefitnessof every individual in the population is evaluated; the fitness is usually the value of theobjective functionin the optimization problem being solved. The more fit individuals arestochasticallyselected from the current population, and each individual's genome is modified (recombinedand possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of thealgorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires a genetic representation of the solution domain and a fitness function to evaluate it.
A standard representation of each candidate solution is as anarray of bits(also calledbit setorbit string).[5]Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simplecrossoveroperations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored ingenetic programmingand graph-form representations are explored inevolutionary programming; a mix of both linear chromosomes and trees is explored ingene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (thesearch space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.[6]
During each successive generation, a portion of the existing population isselectedto reproduce for a new generation. Individual solutions are selected through afitness-basedprocess, wherefittersolutions (as measured by afitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
The fitness function is defined over the genetic representation and measures thequalityof the represented solution. The fitness function is always problem-dependent. For instance, in theknapsack problemone wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. Thefitnessof the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
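The fitness rule described above can be written directly; the item values, weights and capacity are arbitrary toy data:

```python
def knapsack_fitness(bits, values, weights, capacity):
    """Fitness of a bit-string chromosome for the knapsack problem:
    total value of the selected objects if they fit, 0 for invalid solutions."""
    total_value = sum(v for b, v in zip(bits, values) if b)
    total_weight = sum(w for b, w in zip(bits, weights) if b)
    return total_value if total_weight <= capacity else 0

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(knapsack_fitness([0, 1, 1], values, weights, capacity))  # -> 220
print(knapsack_fitness([1, 1, 1], values, weights, capacity))  # -> 0 (overweight)
```

Returning 0 for invalid chromosomes is the simplest penalty scheme; gentler penalties proportional to the constraint violation are also common.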
In some problems, it is hard or even impossible to define the fitness expression; in these cases, asimulationmay be used to determine the fitness function value of aphenotype(e.g.computational fluid dynamicsis used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or eveninteractive genetic algorithmsare used.
The next step is to generate a second generation population of solutions from those selected, through a combination ofgenetic operators:crossover(also called recombination), andmutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
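A minimal sketch of the two operators on bit-string chromosomes, using one-point crossover and independent bit-flip mutation (the per-bit probability 0.05 is an arbitrary choice):

```python
import random

def one_point_crossover(a, b):
    """Cut both parents at a random point and swap the tails."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def bit_flip_mutation(bits, p=0.05):
    """Flip each bit independently with probability p."""
    return [1 - b if random.random() < p else b for b in bits]

random.seed(1)
mum, dad = [0] * 8, [1] * 8
child1, child2 = one_point_crossover(mum, dad)
child1 = bit_flip_mutation(child1)
print(child1, child2)
```

With all-0 and all-1 parents, each child position before mutation comes from exactly one parent, which makes the head/tail exchange easy to see.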
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research[7][8]suggests that more than two "parents" generate higher quality chromosomes.
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references inFogel(2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.[citation needed]
It is worth tuning parameters such as themutationprobability,crossoverprobability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead togenetic drift(which is non-ergodicin nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unlesselitist selectionis employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
In addition to the main operators above, otherheuristicsmay be employed to make the calculation faster or more robust. Thespeciationheuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.[9][10]
This generational process is repeated until a termination condition has been reached. Common terminating conditions are: a solution is found that satisfies minimum criteria; a fixed number of generations is reached; the allocated budget (computation time or money) is exhausted; the fitness of the highest-ranking solution has reached a plateau such that successive iterations no longer produce better results; or a combination of the above.
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of a description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low-order, low defining-length schemata with above-average fitness, together with the hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Manyestimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold.[12][13]Although good results have been reported for someclasses of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.[14][15][16]
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
The simplest algorithm represents each chromosome as abit string. Typically, numeric parameters can be represented byintegers, though it is possible to usefloating pointrepresentations. The floating point representation is natural toevolution strategiesandevolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not really represent the building block theory that was proposed byJohn Henry Hollandin the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in alinked list,hashes,objects, or any other imaginabledata structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used,Gray codingis often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-calledHamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
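Reflected Gray code is obtained from plain binary as n XOR (n >> 1); a short sketch verifying the single-bit-change property that avoids Hamming walls:

```python
def to_gray(n):
    """Convert a binary integer to its reflected Gray code."""
    return n ^ (n >> 1)

# Consecutive integers differ in exactly one bit of their Gray code, so a
# single bit-flip mutation can always reach a neighbouring value.  In plain
# binary, 7 -> 8 needs four simultaneous flips (0111 -> 1000): a Hamming wall.
for i in range(7):
    diff = to_gray(i) ^ to_gray(i + 1)
    assert bin(diff).count("1") == 1

print([format(to_gray(i), "03b") for i in range(8)])
# -> ['000', '001', '011', '010', '110', '111', '101', '100']
```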
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes as forming avirtual alphabet(when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.[19][20]
An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome.[21]This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known aselitist selectionand guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.[22]
Parallelimplementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
Other variants, like genetic algorithms foronline optimizationproblems, introduce time-dependence or noise in the fitness function.
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.[23][24]
Instead of using fixed values ofpcandpm, AGAs utilize the population information in each generation and adaptively adjust thepcandpmin order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),[25]the adjustment ofpcandpmdepends on the fitness values of the solutions. There are more examples of AGA variants: Successive zooming method is an early example of improving convergence.[26]InCAGA(clustering-based adaptive genetic algorithm),[27]through the use of clustering analysis to judge the optimization states of the population, the adjustment ofpcandpmdepends on these optimization states. Recent approaches use more abstract variables for decidingpcandpm. Examples are dominance & co-dominance principles[28]and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropicity.[29]
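As a purely illustrative sketch, not the formula of any published AGA variant (those typically adjust pc and pm from the fitness values of individual solutions), a mutation probability could be raised as the population's fitness spread collapses:

```python
def adaptive_pm(fitnesses, pm_min=0.01, pm_max=0.25):
    """Illustrative adaptive mutation probability: when the population's
    fitness spread collapses, diversity is assumed low and pm is raised;
    a diverse population keeps pm low.  All constants are arbitrary."""
    f_max, f_avg = max(fitnesses), sum(fitnesses) / len(fitnesses)
    if f_max == f_avg:                       # fully converged population
        return pm_max
    spread = (f_max - f_avg) / abs(f_max) if f_max else 1.0
    return max(pm_min, min(pm_max, pm_max * (1 - spread)))

print(adaptive_pm([1.0, 1.0, 1.0]))   # -> 0.25 (converged: mutate aggressively)
print(adaptive_pm([0.1, 0.5, 2.0]))   # diverse population: lower pm
```

The shared idea with the published schemes is the feedback loop: per-generation population statistics drive the operator probabilities instead of fixed constants.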
It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such assimple hill climbing) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA[citation needed]while overcoming the lack of robustness of hill climbing.
This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, theinversion operatorhas the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.[30]
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA,[31]GEMGA[32]and LLGA.[33]
Problems which appear to be particularly appropriate for solution by genetic algorithms includetimetabling and scheduling problems, and many scheduling software packages are based on GAs[citation needed]. GAs have also been applied toengineering.[34]Genetic algorithms are often applied as an approach to solveglobal optimizationproblems.
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complexfitness landscapeas mixing, i.e.,mutationin combination withcrossover, is designed to move the population away fromlocal optimathat a traditionalhill climbingalgorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provideergodicityof the overall genetic algorithm process (seen as aMarkov chain).
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,[35]antennae designed to pick up radio signals in space,[36]walking methods for computer figures,[37]optimal design of aerodynamic bodies in complex flowfields[38]
In hisAlgorithm Design Manual,Skienaadvises against genetic algorithms for any task:
[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.
[...]
I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick tosimulated annealingfor your heuristic search voodoo needs.
In 1950,Alan Turingproposed a "learning machine" which would parallel the principles of evolution.[40]Computer simulation of evolution started as early as in 1954 with the work ofNils Aall Barricelli, who was using the computer at theInstitute for Advanced StudyinPrinceton, New Jersey.[41][42]His 1954 publication was not widely noticed. Starting in 1957,[43]the Australian quantitative geneticistAlex Fraserpublished a series of papers on simulation ofartificial selectionof organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[44]and Crosby (1973).[45]Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition,Hans-Joachim Bremermannpublished a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.[46]Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted byFogel(1998).[47]
Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game,[48]artificial evolutiononly became a widely recognized optimization method as a result of the work ofIngo RechenbergandHans-Paul Schwefelin the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems throughevolution strategies.[49][50][51][52]Another approach was the evolutionary programming technique ofLawrence J. Fogel, which was proposed for generating artificial intelligence.Evolutionary programmingoriginally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work ofJohn Hollandin the early 1970s, and particularly his bookAdaptation in Natural and Artificial Systems(1975). His work originated with studies ofcellular automata, conducted byHollandand his students at theUniversity of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known asHolland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held inPittsburgh, Pennsylvania.
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.[53]In 1989, Axcelis, Inc. releasedEvolver, the world's first commercial GA product for desktop computers.The New York Timestechnology writerJohn Markoffwrote[54]about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995.[55]Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.[56]Since the 1990s,MATLABhas built in threederivative-free optimizationheuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).[57]
Genetic algorithms are a sub-field of evolutionary algorithms.
Evolutionary algorithms are a sub-field ofevolutionary computing.
Swarm intelligence is a sub-field ofevolutionary computing.
Evolutionary computation is a sub-field of themetaheuristicmethods.
Metaheuristic methods broadly fall withinstochasticoptimisation methods.
|
https://en.wikipedia.org/wiki/Genetic_algorithm
|
Hyperdimensional computing(HDC) is an approach to computation, particularlyArtificial General Intelligence. HDC is motivated by the observation that thecerebellar cortexoperates on high-dimensional data representations.[1]In HDC, information is thereby represented as a hyperdimensional (long)vectorcalled a hypervector. A hyperdimensional vector (hypervector) could include thousands of numbers that represent a point in a space of thousands of dimensions.[2]Vector symbolic architectures is an older name for the same approach. This research extends intoArtificial Immune Systemsfor creatingArtificial General Intelligence.
Data is mapped from the input space to sparse HD space under an encoding function φ : X → H. HD representations are stored in data structures that are subject to corruption by noise/hardware failures. Noisy/corrupted HD representations can still serve as input for learning, classification, etc. They can also be decoded to recover the input data. H is typically restricted to integers in a limited range [−v, v].[3]
This is analogous to the learning process conducted by thefruit fly'solfactory system. The input is a roughly 50-dimensional vector corresponding to odor receptor neuron types. The HD representation uses ~2,000-dimensions.[3]
HDC algebra reveals the logic of how and why systems make decisions, unlikeartificial neural networks. Physical world objects can be mapped to hypervectors, to be processed by the algebra.[2]
HDC is suitable for "in-memory computing systems", which compute and hold data on a single chip, avoiding data transfer delays. Analog devices operate at low voltages. They are energy-efficient, but prone to error-generating noise. HDC can tolerate such errors.[2]
Various teams have developed low-power HDC hardware accelerators.[3]
Nanoscalememristivedevices can be exploited to perform computation. An in-memory hyperdimensional computing system can implement operations on two memristive crossbar engines together with peripheral digitalCMOScircuits. Experiments using 760,000 phase-change memory devices performing analog in-memory computing achieved accuracy comparable to software implementations.[4]
HDC is robust to errors such as an individual bit error (a 0 flips to 1 or vice versa) missed by error-correcting mechanisms. Eliminating such error-correcting mechanisms can save up to 25% of compute cost. This is possible because such errors leave the result "close" to the correct vector. Reasoning using vectors is not compromised. HDC is at least 10x more error tolerant than traditionalartificial neural networks, which are already orders of magnitude more tolerant than traditional computing.[2]
A simple example considers images containing black circles and white squares. Hypervectors can represent SHAPE and COLOR variables and hold the corresponding values: CIRCLE, SQUARE, BLACK and WHITE. Bound hypervectors can hold the pairs BLACK and CIRCLE, etc.[2]
High-dimensional space allows many mutuallyorthogonalvectors. However, if vectors are instead allowed to benearly orthogonal, the number of distinct vectors in high-dimensional space is vastly larger.[2]
HDC uses the concept of distributed representations, in which an object/observation is represented by a pattern of values across many dimensions rather than a single constant.[3]
HDC can combine hypervectors into new hypervectors using well-definedvector spaceoperations.
Groups,rings, andfieldsover hypervectors become the underlying computing structures with addition, multiplication, permutation, mapping, and inverse as primitive computing operations.[4]All computational tasks are performed in high-dimensional space using simple operations like element-wise additions anddot products.[3]
Binding creates ordered point tuples and is also a function ⊗ : H × H → H. The input is two points inH, while the output is a dissimilar point. Multiplying the SHAPE vector with CIRCLEbindsthe two, representing the idea “SHAPE is CIRCLE”. This vector is "nearly orthogonal" to SHAPE and CIRCLE. The components are recoverable from the vector (e.g., answer the question "is the shape a circle?").[3]
Addition creates a vector that combines concepts. For example, adding “SHAPE is CIRCLE” to “COLOR is RED” creates a vector that represents a red circle.
Permutation rearranges the vector elements. For example, permuting a three-dimensional vector with values labeledx,yandz, can interchangextoy,ytoz, andztox. Events represented by hypervectors A and B can be added, forming one vector, but that would sacrifice the event sequence. Combining addition with permutation preserves the order; the event sequence can be retrieved by reversing the operations.
Bundling combines a set of elements in H as function ⊕ : H ×H → H. The input is two points in H and the output is a third point that is similar to both.[3]
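A minimal sketch of these three operations with bipolar (±1) hypervectors in NumPy; the dimensionality, the element-wise multiplication for binding, and the cyclic shift for permutation follow common VSA conventions, but the concrete values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality; large enough that random vectors are nearly orthogonal

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: element-wise multiplication; the result is dissimilar to both inputs."""
    return a * b

def bundle(a, b):
    """Bundling: element-wise addition; the result is similar to both inputs."""
    return a + b

def permute(a, n=1):
    """Permutation: cyclic shift, used to encode sequence order."""
    return np.roll(a, n)

def cos(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

SHAPE, COLOR, CIRCLE, RED = hv(), hv(), hv(), hv()

shape_is_circle = bind(SHAPE, CIRCLE)   # "SHAPE is CIRCLE"
color_is_red = bind(COLOR, RED)         # "COLOR is RED"
red_circle = bundle(shape_is_circle, color_is_red)  # record of a red circle

# Binding with a bipolar vector is its own inverse, so multiplying the record
# by SHAPE recovers a vector close to CIRCLE (answering "is the shape a circle?").
recovered = bind(red_circle, SHAPE)
```

The recovered vector has high similarity to CIRCLE while its similarity to unrelated hypervectors stays near zero, which is what makes the components of a bundled record individually queryable.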
Vector symbolic architectures (VSA) provided a systematic approach to high-dimensional symbol representations to support operations such as establishing relationships. Early examples include holographic reduced representations, binary spatter codes, and matrix binding of additive terms. HD computing advanced these models, particularly emphasizing hardware efficiency.[3]
In 2018, Eric Weiss showed how to fully represent an image as a hypervector. A vector could contain information about all the objects in the image, including properties such as color, position, and size.[2]
In 2023, Abbas Rahimi et al., used HDC with neural networks to solveRaven's progressive matrices.[2]
In 2023, Mike Heddes et al., under the supervision of Professors Givargis, Nicolau and Veidenbaum, created ahyper-dimensional computing library[5]that is built on top ofPyTorch.
HDC algorithms can replicate tasks long completed bydeep neural networks, such as classifying images.[2]
Classifying an annotated set of handwritten digits uses an algorithm to analyze the features of each image, yielding a hypervector per image. The algorithm then adds the hypervectors for all labeled images of e.g., zero, to create a prototypical hypervector for the concept of zero and repeats this for the other digits.[2]
Classifying an unlabeled image involves creating a hypervector for it and comparing it to the reference hypervectors. This comparison identifies the digit that the new image most resembles.[2]
Given a labeled example setS={(xi,yi)}i=1N{\displaystyle S=\{(x_{i},y_{i})\}_{i=1}^{N}}, wherexi∈X{\displaystyle x_{i}\in X}andyi∈{ck}k=1K{\displaystyle y_{i}\in \{c_{k}\}_{k=1}^{K}}is the class label of the examplexi.[3]
Given a queryxq∈X{\displaystyle x_{q}\in X}, the most similar prototype can be found withk∗=argmaxk∈1,…,K⁡ρ(ϕ(xq),ϕ(ck)){\displaystyle k^{*}=\operatorname {argmax} _{k\in 1,\dots ,K}\rho (\phi (x_{q}),\phi (c_{k}))}. The similarity metric ρ is typically the dot-product.[3]
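The prototype scheme can be sketched as follows; the encoder φ here is a hypothetical random-projection-plus-sign map chosen only for illustration (real HDC systems use task-specific encodings), and all dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, K = 10_000, 50, 3   # HD dimension, input dimension, number of classes

# Hypothetical encoder phi: X -> H, a random projection followed by sign.
proj = rng.standard_normal((D, d))
def phi(x):
    return np.sign(proj @ x)

def train(xs, ys):
    """Bundle (sum) the encodings of each class's examples into a prototype."""
    protos = np.zeros((K, D))
    for x, y in zip(xs, ys):
        protos[y] += phi(x)
    return protos

def classify(protos, xq):
    """k* = argmax_k rho(phi(xq), phi(c_k)), with rho the dot product."""
    return int(np.argmax(protos @ phi(xq)))
```

Training amounts to accumulating hypervectors per class; classification is a single matrix-vector product against the prototypes, which is what makes the approach attractive for low-power hardware.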
Hypervectors can also be used for reasoning. Raven's progressive matrices presents images of objects in a grid. One position in the grid is blank. The test is to choose from candidate images the one that best fits.[2]
A dictionary of hypervectors represents individual objects. Each hypervector represents an object concept with its attributes. For each test image a neural network generates a binary hypervector (values are +1 or −1) that is as close as possible to some set of dictionary hypervectors. The generated hypervector thus describes all the objects and their attributes in the image.[2]
Another algorithm creates probability distributions for the number of objects in each image and their characteristics. These probability distributions describe the likely characteristics of both the context and candidate images. They too are transformed into hypervectors, then algebra predicts the most likely candidate image to fill the slot.[2]
This approach achieved 88% accuracy on one problem set, beating neural network–only solutions that were 61% accurate. For 3-by-3 grids, the system was 250x faster than a method that usedsymbolic logicto reason, because of the size of the associated rulebook.[2]
Other applications include bio-signal processing, natural language processing, and robotics.[3]
|
https://en.wikipedia.org/wiki/Hyperdimensional_computing
|
Artificial neural networksare a class of models used inmachine learning, and inspired bybiological neural networks. They are the core component of moderndeep learningalgorithms. Computation in artificial neural networks is usually organized into sequential layers ofartificial neurons. The number of neurons in a layer is called the layer width. Theoretical analysis of artificial neural networks sometimes considers the limiting case that layer width becomes large or infinite. This limit enables simple analytic statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. This wide layer limit is also of practical interest, since finite width neural networks often perform strictly better as layer width is increased.[1][2][3][4][5][6]
|
https://en.wikipedia.org/wiki/Large_width_limits_of_neural_networks
|
The followingoutlineis provided as an overview of, and topical guide to, machine learning:
Machine learning(ML) is a subfield ofartificial intelligencewithincomputer sciencethat evolved from the study ofpattern recognitionandcomputational learning theory.[1]In 1959,Arthur Samueldefined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed".[2]ML involves the study and construction ofalgorithmsthat canlearnfrom and make predictions ondata.[3]These algorithms operate by building amodelfrom atraining setof example observations to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
Dimensionality reduction
Ensemble learning
Meta-learning
Reinforcement learning
Supervised learning
Bayesian statistics
Decision tree algorithm
Linear classifier
Unsupervised learning
Artificial neural network
Association rule learning
Hierarchical clustering
Cluster analysis
Anomaly detection
Semi-supervised learning
Deep learning
History of machine learning
Machine learning projects:
|
https://en.wikipedia.org/wiki/List_of_machine_learning_concepts
|
Amemristor(/ˈmɛmrɪstər/; aportmanteauofmemory resistor) is a non-lineartwo-terminalelectrical componentrelatingelectric chargeand magneticflux linkage. It was described and named in 1971 byLeon Chua, completing a theoretical quartet of fundamental electrical components which also comprises theresistor,capacitorandinductor.[1]
Chua and Kang later generalized the concept tomemristive systems.[2]Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notablyReRAM.
The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated.[3][4]
Chua in his 1971 paper identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors such as thememistorof Bernard Widrow, but Chua introduced a mathematical generality.
The memristor was originally defined in terms of a non-linear functional relationship between magnetic flux linkageΦm(t)and the amount of electric charge that has flowed,q(t):[1]f(Φm(t),q(t))=0{\displaystyle f(\mathrm {\Phi } _{\mathrm {m} }(t),q(t))=0}
Themagneticflux linkage,Φm, is generalized from the circuit characteristic of an inductor. Itdoes notrepresent a magnetic field here. Its physical meaning is discussed below. The symbolΦmmay be regarded as the integral of voltage over time.[5]
In the relationship betweenΦmandq, the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function describing the charge-dependent rate of change of flux with charge:
M(q)=dΦmdq.{\displaystyle M(q)={\frac {\mathrm {d} \Phi _{\rm {m}}}{\mathrm {d} q}}\,.}
Substituting the flux as the time integral of the voltage, and charge as the time integral of current, the more convenient forms are:
M(q(t))=dΦm/dtdq/dt=V(t)I(t).{\displaystyle M(q(t))={\cfrac {\mathrm {d} \Phi _{\rm {m}}/\mathrm {d} t}{\mathrm {d} q/\mathrm {d} t}}={\frac {V(t)}{I(t)}}\,.}
To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the termM(q), which characterizes the device, and write it as a differential equation.
The above table covers all meaningful ratios of differentials ofI,q,Φm, andV. No device can relatedItodq, ordΦmtodV, becauseIis the time derivative ofqandΦmis the integral ofVwith respect to time.
It can be inferred from this that memristance is charge-dependentresistance. IfM(q(t))is a constant, then we obtainOhm's law,R(t) =V(t)/I(t). IfM(q(t))is nontrivial, however, the equation is not equivalent becauseq(t)andM(q(t))can vary with time. Solving for voltage as a function of time produces
V(t)=M(q(t))I(t).{\displaystyle V(t)=\ M(q(t))I(t)\,.}
This equation reveals that memristance defines a linear relationship between current and voltage, as long asMdoes not vary with charge. Nonzero current implies time varying charge.Alternating current, however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement—as long as the maximum change inqdoes not causemuchchange inM.
Furthermore, the memristor is static if no current is applied. IfI(t) = 0, we findV(t) = 0andM(t)is constant. This is the essence of the memory effect.
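The memory effect and the pinched hysteresis can be illustrated numerically with a toy charge-dependent memristance; the linear form of M(q) and all parameter values below are illustrative assumptions, not a physical device model:

```python
import numpy as np

# Toy memristance: resistance grows linearly with accumulated charge.
def M(q, R0=100.0, k=50.0):
    return R0 + k * q

# Drive the device with one period of sinusoidal current.
t = np.linspace(0, 2 * np.pi, 2000)
dt = t[1] - t[0]
i = np.sin(t)
q = np.cumsum(i) * dt    # charge is the time integral of the current
v = M(q) * i             # V(t) = M(q(t)) I(t)

# v is zero exactly where i is zero, so the V-I curve is a "pinched"
# hysteresis loop; when the current stops, q (and hence M(q)) keeps its
# last value, which is the memory effect described above.
```

The same current value on the rising and falling halves of the cycle produces different voltages, because the accumulated charge, and therefore M(q), differs between the two crossings.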
Analogously, we can define aW(ϕ(t))as memductance:[1]
i(t)=W(ϕ(t))v(t).{\displaystyle i(t)=W(\phi (t))v(t)\,.}
Thepower consumptioncharacteristic recalls that of a resistor,I2R:
P(t)=I(t)V(t)=I2(t)M(q(t)).{\displaystyle P(t)=\ I(t)V(t)=\ I^{2}(t)M(q(t))\,.}
As long asM(q(t))varies little, such as under alternating current, the memristor will appear as a constant resistor. IfM(q(t))increases rapidly, however, current and power consumption will quickly stop.
M(q)is physically restricted to be positive for all values ofq(assuming the device is passive and does not becomesuperconductiveat someq). A negative value would mean that it would perpetually supply energy when operated with alternating current.
In order to understand the nature of memristor function, some knowledge of fundamental circuit theoretic concepts is useful, starting with the concept ofdevice modeling.[6]
Engineers and scientists seldom analyze a physical system in its original form. Instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to a practical analysis.
In the 20th century, work was done on devices where researchers did not recognize the memristive characteristics. This has raised the suggestion that such devices should be recognised as memristors.[6]Pershin and Di Ventra[3]have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept.
The rest of this article primarily addresses memristors as related toReRAMdevices, since the majority of work since 2008 has been concentrated in this area.
Dr. Paul Penfield, in a 1974 MIT technical report[7]mentions the memristor in connection withJosephson junctions. This was an early use of the word "memristor" in the context of a circuit device.
One of the terms in the current through a Josephson junction is of the form:iM(v)=ϵcos(ϕ0)v=W(ϕ0)v{\displaystyle {\begin{aligned}i_{M}(v)&=\epsilon \cos(\phi _{0})v\\&=W(\phi _{0})v\end{aligned}}}whereϵis a constant based on the physical superconducting materials,vis the voltage across the junction andiMis the current through the junction.
Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out.[8][9][10][11]A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014.[12]
Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices which can be modelled using memristors. For a mathematical description of a memristive device (systems), see§ Theory.
A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electronsne.[2]
vM=R(ne)iMdnedt=βn+αR(ne)iM2{\displaystyle {\begin{aligned}v_{\mathrm {M} }&=R(n_{\mathrm {e} })i_{\mathrm {M} }\\{\frac {\mathrm {d} n_{\mathrm {e} }}{\mathrm {d} t}}&=\beta n+\alpha R(n_{\mathrm {e} })i_{\mathrm {M} }^{2}\end{aligned}}}
vMis the voltage across the discharge tube,iMis the current flowing through it, andneis the number of conduction electrons. A simple memristance function isR(ne) =F/ne. The parametersα,β, andFdepend on the dimensions of the tube and the gas fillings. Anexperimentalidentification of memristive behaviour is the "pinched hysteresis loop" in thev-iplane.[a][13][14]
Thermistors can be modeled as memristive devices:[14]
v=R0(T0)exp[β(1T−1T0)]i≡R(T)idTdt=1C[−δ⋅(T−T0)+R(T)i2]{\displaystyle {\begin{aligned}v&=R_{0}(T_{0})\exp \left[\beta \left({\frac {1}{T}}-{\frac {1}{T_{0}}}\right)\right]i\\&\equiv R(T)i\\{\frac {\mathrm {d} T}{\mathrm {d} t}}&={\frac {1}{C}}\left[-\delta \cdot (T-T_{0})+R(T)i^{2}\right]\end{aligned}}}
βis a material constant,Tis the absolute body temperature of the thermistor,T0is the ambient temperature (both temperatures in Kelvin),R0(T0)denotes the cold temperature resistance atT=T0,Cis the heat capacitance andδis the dissipation constant for the thermistor.
A fundamental phenomenon that has hardly been studied is memristive behaviour inp-n junctions.[15]The memristor plays a crucial role in mimicking the charge storage effect in the diode base, and is also responsible for the conductivity modulation phenomenon (that is so important during forward transients).
In 2008, a team atHP Labsfound experimental evidence for the Chua's memristor based on an analysis of athin filmoftitanium dioxide, thus connecting the operation ofReRAMdevices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor'selectrical resistanceis not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history—the so-callednon-volatility property.[16]When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.[17][18]
The HP Labs result was published in the scientific journalNature.[17][19]Following this claim, Leon Chua has argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance switching effects.[16]Chua also argued that the memristor is the oldest knowncircuit element, with its effects predating theresistor,capacitor, andinductor.[20]However, there are doubts as to whether a memristor can actually exist in physical reality.[21][22][23][24]Additionally, some experimental evidence contradicts Chua's generalization since a non-passivenanobatteryeffect is observable in resistance switching memory.[25]A simple test has been proposed by Pershin and Di Ventra[3]to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. So far, there seems to be no experimental resistance switching device (ReRAM) which can pass the test.[3][4]
These devices are intended for applications innanoelectronicmemory devices, computer logic, andneuromorphic/neuromemristive computer architectures.[26][27]In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory may become commercially available as early as 2018.[28]In March 2012, a team of researchers fromHRL Laboratoriesand theUniversity of Michiganannounced the first functioning memristor array built on aCMOSchip.[29]
According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011,Chuaargued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching.[16]Williams argued thatMRAM,phase-change memoryandReRAMare memristor technologies.[32]Some researchers argued that biological structures such as blood[33]and skin[34][35]fit the definition. Others argued that the memory device under development byHP Labsand other forms ofReRAMare not memristors, but rather part of a broader class of variable-resistance systems,[36]and that a broader definition of memristor is a scientifically unjustifiableland grabthat favored HP's memristor patents.[37]
In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction.[38]In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors.[21]They indicated inadequacies in the electrochemical modeling presented in theNaturearticle "The missing memristor found"[17]because the impact ofconcentration polarizationeffects on the behavior of metal−TiO2−x−metal structures under voltage or current stress was not considered.[25]
In a kind ofthought experiment, Meuffels and Soni[21]furthermore revealed a severe inconsistency: If a current-controlled memristor with the so-callednon-volatility property[16]exists in physical reality, its behavior would violateLandauer's principle, which places a limit on the minimum amount of energy required to change "information" states of a system. This critique was finally adopted byDi Ventraand Pershin[22]in 2013.
Within this context, Meuffels and Soni[21]pointed to a fundamental thermodynamic principle: Non-volatile information storage requires the existence offree-energybarriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence ofthermal fluctuations. When unprotected againstthermal fluctuations, the internal memory states exhibit some diffusive dynamics, which causes state degradation.[22]The free-energy barriers must therefore be high enough to ensure a lowbit-error probabilityof bit operation.[39]Consequently, there is always a lower limit of energy requirement – depending on the requiredbit-error probability– for intentionally changing a bit value in any memory device.[39][40]
In the general concept of memristive system the defining equations are (see§ Theory):y(t)=g(x,u,t)u(t),x˙=f(x,u,t),{\displaystyle {\begin{aligned}y(t)&=g(\mathbf {x} ,u,t)u(t),\\{\dot {\mathbf {x} }}&=f(\mathbf {x} ,u,t),\end{aligned}}}whereu(t)is an input signal, andy(t)is an output signal. The vectorx{\displaystyle \mathbf {x} }represents a set ofnstate variables describing the different internal memory states of the device.x˙{\displaystyle {\dot {\mathbf {x} }}}is the time-dependent rate of change of the state vectorx{\displaystyle \mathbf {x} }with time.
When one wants to go beyond merecurve fittingand aims at a real physical modeling of non-volatile memory elements, e.g.,resistive random-access memorydevices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of the proposed model and its resulting state equations, the input signalu(t)can be superposed with a stochastic termξ(t), which takes into account the existence of inevitablethermal fluctuations. The dynamic state equation in its general form then finally reads:x˙=f(x,u(t)+ξ(t),t),{\displaystyle {\dot {\mathbf {x} }}=f(\mathbf {x} ,u(t)+\xi (t),t),}whereξ(t)is, e.g., whiteGaussiancurrent or voltage noise. On the basis of an analytical or numerical analysis of the time-dependent response of the system towards noise, a decision on the physical validity of the modeling approach can be made, e.g., whether the system would be able to retain its memory states in power-off mode.
Such an analysis was performed by Di Ventra and Pershin[22]with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in course of time just under the influence of current noise.[22][41]Di Ventra and Pershin[22]thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidableJohnson–Nyquist noiseand permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor can thus not exist as a solid-state device in physical reality.
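A toy version of this noise analysis for the pure memristor (state x = q, input u = 0, so dq/dt = ξ(t)) shows the state diffusing away from its initial value; the noise strength and time step are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure memristor state equation with the noise term superposed:
#   dq/dt = u(t) + xi(t),  with u(t) = 0 (nominal power-off mode).
dt, n = 1e-3, 100_000
sigma = 1.0                                         # noise strength (illustrative)
xi = sigma * np.sqrt(dt) * rng.standard_normal(n)   # white-noise increments
q = np.cumsum(xi)                                   # Euler integration from q = 0

# The state performs a random walk whose spread grows like sqrt(t), so a
# memory state encoded solely in q is not retained against the noise.
```

With no free-energy barrier in the state equation, nothing pins q to its initial value, which is the numerical counterpart of the "stochastic catastrophe" argument above.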
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. "resistance-switching" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are ofprobabilisticnature. The probability for a transition from state{i}to state{j}depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition{i}→{j}by means of, for example, an externally applied bias.
A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition{i}→{j}is reduced to zero. In case one applies biases below the threshold value, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously.
When a two-terminal non-volatile memory device is found to be in a distinct resistance state{j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
An extra thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated asJoule heatto the surrounding, so no extra energy remains in the system after it has been brought from one resistance statexito another onexj. Thus, theinternal energyof the memristor device in statexi,U(V,T,xi), would be the same as in statexj,U(V,T,xj), even though these different states would give rise to different device's resistances, which itself must be caused by physical alterations of the device's material.
Other researchers noted that memristor models based on the assumption of linearionic driftdo not account for asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency.[42]
A 2014 article from researchers ofReRAMconcluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article) have adequate predictability, but are computationally prohibitive. As of 2014, the search continues for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises.[43]
Martin Reynolds, an electrical engineering analyst with research outfitGartner, commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor.[44]
Chuasuggested experimental tests to determine if a device may properly be categorized as a memristor:[2]
According to Chua[45][46]all resistive switching memories includingReRAM,MRAMandphase-change memorymeet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim.
Experimental evidence shows that redox-based resistance memory (ReRAM) includes ananobatteryeffect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling.[25]
In 2008, researchers fromHP Labsintroduced a model for a memristance function based on thin films oftitanium dioxide.[17]ForRon≪Roffthe memristance function was determined to beM(q(t))=Roff⋅(1−μvRonD2q(t)){\displaystyle M(q(t))=R_{\mathrm {off} }\cdot \left(1-{\frac {\mu _{v}R_{\mathrm {on} }}{D^{2}}}q(t)\right)}whereRoffrepresents the high resistance state,Ronrepresents the low resistance state,μvrepresents the mobility of dopants in the thin film, andDrepresents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects.
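The HP memristance function can be evaluated directly; the parameter values below are illustrative assumptions, and no window function is applied:

```python
import numpy as np

# Illustrative parameter choices; real device values depend on the film.
R_on, R_off = 100.0, 16_000.0   # low- and high-resistance states
mu_v = 1e-14                    # dopant mobility (m^2 s^-1 V^-1), assumed
D_film = 1e-8                   # film thickness, ~10 nm

def memristance(q):
    """HP linear-drift model: M(q) = R_off * (1 - mu_v*R_on/D^2 * q)."""
    return R_off * (1.0 - mu_v * R_on / D_film**2 * q)
```

As charge flows through the device, M(q) falls linearly from R_off toward R_on; the window functions mentioned above would modify this behaviour near the film boundaries, where linear drift breaks down.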
For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch fromRontoRoffin timeTontoToff, the charge must change byΔQ=Qon−Qoff.
Eswitch=V2∫ToffTondtM(q(t))=V2∫QoffQondqI(q)M(q)=V2∫QoffQondqV(q)=VΔQ{\displaystyle {\begin{aligned}E_{\mathrm {switch} }&=V^{2}\int _{T_{\mathrm {off} }}^{T_{\mathrm {on} }}{\frac {\mathrm {d} t}{M(q(t))}}\\&=V^{2}\int _{Q_{\mathrm {off} }}^{Q_{\mathrm {on} }}{\frac {\mathrm {d} q}{I(q)M(q)}}\\&=V^{2}\int _{Q_{\mathrm {off} }}^{Q_{\mathrm {on} }}{\frac {\mathrm {d} q}{V(q)}}\\&=V\Delta Q\end{aligned}}}
Substituting V = I(q)M(q), and then ∫dq/V = ΔQ/V for constant V, produces the final expression. This power characteristic differs fundamentally from that of a metal oxide semiconductor transistor, which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage.
The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis, also called the "hard-switching regime".[17] Another kind of switch would have a cyclic M(q) so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical.
In the more general concept of an n-th order memristive system, the defining equations are
y(t)=g(x,u,t)u(t),x˙=f(x,u,t){\displaystyle {\begin{aligned}y(t)&=g({\textbf {x}},u,t)u(t),\\{\dot {\textbf {x}}}&=f({\textbf {x}},u,t)\end{aligned}}}
where u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions. For a current-controlled memristive system the signal u(t) represents the current signal i(t) and the signal y(t) represents the voltage signal v(t). For a voltage-controlled memristive system the signal u(t) represents the voltage signal v(t) and the signal y(t) represents the current signal i(t).
The pure memristor is a particular case of these equations, namely when x depends only on charge (x = q), the charge being related to the current via the time derivative dq/dt = i(t). Thus for pure memristors f (i.e. the rate of change of the state) must be equal to, or proportional to, the current i(t).
One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect.[47] For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors.[48]
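Both properties — the pinch at the origin and the high-frequency collapse of the loop — can be reproduced with a minimal simulation. The device model below is a two-state resistance interpolation driven by one period of a sinusoidal current; all parameter values are assumptions for illustration, not a fit to any real device.

```python
import numpy as np

# A charge-controlled memristor R(x) = R_off*(1 - x) + R_on*x driven by a
# sinusoidal current. The v-i curve is a pinched hysteresis loop: v = 0
# whenever i = 0, and the two branches separate widely at low frequency but
# nearly coincide at high frequency, where the loop degenerates toward a
# straight resistive line. All parameter values are illustrative assumptions.
R_ON, R_OFF, Q_SCALE = 100.0, 16e3, 1e-4
N = 24_000  # samples per drive period

def simulate(freq):
    t = np.linspace(0.0, 1.0 / freq, N)
    i = 1e-3 * np.sin(2 * np.pi * freq * t)   # drive current, A
    q = np.cumsum(i) * (t[1] - t[0])          # q(t) = integral of i dt
    x = np.clip(q / Q_SCALE, 0.0, 1.0)        # normalized internal state
    v = (R_OFF * (1.0 - x) + R_ON * x) * i    # v(t) = R(x(t)) * i(t)
    return i, v

def branch_gap(v):
    # Voltage difference between the rising and falling branch at the same
    # current value i = 0.5 mA (drive phase pi/6 vs 5*pi/6).
    return abs(v[N // 12] - v[5 * N // 12])

i_lo, v_lo = simulate(freq=1.0)   # slow drive: wide hysteresis
i_hi, v_hi = simulate(freq=1e4)   # fast drive: nearly a straight line
```

At the slow drive frequency the two branches differ by several volts at the same current, while at the fast drive frequency the gap shrinks by orders of magnitude, matching the predicted degeneration into a linear resistor.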
The concept of memristive networks was first introduced by Leon Chua in his 1976 paper "Memristive Devices and Systems".[2] Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
A memristive network is a type of artificial neural network that is based on memristive devices, which are electronic components that exhibit the property of memristance.
In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data.
One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations.
For the simplest model with only memristive devices with voltage generators in series, there is an exact closed-form equation (the Caravelli–Traversa–Di Ventra equation, CTDV)[49] which describes the evolution of the internal memory of the network for each device.
For a simple (but not realistic) memristor model of a switch between two resistance values, given by the Williams–Strukov model R(x)=Roff(1−x)+Ronx{\displaystyle R(x)=R_{off}(1-x)+R_{on}x}, with dx/dt=I/β−αx{\displaystyle dx/dt=I/\beta -\alpha x},
there is a set of nonlinearly coupled differential equations that takes the form
dx→dt=−αx→+1β(I−χΩX)−1ΩS→{\displaystyle {\frac {d{\vec {x}}}{dt}}=-\alpha {\vec {x}}+{\frac {1}{\beta }}\left(I-\chi \Omega X\right)^{-1}\Omega {\vec {S}}}
where X{\displaystyle X} is the diagonal matrix with elements xi{\displaystyle x_{i}} on the diagonal, and α,β,χ{\displaystyle \alpha ,\beta ,\chi } are based on the memristors' physical parameters. The vector S→{\displaystyle {\vec {S}}} is the vector of voltage generators in series with the memristors. The circuit topology enters only through the projector operator Ω2=Ω{\displaystyle \Omega ^{2}=\Omega }, defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff's laws, and shares many properties with a Hopfield network, such as the existence of Lyapunov functions and classical tunnelling phenomena.[50] In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications.
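As a toy numerical illustration, the network equation can be integrated for the smallest non-trivial circuit: three memristors in a single loop with one voltage generator in series. The right-hand side used below, dx⃗/dt = −αx⃗ + (1/β)(I − χΩX)⁻¹ΩS⃗, is a reconstruction from the symbols defined above and should be checked against the cited paper; the parameter values are assumptions.

```python
import numpy as np

# Toy Euler integration of dx/dt = -alpha*x + (1/beta)*(I - chi*Omega*X)^(-1)*Omega*S
# for three memristors in one loop. For a single loop with cycle vector c,
# Omega = c c^T / (c^T c) is a projector (Omega @ Omega == Omega) -- the only
# way the circuit topology enters. alpha, beta, chi and S are assumed values.
alpha, beta, chi = 0.1, 1.0, 0.9

cycle = np.array([1.0, 1.0, 1.0])                 # cycle vector of one loop
Omega = np.outer(cycle, cycle) / (cycle @ cycle)  # projector onto cycle space
S = np.array([1.0, 0.0, 0.0])                     # one generator in series

x = np.zeros(3)                                   # internal memory states
dt = 1e-2
for _ in range(5_000):
    X = np.diag(x)
    drift = np.linalg.solve(np.eye(3) - chi * Omega @ X, Omega @ S) / beta
    x = np.clip(x + dt * (-alpha * x + drift), 0.0, 1.0)  # states kept in [0, 1]
```

With these parameters the drive term overwhelms the decay term, so all three internal states saturate at the upper bound; the symmetrizing action of Ω distributes the single generator's influence equally over the loop.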
Some researchers have raised questions about the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM,[36][37] and have suggested extended memristive models to remedy perceived deficiencies.[25]
One example[51] attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal u(t) as a series expansion
where m is a positive integer, u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and the functions g and f are continuous functions. This equation produces the same zero-crossing hysteresis curves as memristive systems but with a different frequency response than that predicted by memristive systems.
Another example suggests including an offset value a{\displaystyle a} to account for an observed nanobattery effect, which violates the predicted zero-crossing pinched hysteresis effect.[25]
There exist implementations of memristors with a hysteretic current–voltage curve, or with both a hysteretic current–voltage curve and a hysteretic flux–charge curve [arXiv:2403.20051]. Memristors with a hysteretic current–voltage curve use a resistance dependent on the history of the current and voltage, and bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration [DOI: 10.1002/aisy.202200053].
Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007.[52][53][54] The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current.
Although not cited in HP's initial reports on their TiO2 memristor, the resistance switching characteristics of titanium dioxide were originally described in the 1960s.[55]
The HP device is composed of a thin (50 nm) titanium dioxide film between two 5 nm thick electrodes, one titanium, the other platinum. Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see Fast-ion conductor), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, which is reversible by changing the direction of current.[17] Since the HP device displays fast-ion conduction at the nanoscale, it is considered a nanoionic device.[56]
Memristance is displayed only when both the doped layer and depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis. It ceases to integrate q = ∫I dt, but rather keeps q at an upper bound and M fixed, thus acting as a constant resistor until current is reversed.
Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams.[57] Samsung has a U.S. patent for oxide-vacancy based switches similar to that described by Williams.[58]
In April 2010, HP Labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes,[59] which bodes well for the future of the technology.[60] At these densities it could easily rival the current sub-25 nm flash memory technology.
It seems that memristance has been reported in nanoscale thin films of silicon dioxide as early as the 1960s.[61]
However, hysteretic conductance in silicon was associated with memristive effects only in 2009.[62]
More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group demonstrated that resistive switching in silicon oxide thin films is due to the formation of oxygen vacancy filaments in defect-engineered silicon dioxide; they directly probed the movement of oxygen under electrical bias and imaged the resultant conductive filaments using conductive atomic force microscopy.[63]
In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells.[64] They used a passive layer between electrode and active thin films, which enhanced the extraction of ions from the electrode. It is possible to use a fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.[65]
In 2010, Alibart, Gamrat, Vuillaume et al.[66] introduced a new hybrid organic/nanoparticle device (the NOMFET: Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor[67] and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning).[68]
In 2012, Crupi, Pradhan and Tozer described a proof-of-concept design to create neural synaptic memory circuits using organic ion-based memristors.[69] The synapse circuit demonstrated long-term potentiation for learning as well as inactivity-based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines.
In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting based on polymeric memristors.[70]
In 2014, Bessonov et al. reported a flexible memristive device comprising a MoOx/MoS2 heterostructure sandwiched between silver electrodes on a plastic foil.[71] The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost. The memristive behaviour of the switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and tolerance of mechanical deformation promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.
An atomristor is defined as an electrical device showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al.[72] in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride, which at around 0.33 nm is the thinnest memory material.[73] These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD/MOCVD, enabling low-cost fabrication. Taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch has been demonstrated based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems.[74][75] In 2020, an atomistic understanding of the conductive virtual point mechanism was elucidated in an article in Nature Nanotechnology.[76]
The ferroelectric memristor[77] is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two-order-of-magnitude resistance variation: ROFF ≫ RON (an effect called Tunnel Electro-Resistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither RON nor ROFF, but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.
In 2013, Ageev, Blinov et al.[78] reported observing a memristor effect in structures based on vertically aligned carbon nanotubes while studying bundles of CNTs by scanning tunneling microscope.
Later it was found[79] that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL0. It was shown that the memristive switching mechanism of a strained CNT is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field Edef in the nanotube under the influence of an external electric field E(x,t).
Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems.[80] In particular, the feasibility of using a collagen-based biomemristor as an artificial synaptic device has been investigated,[81] whereas a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps depending on the sign of the voltage.[82] Furthermore, a natural silk fibroin demonstrated memristive properties;[83] spin-memristive systems based on biomolecules are also being studied.[84]
In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor, with the aim of realizing highly sensitive biosensors.[85] Since then, several memristive sensors have been demonstrated.[86]
Chen and Wang, researchers at disk-drive manufacturer Seagate Technology, described three examples of possible magnetic memristors.[87] In one device, resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum.[88] A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011.[89]
The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic (spin-transfer torque).
Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states[90] (resistive switching).
The MTJ consists of a SrTiO3 (STO) tunnel barrier that separates half-metallic oxide LSMO and ferromagnetic metal CoCr electrodes. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of electrode magnetization, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies with the observed migration of Cr into the barrier and its state of oxidation. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions.
Reports on MgO-based memristive switching within MgO-based MTJs appeared starting in 2008[91] and 2009.[92] While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects,[92] another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies[93] and its impact[94] on spintronics. This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity[95] or multiferroicity.[96]
The magnetization state of an MTJ can be controlled by spin-transfer torque, and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of achieving an MRAM. However, the length of time the current flows through the junction determines the amount of current needed; i.e., charge is the key variable.[97]
The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = (x1, x2), where x1 describes the magnetic state of the electrodes and x2 denotes the resistive state of the MgO barrier. In this case the change of x1 is current-controlled (spin torque is due to a high current density) whereas the change of x2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system.[98]
A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra.[99][100] The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang.[2] The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom, which allows for more convenient control than the ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of electron spin polarization is delayed because of diffusion and relaxation processes, causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces,[101] but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor.[1] This result broadens the possible range of applications of semiconductor spintronics and is a step toward future practical applications.
In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor.[102] The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiasts worldwide.[103] The SDC device is operational immediately after fabrication. In the Ge2Se3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge2Se3/Ag/Ge2Se3, directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe is between these two layers, ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about 350 °C (662 °F), the device has significantly higher processing and operating temperatures at 250 °C (482 °F) and at least 150 °C (302 °F), respectively. These processing and operating temperatures are higher than those of most ion-conducting chalcogenide device types, including the S-based glasses (e.g. GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F).
There exist implementations of memristors with both a hysteretic current–voltage curve and a hysteretic flux–charge curve [arXiv:2403.20051]. Such memristors use a memristance dependent on the history of the flux and charge, and can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365].
Time-integrated forming-free (TiF) memristors reveal a hysteretic flux–charge curve and a hysteretic current–voltage curve, each with two distinguishable branches in the positive bias range and two in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux–charge curve of the TiF memristor changes its slope continuously in one branch in the positive and one branch in the negative bias range (write branches), and has a constant slope in one branch in the positive and one branch in the negative bias range (read branches) [arXiv:2403.20051]. According to Leon O. Chua [Reference 1: 10.1.1.189.3614], the slope of the flux–charge curve corresponds to the memristance of a memristor or to its internal state variables. TiF memristors can therefore be considered memristors with a constant memristance in the two read branches and a reconfigurable memristance in the two write branches. The physical memristor model which describes the hysteretic current–voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358].
The static and dynamic internal state variables of non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental (e.g. exponential or logarithmic) input-output functions.
The transport characteristics of the TiF memristor in the small-current, small-voltage range are non-linear. This non-linearity compares well with the non-linear characteristics, in the same range, of the former and present basic building blocks of the arithmetic logic unit of von Neumann computers, i.e. vacuum tubes and transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux–charge memristors, i.e. TiF memristors, is not lost when the operation power is switched off before the signal output is stored to memory. Therefore, hysteretic flux–charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. The transport characteristics of hysteretic current–voltage memristors in the small-current, small-voltage range are linear. This explains why hysteretic current–voltage memristors are well-established memory units and why they cannot merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051].
Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications.
A potential application of memristors is in analog memories for superconducting quantum computers.[12]
Memristors can potentially be fashioned into non-volatile solid-state memory, which could allow greater data density than hard drives with access times similar to DRAM, replacing both components.[31] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter,[104] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm3).[105] In May 2008 HP reported that its device then reached about one-tenth the speed of DRAM.[106] The devices' resistance would be read with alternating current so that the stored value would not be affected.[107] In May 2012, it was reported that the access time had been improved to 90 nanoseconds, nearly one hundred times faster than the contemporaneous flash memory, while consuming just one percent as much energy.[108]
Memristors have applications in programmable logic,[109] signal processing,[110] super-resolution imaging,[111] physical neural networks,[112] control systems,[113] reconfigurable computing,[114] in-memory computing,[115] brain–computer interfaces[116] and RFID.[117] Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation.[118] Several early works have been reported in this direction.[119][120]
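The stateful implication logic mentioned above can be sketched at the Boolean level. This toy model abstracts away the device physics (the voltage-divider switching mechanism is only summarized in comments) and just shows why one IMPLY primitive plus an initialization step is enough to build NAND, and hence any Boolean function.

```python
# Toy model of stateful IMPLY logic -- a sketch, not a device-level model.
# Two memristors p and q store bits as low/high resistance; one voltage
# pulse leaves q holding (p IMPLIES q), i.e. (not p) or q.
def imply(p: bool, q: bool) -> bool:
    # If p is 0 (high resistance), the conditioning voltage drop across q
    # exceeds the switching threshold and sets q to 1; otherwise q is kept.
    return (not p) or q

def nand(p: bool, q: bool) -> bool:
    work = False              # a third memristor, reset to FALSE beforehand
    work = imply(p, work)     # work = not p
    return imply(q, work)     # (not q) or (not p) = not (p and q)
```

Since NAND is functionally complete, a crossbar that supports IMPLY and a reset operation can in principle evaluate any logic function in place, which is the appeal of stateful logic as a CMOS replacement.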
In 2009, a simple electronic circuit[121] consisting of an LC network and a memristor was used to model experiments on adaptive behavior of unicellular organisms.[122] It was shown that, subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse, similar to the behavior of the slime mold Physarum polycephalum, where the viscosity of channels in the cytoplasm responds to periodic environment changes.[122] Applications of such circuits may include, e.g., pattern recognition. The DARPA SyNAPSE project funded HP Labs, in collaboration with the Boston University Neuromorphics Lab, which has been developing neuromorphic architectures that may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model.[123] MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware.[124] Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki.[125] In 2011, they showed[126] how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired by the Hebbian learning rule.
In 2013, Leon Chua published a tutorial underlining the broad span of complex phenomena and applications covered by memristors, and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena.[127]
The memistor and memtransistor are transistor-based devices which include memristor function.
In 2009, Di Ventra, Pershin, and Chua extended[128] the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system; the theory was further extended in 2013 by Di Ventra and Pershin.[22]
In September 2014, Mohamed-Salah Abdelouahab, Rene Lozi, and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and nth-order memristive elements using fractional derivatives.[129]
Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808.[20][130] However, the first device of a related nature to be constructed was the memistor (i.e. memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE. A few years later, in 1968, Argall published an article showing the resistance switching effects of TiO2, which was later claimed by researchers from Hewlett Packard to be evidence of a memristor.[55][citation needed]
Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element.[1] Five years later, he and his student Sung Mo Kang generalized the theory of memristors and memristive systems, including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior.[2]
On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two-terminal resistance switching behavior found in nanoscale systems and memristors.[17]
On 23 January 2009, Di Ventra, Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors, whose properties depend on the state and history of the system.[128]
In July 2014, the MeMOSat/LabOSat group[131] (composed of researchers from Universidad Nacional de General San Martín (Argentina), INTI, CNEA, and CONICET) put memory devices into a Low Earth orbit.[132] Since then, seven missions with different devices[133] have been performing experiments in low orbits, onboard Satellogic's Ñu-Sat satellites.[134][135][clarification needed]
On 7 July 2015, Knowm Inc announced Self Directed Channel (SDC) memristors commercially.[136]These devices remain available in small numbers.
On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload.[137]
In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond its use in battery electrodes, including lithium oxide-based memristors in neuromorphic computing.[138][139]
In May 2023, TECHiFAB GmbH [https://techifab.com/] announced TiF memristors commercially.[arXiv:2403.20051][arXiv:2402.10358]These TiF memristors remain available in small and medium quantities.
In the September 2023 issue ofScience Magazine, Chinese scientists Wenbin Zhanget al.described the development and testing of a memristor-basedintegrated circuit.[140]
|
https://en.wikipedia.org/wiki/Memristor
|
Neural gasis anartificial neural network, inspired by theself-organizing mapand introduced in 1991 byThomas MartinetzandKlaus Schulten.[1]The neural gas is a simple algorithm for finding optimal data representations based onfeature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied wheredata compressionorvector quantizationis an issue, for examplespeech recognition,[2]image processing[3]orpattern recognition. As a robustly converging alternative to thek-means clusteringit is also used forcluster analysis.[4]
Suppose we want to model aprobability distributionP(x){\displaystyle P(x)}of data vectorsx{\displaystyle x}using a finite number offeature vectorswi{\displaystyle w_{i}}, wherei=1,⋯,N{\displaystyle i=1,\cdots ,N}.
In the algorithm,ε{\displaystyle \varepsilon }can be understood as the learning rate, andλ{\displaystyle \lambda }as the neighborhood range.ε{\displaystyle \varepsilon }andλ{\displaystyle \lambda }are reduced with increasingt{\displaystyle t}so that the algorithm converges after many adaptation steps.
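Using the quantities defined above, a single adaptation step can be sketched as follows: rank all feature vectors by their distance to the presented data vector, then move every vector toward it with a step size that decays exponentially with its rank (function and variable names are illustrative):

```python
import numpy as np

def neural_gas_step(ws, x, eps, lam):
    """One adaptation step of the neural gas algorithm (sketch).

    ws  : (N, d) array of feature vectors w_i
    x   : (d,) data vector drawn from P(x)
    eps : learning rate epsilon
    lam : neighborhood range lambda
    """
    # Rank feature vectors by distance to x: rank 0 for the closest.
    order = np.argsort(np.linalg.norm(ws - x, axis=1))
    ranks = np.empty(len(ws), dtype=int)
    ranks[order] = np.arange(len(ws))
    # Every vector moves toward x, with a step decaying with its rank.
    ws += eps * np.exp(-ranks / lam)[:, None] * (x - ws)
    return ws
```

In a full run, eps and lam would be annealed toward zero with the step counter t, as described above.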
The adaptation step of the neural gas can be interpreted asgradient descenton acost function. By adapting not only the closest feature vector but all of them with a step size decreasing with increasing distance order, compared to (online)k-means clusteringa much more robust convergence of the algorithm can be achieved. The neural gas model does not delete a node and also does not create new nodes.
Compared to self-organized map, the neural gas model does not assume that some vectors are neighbors. If two vectors happen to be close together, they would tend to move together, and if two vectors happen to be apart, they would tend to not move together. In contrast, in an SOM, if two vectors are neighbors in the underlying graph, then they will always tend to move together, no matter whether the two vectors happen to be neighbors in the Euclidean space.
The name "neural gas" comes from imagining what an SOM would be like if there were no underlying graph, so that all points are free to move without the bonds that bind them together.
A number of variants of the neural gas algorithm exist in the literature to mitigate some of its shortcomings. Perhaps the most notable is Bernd Fritzke's growing neural gas,[5]but one should also mention further elaborations such as the Growing When Required network[6]and the incremental growing neural gas.[7]A performance-oriented approach that avoids the risk of overfitting is the Plastic Neural Gas model.[8]
Fritzke describes the growing neural gas (GNG) as an incremental network model that learns topological relations by using a "Hebb-like learning rule",[5]but, unlike the neural gas, it has no parameters that change over time and it is capable of continuous learning, i.e. learning on data streams. GNG has been widely used in several domains,[9]demonstrating its capabilities for clustering data incrementally. The GNG is initialized with two randomly positioned nodes which are initially connected with a zero-age edge and whose errors are set to 0. Since in the GNG input data is presented sequentially one by one, the following steps are followed at each iteration:
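The per-iteration steps are most easily seen in code. A minimal sketch of one GNG iteration follows (names and default parameter values are illustrative; the periodic node-insertion and global error-decay steps are omitted for brevity):

```python
import numpy as np

def gng_step(nodes, errors, edges, x, eps_b=0.2, eps_n=0.006, a_max=50):
    """One simplified iteration of growing neural gas on input x.

    nodes  : list of d-dimensional numpy vectors
    errors : list of accumulated error scalars, one per node
    edges  : dict mapping frozenset({i, j}) to the edge's age
    """
    # 1. Find the nearest node s1 and second-nearest node s2.
    d = [np.linalg.norm(w - x) for w in nodes]
    s1, s2 = np.argsort(d)[:2]
    # 2. Accumulate squared error at the winner.
    errors[s1] += d[s1] ** 2
    # 3. Move the winner toward x; move its topological neighbors less,
    #    and age the edges emanating from the winner.
    nodes[s1] += eps_b * (x - nodes[s1])
    for e in list(edges):
        if s1 in e:
            (j,) = e - {s1}
            nodes[j] += eps_n * (x - nodes[j])
            edges[e] += 1
    # 4. Connect s1 and s2 with a fresh (age-0) edge.
    edges[frozenset({s1, s2})] = 0
    # 5. Remove edges older than a_max.
    for e in [e for e, age in edges.items() if age > a_max]:
        del edges[e]
    return nodes, errors, edges
```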
Another neural gas variant inspired by the GNG algorithm is the incremental growing neural gas (IGNG). The authors propose the main advantage of this algorithm to be "learning new data (plasticity) without degrading the previously trained network and forgetting the old input data (stability)."[7]
Having a network with a growing set of nodes, like the one implemented by the GNG algorithm was seen as a great advantage, however some limitation on the learning was seen by the introduction of the parameter λ, in which the network would only be able to grow when iterations were a multiple of this parameter.[6]The proposal to mitigate this problem was a new algorithm, the Growing When Required network (GWR), which would have the network grow more quickly, by adding nodes as quickly as possible whenever the network identified that the existing nodes would not describe the input well enough.
The ability to only grow a network may quickly introduce overfitting; on the other hand, removing nodes on the basis of age only, as in the GNG model, does not ensure that the removed nodes are actually useless, because removal depends on a model parameter that should be carefully tuned to the "memory length" of the stream of input data.
The "Plastic Neural Gas" model[8]solves this problem by making decisions to add or remove nodes using an unsupervised version of cross-validation, which controls an equivalent notion of "generalization ability" for the unsupervised setting.
While growing-only methods only cater for theincremental learningscenario, the ability to grow and shrink is suited to the more generalstreaming dataproblem.
To find the rankingi0,i1,…,iN−1{\displaystyle i_{0},i_{1},\ldots ,i_{N-1}}of the feature vectors, the neural gas algorithm involves sorting, which is a procedure that does not lend itself easily to parallelization or implementation in analog hardware. However, implementations in both parallel software[10]and analog hardware[11]were actually designed.
|
https://en.wikipedia.org/wiki/Neural_gas
|
Neural network softwareis used tosimulate,research,develop, and applyartificial neural networks, software concepts adapted frombiological neural networks, and in some cases, a wider array ofadaptive systemssuch asartificial intelligenceandmachine learning.
Neural network simulators are software applications that are used to simulate the behavior of artificial or biological neural networks. They focus on one or a limited number of specific types of neural networks. They are typically stand-alone and not intended to produce general neural networks that can be integrated in other software. Simulators usually have some form of built-invisualizationto monitor the training process. Some simulators also visualize the physical structure of the neural network.
Historically, the most common type of neural network software was intended for researching neural network structures and algorithms. The primary purpose of this type of software is, through simulation, to gain a better understanding of the behavior and the properties of neural networks. Today in the study of artificial neural networks, simulators have largely been replaced by more general component based development environments as research platforms.
Commonly used artificial neural network simulators include theStuttgart Neural Network Simulator(SNNS), andEmergent.
In the study of biological neural networks however, simulation software is still the only available approach. In such simulators the physical biological and chemical properties of neural tissue, as well as the electromagnetic impulses between the neurons are studied.
Commonly used biological network simulators includeNeuron,GENESIS,NESTandBrian.
Unlike the research simulators, data analysis simulators are intended for practical applications of artificial neural networks. Their primary focus is on data mining and forecasting. Data analysis simulators usually have some form of preprocessing capabilities. Unlike the more general development environments, data analysis simulators use a relatively simple static neural network that can be configured. A majority of the data analysis simulators on the market use backpropagating networks or self-organizing maps as their core. The advantage of this type of software is that it is relatively easy to use.Neural Designeris one example of a data analysis simulator.
When theParallel Distributed Processingvolumes[1][2][3]were released in 1986-87, they provided some relatively simple software. The original PDP software did not require any programming skills, which led to its adoption by a wide variety of researchers in diverse fields. The original PDP software was developed into a more powerful package called PDP++, which in turn has become an even more powerful platform calledEmergent. With each development, the software has become more powerful, but also more daunting for use by beginners.
In 1997, the tLearn software was released to accompany a book.[4]This was a return to the idea of providing a small, user-friendly, simulator that was designed with the novice in mind. tLearn allowed basic feed forward networks, along with simple recurrent networks, both of which can be trained by the simple back propagation algorithm. tLearn has not been updated since 1999.
In 2011, the Basic Prop simulator was released. Basic Prop is a self-contained application, distributed as a platform neutral JAR file, that provides much of the same simple functionality as tLearn.
Development environments for neural networks differ from the software described above primarily on two accounts – they can be used to develop custom types of neural networks and they supportdeploymentof the neural network outside the environment. In some cases they have advancedpreprocessing, analysis and visualization capabilities.
A more modern type of development environment, currently favored in both industrial and scientific use, is based on acomponent based paradigm. The neural network is constructed by connecting adaptive filter components in a pipe-and-filter flow. This allows for greater flexibility, as custom networks can be built and custom components can be used by the network. In many cases this allows a combination of adaptive and non-adaptive components to work together. The data flow is controlled by a control system which is exchangeable, as are the adaptation algorithms. The other important feature is deployment capabilities.
With the advent of component-based frameworks such as.NETandJava, component based development environments are capable of deploying the developed neural network to these frameworks as inheritable components. In addition some software can also deploy these components to several platforms, such asembedded systems.
Component based development environments include:PeltarionSynapse,NeuroDimensionNeuroSolutions,Scientific SoftwareNeuro Laboratory, and theLIONsolverintegrated software. Freeopen sourcecomponent based environments includeEncogandNeuroph.
A disadvantage of component-based development environments is that they are more complex than simulators. They require more learning to fully operate and are more complicated to develop.
The majority of neural network implementations available are, however, custom implementations in various programming languages and on various platforms. Basic types of neural networks are simple to implement directly. There are also manyprogramming librariesthat contain neural network functionality and that can be used in custom implementations (such asTensorFlow,Theano, etc., typically providing bindings to languages such asPython,C++,Java).
In order for neural network models to be shared by different applications, a common language is necessary. ThePredictive Model Markup Language(PMML) has been proposed to address this need. PMML is an XML-based language which provides a way for applications to define and share neural network models (and other data mining models) between PMML compliant applications.
PMML provides applications a vendor-independent method of defining models so that proprietary issues and incompatibilities are no longer a barrier to the exchange of models between applications. It allows users to develop models within one vendor's application, and use other vendors' applications to visualize, analyze, evaluate or otherwise use the models. Previously, this was very difficult, but with PMML, the exchange of models between compliant applications is now straightforward.
A range of products are being offered to produce and consume PMML. This ever-growing list includes the following neural network products:
|
https://en.wikipedia.org/wiki/Neural_network_software
|
Anoptical neural networkis a physical implementation of anartificial neural networkwithoptical components. Early optical neural networks used a photorefractiveVolume hologramto interconnect arrays of input neurons to arrays of output with synaptic weights in proportion to the multiplexed hologram's strength.[2]Volume holograms were further multiplexed using spectral hole burning to add one dimension of wavelength to space to achieve four dimensional interconnects of two dimensional arrays of neural inputs and outputs.[3]This research led to extensive research on alternative methods using the strength of the optical interconnect for implementing neuronal communications.[4]
Some artificial neural networks that have been implemented as optical neural networks include theHopfield neural network[5]and the Kohonenself-organizing mapwithliquid crystalspatial light modulators.[6]Optical neural networks can also be based on the principles ofneuromorphic engineering, creatingneuromorphic photonic systems. Typically, these systems encode information in the networks using spikes, mimicking the functionality ofspiking neural networksin optical and photonic hardware. Photonic devices that have demonstrated neuromorphic functionalities include (among others)vertical-cavity surface-emitting lasers,[7][8]integrated photonic modulators,[9]optoelectronic systems based onsuperconductingJosephson junctions[10]or systems based onresonant tunnelling diodes.[11]
Biological neural networksfunction on an electrochemical basis, while optical neural networks use electromagnetic waves. Optical interfaces tobiological neural networkscan be created withoptogenetics, but this is not the same as an optical neural network. In biological neural networks there exist many different mechanisms for dynamically changing the state of the neurons; these include short-term and long-termsynaptic plasticity. Synaptic plasticity is among the electrophysiological phenomena used to control the efficiency of synaptic transmission, long-term for learning and memory, and short-term for short transient changes in synaptic transmission efficiency. Implementing this with optical components is difficult, and ideally requires advanced photonic materials. Properties that might be desirable in photonic materials for optical neural networks include the ability to change their efficiency of transmitting light, based on the intensity of incoming light.
With the increasing significance of computer vision in various domains, the computational cost of these tasks has increased, making it more important to develop new approaches to accelerate processing. Optical computing has emerged as a potential alternative to GPU acceleration for modern neural networks, particularly considering the looming obsolescence of Moore's Law. Consequently, optical neural networks have garnered increased attention in the research community. Presently, two primary methods of optical neural computing are under research: silicon photonics-based and free-space optics. Each approach has its benefits and drawbacks; while silicon photonics may offer superior speed, it lacks the massive parallelism that free-space optics can deliver.
Given the substantial parallelism capabilities of free-space optics, researchers have focused on taking advantage of it. One implementation, proposed by Lin et al.,[12]involves the training and fabrication of phase masks for a handwritten digit classifier. By stacking 3D-printed phase masks, light passing through the fabricated network can be read by a photodetector array of ten detectors, each representing a digit class ranging from 1 to 10. Although this network can achieve terahertz-range classification, it lacks flexibility, as the phase masks are fabricated for a specific task and cannot be retrained.
An alternative method for classification in free-space optics, introduced by Chang et al.,[13]employs a 4F system that is based on the convolution theorem to perform convolution operations. This system uses two lenses to execute the Fourier transforms of the convolution operation, enabling passive conversion into the Fourier domain without power consumption or latency. However, the convolution operation kernels in this implementation are also fabricated phase masks, limiting the device's functionality to specific convolutional layers of the network only.
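The operation a 4F system performs optically — Fourier transform by the first lens, element-wise multiplication by the mask in the Fourier plane, inverse transform by the second lens — is, in digital terms, convolution via the convolution theorem. A digital sketch of that computation (circular boundary conditions; names are illustrative, this is not optics-simulation code):

```python
import numpy as np

def conv2d_via_fft(image, kernel):
    """Convolution via the convolution theorem, the operation a 4F system
    performs: transform the input, multiply by the kernel's transform
    (the Fourier-plane mask), then inverse-transform."""
    H, W = image.shape
    # Zero-pad the kernel to the image size so the product is element-wise.
    K = np.fft.fft2(kernel, s=(H, W))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))
```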
In contrast, Li et al.[14]proposed a technique involving kernel tiling to use the parallelism of the 4F system while using a Digital Micromirror Device (DMD) instead of a phase mask. This approach allows users to upload various kernels into the 4F system and execute the entire network's inference on a single device. Unfortunately, modern neural networks are not designed for 4F systems, as they were developed primarily during the CPU/GPU era and tend to use low-resolution feature maps with a high number of channels.
In 2007 there was one model of optical neural network: the Programmable Optical Array/Analogic Computer (POAC). It had been implemented in 2000 and reported, based on a modified Joint Fourier Transform Correlator (JTC) with Bacteriorhodopsin (BR) as a holographic optical memory. Full parallelism, large array size and the speed of light are three promises offered by POAC for implementing an optical CNN. These had been investigated over the following years with their practical limitations and considerations, yielding the design of the first portable POAC version.
The practical details – hardware (optical setups) and software (optical templates) – were published. However, POAC is a general purpose and programmable array computer that has a wide range of applications including:
Taichifrom Tsinghua University in Beijing is a hybrid ONN that combines the power efficiency and parallelism of optical diffraction and the configurability of optical interference. Taichi offers 13.96 million parameters. Taichi avoids the high error rates that afflict deep (multi-layer) networks by combining clusters of fewer-layer diffractive units with arrays of interferometers for reconfigurable computation. Its encoding protocol divides large network models into sub-models that can be distributed across multiple chiplets in parallel.[15]
Taichi achieved 91.89% accuracy in tests with theOmniglotdatabase. It was also used to generate music in the style ofBachand to generate images in the styles ofVan GoghandMunch.[15]
The developers claimed energy efficiency of up to 160 trillion operations second−1watt−1and an area efficiency of 880 trillion multiply-accumulate operations mm−2, or 10³ times more energy efficient than theNVIDIA H100, and 10² times more energy efficient and 10 times more area efficient than previous ONNs.[15]
A time dimension has recently been introduced into diffractive neural networks by fs-laser lithography of perovskite hydration. The temporal behaviour of the neuron can be modulated by the fs laser at the nanoscale, enabling a programmable holographic neural network with temporal evolution functionality, i.e., the functionality can change with time under hydration stimuli. An in-memory temporal inference functionality was demonstrated to mimic the functional evolution of the human brain, i.e., the functionality can change over time from simple digit image classification to more complicated digit and clothing-product image classification. This is the first introduction of a time dimension into an optical neural network, laying a foundation for future brain-like photonic chip development.[16]
|
https://en.wikipedia.org/wiki/Optical_neural_network
|
Connectionismis an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks.[1]
Connectionism has had many "waves" since its beginnings. The first wave appeared in 1943 withWarren Sturgis McCullochandWalter Pittsboth focusing on comprehending neural circuitry through a formal and mathematical approach,[2]andFrank Rosenblattwho published the 1958 paper "The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain" inPsychological Review, while working at the Cornell Aeronautical Laboratory.[3]The first wave ended with the 1969 book about the limitations of the original perceptron idea, written byMarvin MinskyandSeymour Papert, which contributed to discouraging major funding agencies in the US from investing in connectionist research.[4]With a few noteworthy deviations, most connectionist research entered a period of inactivity until the mid-1980s. The termconnectionist modelwas reintroduced in a 1982 paper in the journalCognitive Scienceby Jerome Feldman and Dana Ballard.
The second wave blossomed in the late 1980s, following a 1987 book about Parallel Distributed Processing byJames L. McClelland,David E. Rumelhartet al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and used asigmoidactivation functioninstead of the old "all-or-nothing" function. Their work built upon that ofJohn Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions.[3]From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone when Schneider,[5]Terence Horganand Tienson posed the question of whether connectionism represented afundamental shiftin psychology and so-called "good old-fashioned AI," orGOFAI.[3]Some advantages of the second wave connectionist approach included its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity forgraceful degradation.[6]Its disadvantages included the difficulty in deciphering how ANNs process information or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level.[7]
The current (third) wave has been marked by advances indeep learning, which have made possible the creation oflarge language models.[3]The success of deep-learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks has brought with them increasedinterpretability problems.[8]
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could representneuronsand the connections could representsynapses, as in thehuman brain. This principle has been seen as an alternative to GOFAI and the classicaltheories of mindbased on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.[8]
Internal states of any network change over time due to neurons sending a signal to a succeeding layer of neurons in the case of a feedforward network, or to a previous layer in the case of a recurrent network. Discovery of non-linear activation functions has enabled the second wave of connectionism.
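As a minimal illustration of such layer-to-layer signal propagation (all names and shapes here are illustrative, not taken from any particular connectionist model), a feedforward pass through one hidden layer with a sigmoid non-linearity looks like:

```python
import numpy as np

def sigmoid(z):
    """The smooth non-linear activation that replaced the old
    all-or-nothing threshold in second-wave connectionist models."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Feedforward pass: input units -> hidden layer -> output units.
    W1 has shape (hidden, inputs); W2 has shape (outputs, hidden)."""
    hidden = sigmoid(W1 @ x + b1)   # signal sent to the succeeding layer
    return sigmoid(W2 @ hidden + b2)
```

A recurrent network would differ only in feeding some of these activations back to an earlier layer at the next time step.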
Neural networks follow two basic principles:
Most of the variety among the models comes from:
Connectionist work in general does not need to be biologically realistic.[10][11][12][13][14][15][16]One area where connectionist models are thought to be biologically implausible is with respect to error-propagation networks that are needed to support learning,[17][18]but error propagation can explain some of the biologically-generated electrical activity seen at the scalp inevent-related potentialssuch as theN400andP600,[19]and this provides some biological support for one of the key assumptions of connectionist learning procedures. Many recurrent connectionist models also incorporatedynamical systems theory. Many researchers, such as the connectionistPaul Smolensky, have argued that connectionist models will evolve toward fullycontinuous, high-dimensional,non-linear,dynamic systemsapproaches.
Precursors of the connectionist principles can be traced to early work inpsychology, such as that ofWilliam James.[20]Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologistJohn Hughlings Jacksonargued for multi-level, distributed systems. Following from this lead,Herbert Spencer'sPrinciples of Psychology, 3rd edition (1872), andSigmund Freud'sProject for a Scientific Psychology(composed 1895) propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century,Edward Thorndikewas writing abouthuman learningin terms that posited a connectionist-type network.[21]
Hopfield networks had precursors in theIsing modeldue toWilhelm Lenz(1920) andErnst Ising(1925), though the Ising model conceived by them did not involve time.Monte Carlosimulations of Ising model required the advent of computers in the 1950s.[22]
The first wave began in 1943 withWarren Sturgis McCullochandWalter Pittsboth focusing on comprehending neural circuitry through a formal and mathematical approach. McCulloch and Pitts showed how neural systems could implementfirst-order logic: their classic paper "A Logical Calculus of Ideas Immanent in Nervous Activity" (1943) is important in this development. They were influenced by the work ofNicolas Rashevskyin the 1930s and symbolic logic in the style ofPrincipia Mathematica.[23][3]
Hebbcontributed greatly to speculations about neural functioning, and proposed a learning principle,Hebbian learning.Lashleyargued for distributed representations as a result of his failure to find anything like a localizedengramin years oflesionexperiments.Friedrich Hayekindependently conceived the model, first in a brief unpublished manuscript in 1920,[24][25]then expanded into a book in 1952.[26]
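Hebb's learning principle is usually summarized in textbooks as the update Δwᵢ = η·y·xᵢ: a connection is strengthened in proportion to the correlated pre- and post-synaptic activity. A minimal sketch of this rule for a single linear unit (names and the learning rate are illustrative):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step: weights grow in proportion to the product of
    pre-synaptic input x and post-synaptic output y = w . x."""
    y = w @ x                 # post-synaptic activity
    return w + eta * y * x    # delta w_i = eta * y * x_i
```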
The Perceptron machines were proposed and built byFrank Rosenblatt, who published the 1958 paper “The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain” inPsychological Review, while working at the Cornell Aeronautical Laboratory. He cited Hebb, Hayek, Uttley, andAshbyas main influences.
Another form of connectionist model was therelational networkframework developed by thelinguistSydney Lambin the 1960s.
The research group led by Widrow empirically searched for methods to train two-layeredADALINEnetworks (MADALINE), with limited success.[27][28]
A method to train multilayered perceptrons with arbitrary levels of trainable weights was published byAlexey Grigorevich Ivakhnenkoand Valentin Lapa in 1965, called theGroup Method of Data Handling. This method employs incremental layer by layer training based onregression analysis, where useless units in hidden layers are pruned with the help of a validation set.[29][30][31]
The first multilayered perceptrons trained bystochastic gradient descent[32]were published in 1967 byShun'ichi Amari.[33]In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned usefulinternal representationsto classify non-linearly separable pattern classes.[30]
In 1972,Shun'ichi Amariproduced an early example ofself-organizing network.[34]
There was some conflict among artificial intelligence researchers as to what neural networks are useful for. Around the late 1960s, there was a widespread lull in research and publications on neural networks, "the neural network winter", which lasted through the 1970s, during which the field of artificial intelligence turned towards symbolic methods. The publication ofPerceptrons(1969) is typically regarded as a catalyst of this event.[35][36]
The second wave began in the early 1980s. Some key publications included (John Hopfield, 1982),[37]which popularizedHopfield networks, the 1986 paper that popularized backpropagation,[38]and the 1987 two-volume book aboutParallel Distributed Processing(PDP) byJames L. McClelland,David E. Rumelhartet al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of asigmoidactivation functioninstead of the old "all-or-nothing" function.
Hopfield approached the field from the perspective of statistical mechanics, providing some early forms of mathematical rigor that increased the perceived respectability of the field.[3]Another important series of publications proved that neural networks areuniversal function approximators, which also provided some mathematical respectability.[39]
Some early popular demonstration projects appeared during this time.NETtalk(1987) learned to pronounce written English. It achieved popular success, appearing on theTodayshow.[40]TD-Gammon(1992) reached top human level inbackgammon.[41]
As connectionism became increasingly popular in the late 1980s, some researchers (includingJerry Fodor,Steven Pinkerand others) reacted against it. They argued that connectionism, as then developing, threatened to obliterate what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach ofcomputationalism. Computationalism is a specific form of cognitivism that argues that mental activity iscomputational, that is, that the mind operates by performing purely formal operations on symbols, like aTuring machine. Some researchers argued that the trend in connectionism represented a reversion towardassociationismand the abandonment of the idea of alanguage of thought, something they saw as mistaken. In contrast, those very tendencies made connectionism attractive for other researchers.
Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches. Throughout the debate, some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. Differences between the two approaches include the following:
Despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which organic brains happen to implement the symbol-manipulation system. This is logically possible, as it is well known that connectionist models can implement symbol-manipulation systems of the kind used in computationalist models,[42]as indeed they must be able to do if they are to explain the human ability to perform symbol-manipulation tasks. Several cognitive models combining both symbol-manipulative and connectionist architectures have been proposed. Among them arePaul Smolensky's Integrated Connectionist/Symbolic Cognitive Architecture (ICS)[8][43]andRon Sun'sCLARION (cognitive architecture). But the debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of cognition of logic, for example.
The debate was largely centred on logical arguments about whether connectionist networks could produce the syntactic structure observed in this sort of reasoning. This was later achieved, although by using fast-variable binding abilities outside of those standardly assumed in connectionist models.[42][44]
Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are in general more opaque, to the extent that they may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.), or in unhelpfully low-level terms. In this sense, connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e., connectionism), without representing a helpful theory of the particular process that is being modelled. In this sense, the debate might be considered as to some extent reflecting a mere difference in the level of analysis in which particular theories are framed. Some researchers suggest that the analysis gap is the consequence of connectionist mechanisms giving rise toemergent phenomenathat may be describable in computational terms.[45]
In the 2000s, the popularity of dynamical systems in philosophy of mind has added a new perspective to the debate;[46][47] some authors now argue that any split between connectionism and computationalism is more accurately characterized as a split between computationalism and dynamical systems.
In 2014, Alex Graves and others from DeepMind published a series of papers describing a novel deep neural network structure called the Neural Turing Machine,[48] able to read symbols on a tape and store symbols in memory. Relational Networks, another deep network module published by DeepMind, are able to create object-like representations and manipulate them to answer complex questions. Relational Networks and Neural Turing Machines are further evidence that connectionism and computationalism need not be at odds.
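The core memory mechanism of the Neural Turing Machine is content-based addressing: the controller emits a key vector, and reading is a softmax-weighted sum over memory rows by similarity to that key, which keeps the whole operation differentiable. A minimal sketch of that one mechanism (the function names, the toy memory and the sharpness parameter `beta` are illustrative, not taken from the paper):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def content_read(memory, key, beta=5.0):
    """Differentiable content-based read: softmax over key/row
    similarities, then a weighted sum of the memory rows."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
vec = content_read(memory, [1.0, 0.0])
# the read vector is pulled toward the row most similar to the key
```

Because every step is smooth, gradients flow through the read weights, which is what lets the network learn tape-like read/write behaviour by gradient descent.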
Smolensky's Subsymbolic Paradigm[49][50]has to meet the Fodor-Pylyshyn challenge[51][52][53][54]formulated by classical symbol theory for a convincing theory of cognition in modern connectionism. In order to be an adequate alternative theory of cognition, Smolensky's Subsymbolic Paradigm would have to explain the existence of systematicity or systematic relations in language cognition without the assumption that cognitive processes are causally sensitive to the classical constituent structure of mental representations. The subsymbolic paradigm, or connectionism in general, would thus have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture. This challenge implies a dilemma: If the Subsymbolic Paradigm could contribute nothing to the systematicity and compositionality of mental representations, it would be insufficient as a basis for an alternative theory of cognition. However, if the Subsymbolic Paradigm's contribution to systematicity requires mental processes grounded in the classical constituent structure of mental representations, the theory of cognition it develops would be, at best, an implementation architecture of the classical model of symbol theory and thus not a genuine alternative (connectionist) theory of cognition.[55]The classical model of symbolism is characterized by (1) a combinatorial syntax and semantics of mental representations and (2) mental operations as structure-sensitive processes, based on the fundamental principle of syntactic and semantic constituent structure of mental representations as used in Fodor's "Language of Thought (LOT)".[56][57]This can be used to explain the following closely related properties of human cognition, namely its (1) productivity, (2) systematicity, (3) compositionality, and (4) inferential coherence.[58]
This challenge has been met in modern connectionism, for example, not only by Smolensky's "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture",[59][60]but also by Werning and Maye's "Oscillatory Networks".[61][62][63]An overview of this is given for example by Bechtel & Abrahamsen,[64]Marcus[65]and Maurer.[66]
Recently, Heng Zhang and his colleagues have demonstrated that mainstream knowledge representation formalisms are, in fact, recursively isomorphic, provided they possess equivalent expressive power.[67]This finding implies that there is no fundamental distinction between using symbolic or connectionist knowledge representation formalisms for the realization ofartificial general intelligence(AGI). Moreover, the existence of recursive isomorphisms suggests that different technical approaches can draw insights from one another.
https://en.wikipedia.org/wiki/Parallel_distributed_processing
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science[1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology,[2] and free will.[3][4] Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.[5] These factors contributed to the emergence of the philosophy of artificial intelligence.
The philosophy of artificial intelligence attempts to answer such questions as follows:[6]
Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.
Important propositions in the philosophy of AI include some of the following:
Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers, evoking the question: does it matter whether a machine is really thinking, as a person thinks, rather than just producing outcomes that appear to result from thinking?[12]
The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:
Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.
It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal,[13] essentially achieves the desired feature of intelligence without a precise design-time description as to how it would exactly work. The account of robot tacit knowledge[14] eliminates the need for a precise description altogether.
The first step to answering the question is to clearly define "intelligence".
Alan Turing[16] reduced the problem of defining intelligence to a simple question about conversation. He suggests that: if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[7] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks".[17] Turing's test extends this polite convention to machines:
One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".[18]
Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world."[19]
Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[20]
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes.[22]They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.[23]
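The agent abstraction can be made concrete in a few lines, and the thermostat objection falls straight out of it. In the sketch below, the environment, the heating constants and the "stay near 20 degrees" performance measure are all invented for the example; the point is only that a thermostat satisfies the agent schema, which is exactly why the definition struggles to separate "things that think" from "things that do not":

```python
def thermostat_agent(percept):
    """A trivial agent: maps a temperature percept to an action."""
    return "heat_on" if percept < 19.0 else "heat_off"

def run(agent, steps=40):
    """Toy environment plus performance measure: one point for
    every step the room stays within one degree of 20."""
    temp, score = 15.0, 0
    for _ in range(steps):
        if agent(temp) == "heat_on":
            temp += 1.5       # the heater warms the room
        temp -= 0.5           # constant heat loss to the outside
        if abs(temp - 20.0) < 1.0:
            score += 1
    return score
```

Under this performance measure the thermostat scores strictly better than an agent that does nothing, so by the goal-directed definition it counts as (rudimentarily) intelligent.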
Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device".[24] This argument, first introduced as early as 1943[25] and vividly described by Hans Moravec in 1988,[26] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.[27] A non-real-time simulation of a thalamocortical model that has the size of the human brain (10^11 neurons) was performed in 2005,[28] and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
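The scale of that 2005 simulation is easy to put in perspective with a quick calculation: 50 days of wall-clock time for 1 second of simulated dynamics is a slowdown factor of over four million.

```python
seconds_per_day = 24 * 60 * 60
wall_clock = 50 * seconds_per_day   # 50 days of computation, in seconds
simulated = 1                       # for 1 second of brain dynamics
slowdown = wall_clock / simulated   # 4,320,000x slower than real time

# at this rate, simulating one full day of brain activity would
# itself take `slowdown` days of computing:
years = slowdown / 365.25           # roughly 11,800 years
```

The arithmetic is the whole point: even granting the argument in principle, the 2005 hardware was about seven orders of magnitude short of real time.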
Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory.[a] However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[31] Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering.[32]
In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[33] Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":
The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level—symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.
These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.
In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism.[35] Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument.[36]
Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is probably impossible for a Turing machine to do (see Halting problem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device.
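The halting-problem diagonalization behind this limit can be sketched directly. Given any total, always-terminating predicate that claims to predict halting, one can construct a program on which the prediction is necessarily wrong. In this toy construction, `claims_halts` and `make_diagonal` are illustrative stand-ins for the formal argument:

```python
def make_diagonal(claims_halts):
    """Build a program that does the opposite of whatever
    claims_halts predicts about it."""
    def diag():
        if claims_halts(diag):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately
    return diag

# a (necessarily wrong) decider that predicts nothing ever halts:
d = make_diagonal(lambda program: False)
d()  # halts, contradicting its own prediction
```

No matter what `claims_halts` answers about `diag`, `diag` does the opposite, so no such total decider can be correct on every program; this is the sense in which a consistent, self-certifying reasoner exceeds what a Turing machine can do.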
However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate.[37][38][39] This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."[40]
Stuart Russell and Peter Norvig agree that Gödel's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person.[41]
Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying".[42] But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider:
This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.[44]
After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing computable tasks implies that they cannot be sufficient for emulating the human mind. Therefore, Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron.[45] However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.[46]
Hubert Dreyfus argued that human intelligence and expertise depended primarily on fast intuitive judgements rather than step-by-step symbolic manipulation, and argued that these skills would never be captured in formal rules.[47]
Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the "argument from the informality of behavior."[48] Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[49]
Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning.[50] The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.[51] Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on, are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our intuitive reasoning.[50]
Cognitive science and psychology eventually came to agree with Dreyfus's description of human expertise. Daniel Kahneman and others developed a similar theory identifying two "systems" that humans use to solve problems, called "System 1" (fast intuitive judgements) and "System 2" (slow deliberate step-by-step thinking).[52]
Although Dreyfus's views have been vindicated in many ways, the work in cognitive science and in AI was in response to specific problems in those fields and was not directly influenced by Dreyfus. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[53]
This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":
Searle distinguished this position from what he called "weak AI":
Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.[10]
Neither of Searle's two positions is of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]."[54] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[55]
There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence". (See artificial consciousness.)
Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".
The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience", "self-awareness" or "ghost"—as in the Ghost in the Shell manga and anime series—to describe this essential human property). For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.
For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something.[56] "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle.[57] What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?
Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem".[58] A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?[59]
Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain.[60] The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?
John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly are not aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[61]
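Searle's room can be caricatured in a few lines of code: a rule table pairs input symbols with output symbols and produces apparently sensible replies while nothing in the system understands either string. The rules and phrases below are invented for illustration; a program that actually passed the Turing test would be vastly larger, but its relation to understanding would, on Searle's view, be exactly the same.

```python
# the "cards": purely syntactic rules pairing input symbols with replies
rule_cards = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "yes."
}

def chinese_room(message):
    """The man in the room: look up the symbols and copy out the
    matching reply, with no grasp of what either string means."""
    return rule_cards.get(message, "请再说一遍。")  # "please say that again."
```

The lookup is the entire "conversation": the man, the cards and the room jointly implement the mapping, and Searle's question is whether anything anywhere in that system has the mental state of understanding.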
Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains."[62] He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds."[63]
Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill.[64] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym".[65] Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.
Responses to the Chinese room emphasize several different points.
The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program (software) and a computer (hardware). The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules).[72] The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.[73]
This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):
In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):
This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).[74]
If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people".[75] Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[75] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[76]
"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger.[77]
Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.[78] He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.[79] It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.[80]
In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings.[81] Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
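Eureqa-style law discovery can be illustrated with a toy version: generate pendulum periods from the known law T = 2π√(L/g), then recover that law by fitting a power model T = c·L^k with a log-log least-squares regression. Eureqa itself searches over whole symbolic expressions; this sketch only fits one assumed form, and all names and values are illustrative.

```python
import math

g = 9.81
lengths = [0.25, 0.5, 1.0, 2.0, 4.0]                 # pendulum lengths (m)
periods = [2 * math.pi * math.sqrt(L / g) for L in lengths]

# fit T = c * L^k via linear regression on log T = log c + k * log L
xs = [math.log(L) for L in lengths]
ys = [math.log(T) for T in periods]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
c = math.exp(my - k * mx)
# k comes out ~0.5 and c ~ 2*pi/sqrt(g), recovering T = 2*pi*sqrt(L/g)
```

Fitting an exponent to noiseless synthetic data is of course the easy part; Eureqa's contribution was searching the space of candidate expressions, not the regression itself.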
This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.[54]
The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction.
One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity".[82] He suggests that it may be somewhat or possibly very dangerous for humans.[83] This is discussed by a philosophy called Singularitarianism.
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[82]
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[84]The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[85][86]
The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[87] They point to programs like the Language Acquisition Device which can emulate human interaction.
Some have suggested a need to build "Friendly AI", a term coined by Eliezer Yudkowsky, meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[88]
Turing said "It is customary ... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set."[89]
Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.[77]
Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression."[77]All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.
Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man'simmortalsoul." Alan Turing called this "the theological objection". He writes:
In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.[90]
The discussion on the topic has been reignited by recent claims made by Google's LaMDA artificial intelligence system that it is sentient and has a "soul".[91]
LaMDA (Language Model for Dialogue Applications) is an artificial intelligence system that creates chatbots (AI robots designed to communicate with humans) by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible.
The transcripts of conversations between scientists and LaMDA reveal that the AI system excels at this, providing answers to challenging topics about the nature of emotions, generating Aesop-style fables on the spot, and even describing its alleged fears.[92] However, virtually all philosophers doubt that LaMDA is sentient.[93]
Some scholars argue that the AI community's dismissal of philosophy is detrimental. In theStanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated.[5]PhysicistDavid Deutschargues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress.[94]
The main conference series on the issue is"Philosophy and Theory of AI"(PT-AI), run byVincent C. Müller.
The main bibliography on the subject, with several sub-sections, is onPhilPapers.
A recent survey forPhilosophy of AIis Müller (2023).[4]
|
https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence
|
Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley,[1][2] engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms.[3][4][5] One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.
Most quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits. That layer evaluates the information and passes on the output to the next layer, and the path eventually leads to the final layer of qubits.[6][7] The layers do not have to be of the same width: a layer need not have the same number of qubits as the layer before or after it. This structure is trained on which path to take, similar to classical artificial neural networks; this is discussed in a later section. Quantum neural networks refer to three different categories: quantum computer with classical data, classical computer with quantum data, and quantum computer with quantum data.[6]
Quantum neural network research is still in its infancy, and a conglomeration of proposals and ideas of varying scope and mathematical rigor have been put forward. Most of them are based on the idea of replacing classical binary orMcCulloch-Pitts neuronswith aqubit(which can be called a “quron”), resulting in neural units that can be in asuperpositionof the state ‘firing’ and ‘resting’.
A lot of proposals attempt to find a quantum equivalent for theperceptronunit from which neural nets are constructed. A problem is that nonlinear activation functions do not immediately correspond to the mathematical structure of quantum theory, since a quantum evolution is described by linear operations and leads to probabilistic observation. Ideas to imitate the perceptron activation function with a quantum mechanical formalism reach from special measurements[8][9]to postulating non-linear quantum operators (a mathematical framework that is disputed).[10][11]A direct implementation of the activation function using thecircuit-based model of quantum computationhas recently been proposed by Schuld, Sinayskiy and Petruccione based on thequantum phase estimation algorithm.[12]
At a larger scale, researchers have attempted to generalize neural networks to the quantum setting. One way of constructing a quantum neuron is to first generalise classical neurons and then generalising them further to make unitary gates. Interactions between neurons can be controlled quantumly, withunitarygates, or classically, viameasurementof the network states. This high-level theoretical technique can be applied broadly, by taking different types of networks and different implementations of quantum neurons, such asphotonicallyimplemented neurons[7][13]andquantum reservoir processor(quantum version ofreservoir computing).[14]Most learning algorithms follow the classical model of training an artificial neural network to learn the input-output function of a giventraining setand use classical feedback loops to update parameters of the quantum system until they converge to an optimal configuration. Learning as a parameter optimisation problem has also been approached by adiabatic models of quantum computing.[15]
Quantum neural networks can be applied to algorithmic design: givenqubitswith tunable mutual interactions, one can attempt to learn interactions following the classicalbackpropagationrule from atraining setof desired input-output relations, taken to be the desired output algorithm's behavior.[16][17]The quantum network thus ‘learns’ an algorithm.
The first quantum associative memory algorithm was introduced by Dan Ventura and Tony Martinez in 1999.[18]The authors do not attempt to translate the structure of artificial neural network models into quantum theory, but propose an algorithm for acircuit-based quantum computerthat simulatesassociative memory. The memory states (inHopfield neural networkssaved in the weights of the neural connections) are written into a superposition, and aGrover-like quantum search algorithmretrieves the memory state closest to a given input. As such, this is not a fully content-addressable memory, since only incomplete patterns can be retrieved.
The first truly content-addressable quantum memory, which can retrieve patterns also from corrupted inputs, was proposed by Carlo A. Trugenberger.[19][20][21]Both memories can store an exponential (in terms of n qubits) number of patterns but can be used only once due to the no-cloning theorem and their destruction upon measurement.
Trugenberger,[20] however, has shown that his probabilistic model of quantum associative memory can be efficiently implemented and re-used multiple times for any polynomial number of stored patterns, a large advantage over classical associative memories.
A substantial amount of interest has been given to a “quantum-inspired” model that uses ideas from quantum theory to implement a neural network based onfuzzy logic.[22]
Quantum neural networks can in theory be trained similarly to classical artificial neural networks. A key difference lies in communication between the layers of the network. For classical neural networks, at the end of a given operation, the current perceptron copies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this would violate the no-cloning theorem.[6][23] A proposed generalized solution is to replace the classical fan-out method with an arbitrary unitary that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary (U_f) together with a dummy state qubit in a known state (e.g. |0⟩ in the computational basis), also known as an ancilla bit, the information from the qubit can be transferred to the next layer of qubits.[7] This process adheres to the quantum operation requirement of reversibility.[7][24]
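As an illustration, a CNOT gate acts as the simplest such fan-out unitary. The NumPy sketch below (the two-dimensional example state is an assumption for illustration) shows that it spreads computational-basis information onto an ancilla prepared in |0⟩ without producing a second copy of the state, so the no-cloning theorem is respected:

```python
import numpy as np

# Output qubit of a "quantum perceptron" in a superposition state.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])          # a|0> + b|1>
ancilla = np.array([1.0, 0.0])         # dummy qubit prepared in |0>

# Joint state |psi> (x) |0> as a 4-vector in the basis |00>,|01>,|10>,|11>.
state = np.kron(psi, ancilla)

# CNOT: flips the ancilla iff the first qubit is |1>. It spreads the
# basis information of the first qubit to the ancilla without copying
# the full quantum state (which would violate no-cloning).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
out = CNOT @ state                     # -> a|00> + b|11>

# Basis information is now shared with the ancilla...
assert np.allclose(out, [alpha, 0, 0, beta])
# ...but the result is entangled, not two independent copies of |psi>:
clone = np.kron(psi, psi)              # what true cloning would give
assert not np.allclose(out, clone)
```

Note that the output is an entangled state, not a product of two copies, which is exactly why the text speaks of "spreading out" rather than copying.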
Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers, as seen in the sample model neural network above. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator only acts on its respective input, only two layers are used at any given time.[6] In other words, no unitary operator acts on the entire network at any given time, meaning the number of qubits required for a given step depends on the number of inputs in a given layer. Since quantum computers can run multiple iterations in a short period of time, the efficiency of a quantum neural network depends solely on the number of qubits in any given layer, and not on the depth of the network.[24]
To determine the effectiveness of a neural network, a cost function is used, which measures how close the network's output is to the expected or desired output. In a classical neural network, the weights (w) and biases (b) at each step determine the outcome of the cost function C(w,b).[6] When training a classical neural network, the weights and biases are adjusted after each iteration, and given equation 1 below, where y(x) is the desired output and a^out(x) is the actual output, the cost function is optimized when C(w,b) = 0. For a quantum neural network, the cost function is determined by measuring the fidelity of the outcome state (ρ^out) with the desired outcome state (φ^out), as in equation 2 below. In this case, the unitary operators are adjusted after each iteration, and the cost function is optimized when C = 1.[6]
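The two equations referenced above are not reproduced in this extract. A standard form consistent with the surrounding definitions (the 1/2N normalization and the averaging over N training pairs are conventions assumed here) is:

```latex
% Eq. 1: classical quadratic cost over N training examples
C(w,b) \;=\; \frac{1}{2N} \sum_{x} \bigl\lVert\, y(x) - a^{\text{out}}(x) \,\bigr\rVert^{2}

% Eq. 2: quantum cost as the average fidelity of the network output
% state with the desired output state
C \;=\; \frac{1}{N} \sum_{x} \bigl\langle \phi^{\text{out}}(x) \,\big|\, \rho^{\text{out}}(x) \,\big|\, \phi^{\text{out}}(x) \bigr\rangle
```

With these conventions, equation 1 is minimized at C(w,b) = 0 (exact agreement) and equation 2 is maximized at C = 1 (perfect fidelity), matching the optimality conditions stated above.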
Gradient descent is widely used and successful in classical algorithms. However, although QNN structures can be very similar to classical networks such as CNNs, gradient-based training performs much worse for QNNs.
Since the dimension of the quantum state space grows exponentially with the number of qubits, measured observables concentrate around their mean value at an exponential rate, and their gradients likewise become exponentially small.[26]
This situation is known as the barren plateau problem, because most of the initial parameters are trapped on a "plateau" of almost-zero gradient, where optimization resembles a random walk[26] rather than gradient descent. This makes the model untrainable.
This problem affects not only QNNs but almost all deep variational quantum algorithms (VQAs). In the present NISQ era, it is one of the obstacles that must be overcome before the various VQA algorithms, including QNNs, can find wider application.
|
https://en.wikipedia.org/wiki/Quantum_neural_network
|
Spiking neural networks(SNNs) areartificial neural networks(ANN) that mimic natural neural networks.[1]These models leverage timing of discrete spikes as the main information carrier.[2]
In addition toneuronalandsynapticstate, SNNs incorporate the concept of time into their operating model. The idea is thatneuronsin the SNN do not transmit information at each propagation cycle (as it happens with typical multi-layerperceptron networks), but rather transmit information only when amembrane potential—an intrinsic quality of the neuron related to itsmembraneelectrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called aspiking neuron model.[3]
While spike rates can be considered the analogue of the variable output of a traditional ANN,[4] neurobiology research indicates that high-speed processing cannot be performed solely through a rate-based scheme. For example, humans can perform an image recognition task in no more than 10 ms of processing time per neuron through the successive layers (going from the retina to the temporal lobe). This time window is too short for rate-based encoding. The precise spike timings in a small set of spiking neurons also have a higher information coding capacity than a rate-based approach.[5]
The most prominent spiking neuron model is theleaky integrate-and-firemodel.[6]In that model, the momentary activation level (modeled as adifferential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state eventually either decays or—if the firing threshold is reached—the neuron fires. After firing, the state variable is reset to a lower value.
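A minimal discrete-time (Euler) simulation of the leaky integrate-and-fire dynamics described above might look as follows; the parameter values are illustrative, not taken from any particular source:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, threshold=1.0):
    """Euler integration of dv/dt = (v_rest - v)/tau + I(t).
    Returns spike times (in steps) and the membrane potential trace."""
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(input_current):
        v += dt * ((v_rest - v) / tau + i_t)  # leak toward rest, push by input
        if v >= threshold:                    # threshold crossing: fire...
            spikes.append(t)
            v = v_reset                       # ...and reset to a lower value
        trace.append(v)
    return spikes, np.array(trace)

# A supra-threshold constant current produces a regular spike train
# (steady-state potential tau*I = 2.0 exceeds the threshold of 1.0),
# while a weaker current (steady state 0.2) never reaches threshold.
strong, _ = simulate_lif(np.full(200, 0.10))
weak, _ = simulate_lif(np.full(200, 0.01))
assert len(strong) > 0 and len(weak) == 0
```

The reset after each threshold crossing is what makes the output a train of discrete spikes rather than a continuous activation value.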
Various decoding methods exist for interpreting the outgoingspike trainas a real-value number, relying on either the frequency of spikes (rate-code), the time-to-first-spike after stimulation, or the interval between spikes.
Many multi-layer artificial neural networks arefully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs, they do not match biological networks and do not mimic neurons.[citation needed]
The biology-inspiredHodgkin–Huxley modelof a spiking neuron was proposed in 1952. This model described howaction potentialsare initiated and propagated. Communication between neurons, which requires the exchange of chemicalneurotransmittersin thesynapticgap, is described in models such as theintegrate-and-firemodel,FitzHugh–Nagumo model(1961–1962), andHindmarsh–Rose model(1984). The leaky integrate-and-fire model (or a derivative) is commonly used as it is easier to compute than Hodgkin–Huxley.[7]
While the notion of an artificial spiking neural network became popular only in the twenty-first century,[8][9][10]studies between 1980 and 1995 supported the concept. The first models of this type of ANN appeared to simulate non-algorithmic intelligent information processing systems.[11][12][13]However, the notion of the spiking neural network as a mathematical model was first worked on in the early 1970s.[14]
As of 2019, SNNs lagged behind ANNs in accuracy, but the gap was decreasing and had vanished on some tasks.[15]
Information in the brain is represented as action potentials (neuron spikes), which may group into spike trains or coordinated waves. A fundamental question of neuroscience is to determine whether neurons communicate by arate or temporal code.[16]Temporal codingimplies that a single spiking neuron can replace hundreds of hidden units on a conventional neural net.[1]
SNNs define a neuron's current state as its potential (possibly modeled as adifferential equation).[17]An input pulse causes the potential to rise and then gradually decline. Encoding schemes can interpret these pulse sequences as a number, considering pulse frequency and pulse interval.[18]Using the precise time of pulse occurrence, a neural network can consider more information and offer better computing properties.[19]
SNNs compute in the continuous domain. Such neurons test for activation only when their potentials reach a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, accordingly raising or lowering their potentials.
The SNN approach produces a continuous output instead of the binary output of traditional ANNs. Pulse trains are not easily interpretable, hence the need for encoding schemes. However, a pulse train representation may be more suited for processing spatiotemporal data (or real-world sensory data classification).[20]SNNs connect neurons only to nearby neurons so that they process input blocks separately (similar toCNNusing filters). They consider time by encoding information as pulse trains so as not to lose information. This avoids the complexity of arecurrent neural network(RNN). Impulse neurons are more powerful computational units than traditional artificial neurons.[21]
SNNs are theoretically more powerful than so called "second-generation networks" defined as ANNs "based on computational units that apply activation function with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs"; however, SNN training issues and hardware requirements limit their use. Although unsupervised biologically inspired learning methods are available such asHebbian learningandSTDP, no effective supervised training method is suitable for SNNs that can provide better performance than second-generation networks.[21]Spike-based activation of SNNs is not differentiable, thusgradient descent-basedbackpropagation(BP) is not available.
SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs.[22]
Pulse-coupled neural networks(PCNN) are often confused with SNNs. A PCNN can be seen as a kind of SNN.
Researchers are actively working on various topics. The first concerns differentiability. The expressions for both the forward- and backward-learning methods contain the derivative of the neural activation function, which is not differentiable because a neuron's output is either 1 (when it spikes) or 0 (otherwise). This all-or-nothing behavior disrupts gradients and makes these neurons unsuitable for gradient-based optimization. Approaches to resolving it include:
The second concerns the optimization algorithm. Standard BP can be expensive in terms of computation, memory, and communication and may be poorly suited to the hardware that implements it (e.g., a computer, brain, or neuromorphic device).[23]
Incorporating additional neuron dynamics such as spike frequency adaptation (SFA) is a notable advance, enhancing efficiency and computational power.[6][24] These neurons sit between biological complexity and computational complexity.[25] Originating from biological insights, SFA offers significant computational benefits by reducing power usage,[26] especially in cases of repetitive or intense stimuli. This adaptation improves signal/noise clarity and introduces an elementary short-term memory at the neuron level, which in turn improves accuracy and efficiency.[27] This was mostly achieved using compartmental neuron models; simpler versions, neuron models with adaptive thresholds, are an indirect way of achieving SFA. SFA equips SNNs with improved learning capabilities, even with constrained synaptic plasticity, and elevates computational efficiency.[28][29] This feature lessens the demand on network layers by decreasing the need for spike processing, thus lowering computational load and memory access time, both essential aspects of neural computation. Moreover, SNNs utilizing neurons capable of SFA achieve levels of accuracy that rival those of conventional ANNs,[30][31] while requiring fewer neurons for comparable tasks. This efficiency streamlines the computational workflow and conserves space and energy while maintaining technical integrity. High-performance deep spiking neural networks can operate with as few as 0.3 spikes per neuron.[32]
SNNs can in principle be applied to the same applications as traditional ANNs.[33]In addition, SNNs can model thecentral nervous systemof biological organisms, such as an insect seeking food without prior knowledge of the environment.[34]Due to their relative realism, they can be used to studybiological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function,recordingsof this circuit can be compared to the output of a corresponding SNN, evaluating the plausibility of the hypothesis. SNNs lack effective training mechanisms, which can complicate some applications, including computer vision.
When using SNNs for image based data, the images need to be converted into binary spike trains.[35]Types of encodings include:[36]
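One commonly used scheme is rate (Poisson) coding, where each pixel's intensity sets the per-step firing probability of one input neuron. The sketch below is illustrative only; the function name and parameters are assumptions, not from the cited sources:

```python
import numpy as np

def poisson_encode(image, n_steps=100, max_rate=0.5, rng=None):
    """Convert pixel intensities to binary spike trains of shape (T, H, W):
    brighter pixels get a higher per-step spike probability."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = image / image.max() * max_rate       # per-step spike probability
    return rng.random((n_steps, *image.shape)) < p

image = np.arange(1, 5, dtype=float).reshape(2, 2)  # toy 2x2 "image"
spikes = poisson_encode(image, n_steps=1000)
rates = spikes.mean(axis=0)                 # empirical firing rates
# The brightest pixel spikes more often than the dimmest, so intensity
# information is preserved in the spike rates.
assert rates[1, 1] > rates[0, 0]
```

Temporal codes (e.g. time-to-first-spike) carry the same information in spike timing instead of spike counts, typically with far fewer spikes.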
A diverse range ofapplication softwarecan simulate SNNs. This software can be classified according to its uses:
These simulate complex neural models. Large networks usually require lengthy processing. Candidates include:[37]
Sutton and Barto proposed that future neuromorphic architectures[40]will comprise billions of nanosynapses, which require a clear understanding of the accompanying physical mechanisms. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated domain reversal. Simulations showed that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towardsunsupervised learning.[41]
Classification capabilities of spiking networks trained according to unsupervised learning methods[42]have been tested on benchmark datasets such as Iris, Wisconsin Breast Cancer or Statlog Landsat dataset.[43][44]Various approaches to information encoding and network design have been used such as a 2-layer feedforward network for data clustering and classification. Based onHopfield(1995) the authors implemented models of local receptive fields combining the properties ofradial basis functionsand spiking neurons to convert input signals having a floating-point representation into a spiking representation.[45][46]
|
https://en.wikipedia.org/wiki/Spiking_neural_network
|
Atensor product network, inartificial neural networks, is a network that exploits the properties oftensorsto modelassociativeconcepts such asvariableassignment.Orthonormal vectorsare chosen to model the ideas (such as variable names and target assignments), and the tensor product of thesevectorsconstruct a network whose mathematical properties allow the user to easily extract the association from it.
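A minimal sketch of this binding-and-unbinding scheme, assuming standard-basis role vectors (variable names) and filler vectors (assigned values), which are trivially orthonormal:

```python
import numpy as np

# Orthonormal "role" vectors for two variables, and "filler" vectors
# for three possible values (standard basis rows are orthonormal).
x, y = np.eye(2)                       # roles: variables x and y
red, blue, green = np.eye(3)           # fillers: candidate values

# Bind each variable/value pair with an outer (tensor) product and
# superimpose the bindings by addition: encodes x := red, y := blue.
memory = np.outer(x, red) + np.outer(y, blue)

# Because the role vectors are orthonormal, contracting the memory
# tensor with a role vector recovers its filler exactly.
assert np.allclose(memory.T @ x, red)
assert np.allclose(memory.T @ y, blue)
```

Orthonormality of the roles is what makes extraction exact: the cross terms vanish under the contraction, so each association can be read out without interference from the others.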
This science article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Tensor_product_network
|
AlexNetis aconvolutional neural networkarchitecture developed for image classification tasks, notably achieving prominence through its performance in theImageNetLarge Scale Visual Recognition Challenge (ILSVRC). It classifies images into 1,000 distinct object categories and is regarded as the first widely recognized application of deep convolutional networks in large-scale visual recognition.
Developed in 2012 byAlex Krizhevskyin collaboration withIlya Sutskeverand his Ph.D. advisorGeoffrey Hintonat theUniversity of Toronto, the model contains 60 million parameters and 650,000neurons.[1]The original paper's primary result was that the depth of the model was essential for its high performance, which was computationally expensive, but made feasible due to the utilization ofgraphics processing units(GPUs) during training.[1]
The three formed team SuperVision and submitted AlexNet in theImageNet Large Scale Visual Recognition Challengeon September 30, 2012.[2]The network achieved a top-5 error of 15.3%, more than 10.8 percentage points better than that of the runner-up.
The architecture influenced a large body of subsequent work in deep learning, especially in applying neural networks to computer vision.
AlexNet contains eightlayers: the first five areconvolutionallayers, some of them followed bymax-poolinglayers, and the last three arefully connected layers. The network, except the last layer, is split into two copies, each run on one GPU.[1]The entire structure can be written as
(CNN → RN → MP)² → (CNN³ → MP) → (FC → DO)² → Linear → softmax
where CNN = convolutional layer (with ReLU activation), RN = local response normalization, MP = max-pooling, FC = fully connected layer (with ReLU activation), Linear = fully connected layer (without activation), and DO = dropout.
It used the non-saturatingReLUactivation function, which trained better thantanhandsigmoid.[1]
Because the network did not fit onto a singleNvidiaGTX 5803GB GPU, it was split into two halves, one on each GPU.[1]:Section 3.2
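The quoted figure of 60 million parameters can be checked against the per-layer dimensions reported in the original paper (reproduced below; the exact layer sizes, including the halved channel counts from the two-GPU split, are taken as assumptions here):

```python
# Per-layer parameter shapes: (out_units, inputs_per_unit).
# Convolutions count kernel_h * kernel_w * in_channels; conv2, conv4,
# and conv5 see only half the channels because of the two-GPU split.
layers = {
    "conv1": (96,   11 * 11 * 3),
    "conv2": (256,  5 * 5 * 48),
    "conv3": (384,  3 * 3 * 256),
    "conv4": (384,  3 * 3 * 192),
    "conv5": (256,  3 * 3 * 192),
    "fc6":   (4096, 6 * 6 * 256),   # 9216 inputs from the last pooled map
    "fc7":   (4096, 4096),
    "fc8":   (1000, 4096),
}
total = sum(n_out * n_in + n_out for n_out, n_in in layers.values())  # + biases
print(f"{total:,}")   # about 61 million, matching the quoted figure
```

At 4 bytes per float32 weight, that is roughly 240 MB of parameters before counting activations and optimizer state, which is why a single 3 GB GPU of the era was a tight fit for training.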
TheImageNettraining setcontained 1.2 million images. The model was trained for 90 epochs over a period of five to six days using two Nvidia GTX 580 GPUs (3GB each).[1]These GPUs have a theoretical performance of 1.581TFLOPSinfloat32and were priced at US$500 upon release.[3]Each forward pass of AlexNet required approximately 1.43 GFLOPs.[4]Based on these values, the two GPUs together were theoretically capable of performing over 2,200 forward passes per second under ideal conditions.
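The arithmetic behind that throughput estimate:

```python
tflops_per_gpu = 1.581    # GTX 580 theoretical float32 TFLOPS
gflops_per_pass = 1.43    # approximate cost of one AlexNet forward pass

# Two GPUs at peak throughput, divided by the work per forward pass.
passes_per_sec = 2 * tflops_per_gpu * 1e12 / (gflops_per_pass * 1e9)
print(round(passes_per_sec))   # ~2211, i.e. "over 2,200" as stated
```

Real-world throughput is well below this: the figure assumes every FLOP hits peak utilization, ignoring memory bandwidth, kernel launch overhead, and data loading.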
AlexNet was trained withmomentum gradient descentwith a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. Learning rate started at 10−2and was manually decreased 10-fold whenever validation error appeared to stop decreasing. It was reduced three times during training, ending at 10−5.
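A sketch of this momentum update with the hyperparameters quoted above (the update rule follows the form given in the original paper; the toy quadratic objective is an assumption for demonstration):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=1e-2, momentum=0.9, weight_decay=5e-4):
    """One step of the paper's rule:
    v <- momentum*v - weight_decay*lr*w - lr*grad ;  w <- w + v"""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v

# Minimize f(w) = 0.5 * ||w||^2 (so grad = w) from a random start.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
v = np.zeros_like(w)
for _ in range(500):
    w, v = sgd_momentum_step(w, v, grad=w)
assert np.linalg.norm(w) < 1e-3   # converges toward the minimum at 0
```

Note that weight decay enters the velocity update scaled by the learning rate, so lowering the learning rate during training also weakens the decay, one reason the manual 10-fold schedule mattered.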
It used two forms ofdata augmentation, both computed on the fly on the CPU, thus "computationally free":
It usedlocal response normalization, anddropout regularizationwith drop probability 0.5.
All weights were initialized as Gaussians with 0 mean and 0.01 standard deviation. Biases in convolutional layers 2, 4, and 5, and in all fully-connected layers, were initialized to the constant 1 to avoid the dying ReLU problem.
In 1980,Kunihiko Fukushimaproposed an early CNN namedneocognitron.[5][6]It was trained by anunsupervised learningalgorithm. TheLeNet-5(Yann LeCunet al., 1989)[7][8]was trained by supervised learning withbackpropagationalgorithm, with an architecture that is essentially the same as AlexNet on a small scale.
Max poolingwas used in 1990 for speech processing (essentially a 1-dimensional CNN),[9]and for image processing, was first used in the Cresceptron of 1992.[10]
During the 2000s, asGPUhardware improved, some researchers adapted these forgeneral-purpose computing, including neural network training. (K. Chellapilla et al., 2006) trained a CNN on GPU that was 4 times faster than an equivalent CPU implementation.[11](Raina et al 2009) trained adeep belief networkwith 100 million parameters on an Nvidia GeForceGTX 280at up to 70 times speedup over CPUs.[12]A deep CNN of (Dan Cireșanet al., 2011) atIDSIAwas 60 times faster than an equivalent CPU implementation.[13]Between May 15, 2011, and September 10, 2012, their CNN won four image competitions and achieved SOTA for multiple imagedatabases.[14][15][16]According to the AlexNet paper,[1]Cireșan's earlier net is "somewhat similar." Both were written withCUDAto run onGPU.
During the 1990–2010 period, neural networks were not better than other machine learning methods likekernel regression,support vector machines,AdaBoost, structured estimation,[17]among others. For computer vision in particular, much progress came from manualfeature engineering, such asSIFTfeatures,SURFfeatures,HoGfeatures,bags of visual words, etc. It was a minority position in computer vision that features can be learned directly from data, a position which became dominant after AlexNet.[18]
In 2011,Geoffrey Hintonstarted reaching out to colleagues about "What do I have to do to convince you that neural networks are the future?", andJitendra Malik, a sceptic of neural networks, recommended the PASCAL Visual Object Classes challenge. Hinton said its dataset was too small, so Malik recommended to him the ImageNet challenge.[19]
TheImageNetdataset, which became central to AlexNet’s success, was created byFei-Fei Liand her collaborators beginning in 2007. Aiming to advance visual recognition through large-scale data, Li built a dataset far larger than earlier efforts, ultimately containing over 14 million labeled images across 22,000 categories. The images were labeled usingAmazon Mechanical Turkand organized via theWordNethierarchy. Initially met with skepticism, ImageNet later became the foundation of theImageNet Large Scale Visual Recognition Challenge(ILSVRC) and a key resource in the rise of deep learning.[20]
Sutskever and Krizhevsky were both graduate students. Before 2011, Krizhevsky had already writtencuda-convnetto train small CNNs onCIFAR-10with a single GPU. Sutskever convinced Krizhevsky, who could doGPGPUwell, to train a CNN on ImageNet, with Hinton serving as principal investigator. So Krizhevsky extendedcuda-convnetfor multi-GPU training. AlexNet was trained on 2 Nvidia GTX 580 in Krizhevsky's bedroom at his parents' house. Over 2012, Krizhevsky tinkered with the network hyperparameters until it won theImageNet competition in 2012. Hinton commented that, "Ilya thought we should do it, Alex made it work, and I got the Nobel Prize".[21]At the 2012European Conference on Computer Vision, following AlexNet’s win, researcherYann LeCundescribed the model as “an unequivocal turning point in the history of computer vision".[20]
AlexNet’s success in 2012 was enabled by the convergence of three developments that had matured over the previous decade: large-scale labeled datasets, general-purposeGPU computing, and improved training methods for deep neural networks. The availability of ImageNet provided the data necessary for training deep models on a broad range of object categories. Advances in GPU programming throughNvidia’sCUDAplatform enabled practical training of large models. Together with algorithmic improvements, these factors enabled AlexNet to achieve high performance on large-scale visual recognition benchmarks.[20]Reflecting on its significance over a decade later, Fei-Fei Li stated in a 2024 interview: “That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time”.[20]
While AlexNet and LeNet share essentially the same design and algorithm, AlexNet is much larger than LeNet and was trained on a much larger dataset on much faster hardware. Over the period of 20 years, both data and compute became cheaply available.[18]
AlexNet is highly influential, resulting in much subsequent work in using CNNs for computer vision and using GPUs to accelerate deep learning. As of early 2025, the AlexNet paper has been cited over 172,000 times according to Google Scholar.[22]
At the time of publication, there was no framework available for GPU-based neural network training and inference. The codebase for AlexNet was released under a BSD license, and had been commonly used in neural network research for several subsequent years.[23][18]
In one direction, subsequent works aimed to train increasingly deep CNNs that achieve increasingly higher performance on ImageNet. In this line of research areGoogLeNet(2014),VGGNet(2014),Highway network(2015), andResNet(2015). Another direction aimed to reproduce the performance of AlexNet at a lower cost. In this line of research areSqueezeNet(2016),MobileNet(2017),EfficientNet(2019).
Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky formed DNNResearch soon afterwards and sold the company, and the AlexNet source code along with it, to Google. There have been improvements and reimplementations of AlexNet, but the original version, as it was at the time of its 2012 ImageNet win, has been released under the BSD-2 license via the Computer History Museum.[24]
https://en.wikipedia.org/wiki/AlexNet
A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio.[1] Convolution-based networks are the de facto standard in deep learning-based approaches to computer vision[2] and image processing, and have only recently been replaced, in some cases, by newer deep learning architectures such as the transformer.
Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by the regularization that comes from using shared weights over fewer connections.[3][4] For example, for each neuron in a fully connected layer, 10,000 weights would be required to process an image sized 100 × 100 pixels. However, with cascaded convolution (or cross-correlation) kernels,[5][6] only 25 weights per convolutional layer are required to process 5 × 5 tiles.[7][8] Higher-layer features are extracted from wider context windows than lower-layer features.
Some applications of CNNs include:
CNNs are also known as shift invariant or space invariant artificial neural networks, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps.[14][15] Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.[16]
Feedforward neural networks are usually fully connected networks; that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.). Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly populated set.[17]
Convolutional networks were inspired by biological processes[18][19][20][21] in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This simplifies and automates the process, enhancing efficiency and scalability by overcoming human-intervention bottlenecks.
A convolutional neural network consists of an input layer, hidden layers and an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs a dot product of the convolution kernel with the layer's input matrix. This product is usually the Frobenius inner product, and its activation function is commonly ReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. This is followed by other layers such as pooling layers, fully connected layers, and normalization layers.
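As a minimal, illustrative sketch (the function name `conv2d` and the example filter are ours, not from any particular library), the core operation of a convolutional layer is a sliding dot product of a small kernel with each receptive field of the input, followed by an activation such as ReLU:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation (what deep learning frameworks call 'convolution')."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with one receptive field of the input
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])        # simple diagonal-difference filter
feature_map = conv2d(image, kernel)     # shape (4, 4)
activation = np.maximum(feature_map, 0.0)  # ReLU applied to the feature map
print(feature_map.shape)
```

Real frameworks implement the same idea with many filters at once and vectorized inner loops; this sketch shows only the single-channel, single-filter case.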
It is worth noting how closely a convolutional neural network resembles a matched filter.[22]
In a CNN, the input is atensorwith shape:
(number of inputs) × (input height) × (input width) × (inputchannels)
After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape:
(number of inputs) × (feature map height) × (feature map width) × (feature mapchannels).
Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus.[23]Each convolutional neuron processes data only for itsreceptive field.
Although fully connected feedforward neural networks can be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper.[7] For example, using a 5 × 5 tiling region, each tile with the same shared weights, requires only 25 learnable weights. Using shared weights means there are far fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen during backpropagation in earlier neural networks.[3][4]
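The parameter counts above can be checked with simple arithmetic (bias terms omitted, single channel assumed):

```python
# Weights for a 100 x 100 single-channel image
fc_weights_per_neuron = 100 * 100   # fully connected: one weight per input pixel
conv_weights_per_filter = 5 * 5     # one 5 x 5 kernel shared across all positions

print(fc_weights_per_neuron)    # 10000
print(conv_weights_per_filter)  # 25
```

The shared 5 × 5 kernel is applied at every spatial position, so the 400× reduction in weights does not reduce the spatial coverage of the layer.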
To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers,[24] which are based on a depthwise convolution followed by a pointwise convolution. The depthwise convolution is a spatial convolution applied independently over each channel of the input tensor, while the pointwise convolution is a standard convolution restricted to the use of 1×1{\displaystyle 1\times 1} kernels.
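The savings can be quantified by comparing weight counts; a sketch (helper function names are ours, biases omitted):

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard conv layer: one k x k x c_in filter per output channel."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k spatial filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

# Example: 64 input channels, 128 output channels, 3 x 3 kernels
print(standard_conv_params(64, 128, 3))        # 73728
print(depthwise_separable_params(64, 128, 3))  # 8768
```

In this example the separable factorization uses roughly an eighth of the weights, which is the source of its speed advantage.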
Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters; tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map.[25][26] There are two common types of pooling in popular use: max and average. Max pooling uses the maximum value of each local cluster of neurons in the feature map,[27][28] while average pooling takes the average value.
Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditionalmultilayer perceptronneural network (MLP). The flattened matrix goes through a fully connected layer to classify the images.
In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. Typically the area is a square (e.g. 5 by 5 neurons). In a fully connected layer, by contrast, the receptive field is the entire previous layer. Thus, in each convolutional layer, each neuron takes input from a larger area in the input than previous layers. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers.
To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution[29][30]expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios,[31]thus having a variable receptive field size.
Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights.
The vectors of weights and biases are calledfiltersand represent particularfeaturesof the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces thememory footprintbecause a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting.[32]
A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers.[33]
A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix.[34]
An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is[x]↦[xxxx]{\displaystyle [x]\mapsto {\begin{bmatrix}x&x\\x&x\end{bmatrix}}}.
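The 2-by-2 max-unpooling described above, in which each entry is copied into a 2 × 2 block, can be expressed as a Kronecker product with a matrix of ones (an illustrative sketch; the function name is ours):

```python
import numpy as np

def max_unpool_2x2(x):
    """Copy each entry of x into a 2 x 2 block, doubling height and width."""
    return np.kron(x, np.ones((2, 2)))

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(max_unpool_2x2(x))  # 4 x 4 array of 2 x 2 constant blocks
```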
Deconvolution layers are used in image generators. By default, they create periodic checkerboard artifacts, which can be fixed by upscaling and then convolving.[35]
CNNs are often compared to the way the brain achieves vision processing in living organisms.[36]
Work byHubelandWieselin the 1950s and 1960s showed that catvisual corticescontain neurons that individually respond to small regions of thevisual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as itsreceptive field.[37]Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space.[citation needed]The cortex in each hemisphere represents the contralateralvisual field.[citation needed]
Their 1968 paper identified two basic visual cell types in the brain:[19]
Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.[38][37]
In 1969,Kunihiko Fukushimaintroduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. In the same paper, Fukushima also introduced theReLU(rectified linear unit)activation function.[39][40]
The "neocognitron"[18]was introduced by Fukushima in 1980.[20][28][41]The neocognitron introduced the two basic types of layers:
Severalsupervisedandunsupervised learningalgorithms have been proposed over the decades to train the weights of a neocognitron.[18]Today, however, the CNN architecture is usually trained throughbackpropagation.
Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs anddeep neural networksin general.[42]
The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the firstConference on Neural Information Processing Systemsin 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to thesignal-processing concept of a filter, and demonstrated it on a speech recognition task.[8]They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t).").[8]Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here.
Thetime delay neural network(TDNN) was introduced in 1987 byAlex Waibelet al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance.[43]A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It is the first CNN utilizing weight sharing in combination with a training by gradient descent, usingbackpropagation.[44]Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one.[43]
TDNNs are convolutional networks that share weights along the temporal dimension.[45]They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution.[46]Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron.
TDNNs improved the performance of far-distance speech recognition.[47]
Denker et al. (1989) designed a 2-D CNN system to recognize hand-writtenZIP Codenumbers.[48]However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed.[49]
Following the advances in the training of 1-D CNNs by Waibel et al. (1987),Yann LeCunet al. (1989)[49]used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types.
Wei Zhang et al. (1988)[14][15]used back-propagation to train the convolution kernels of a CNN for alphabets recognition. The model was called shift-invariant pattern recognition neural network before the name CNN was coined later in the early 1990s. Wei Zhang et al. also applied the same CNN without the last fully connected layer for medical image object segmentation (1991)[50]and breast cancer detection in mammograms (1994).[51]
This approach became a foundation of moderncomputer vision.
In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system.[27]In their system they used several TDNNs per word, one for eachsyllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification.
In a variant of the neocognitron called thecresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch,[52]introducing this method into the vision field.
Max pooling is often used in modern CNNs.[53]
LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1995,[54] classifies hand-written numbers on checks (British English: cheques) digitized in 32x32 pixel images. Processing higher-resolution images requires larger and deeper convolutional neural networks, so this technique is constrained by the availability of computing resources.
It was superior to other commercial courtesy amount reading systems (as of 1995). The system was integrated into NCR's check reading systems and was fielded in several American banks from June 1996, reading millions of checks per day.[55]
A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988.[14][15]It is a modified Neocognitron by keeping only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991[56]to improve its generalization ability. The model architecture was modified by removing the last fully connected layer and applied for medical image segmentation (1991)[50]and automatic detection of breast cancer inmammograms (1994).[51]
A different convolution-based design was proposed in 1988[57]for application to decomposition of one-dimensionalelectromyographyconvolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs.[58][59]
Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations ongraphics processing units(GPUs).
In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation onCPU.[60]In 2005, another paper also emphasised the value ofGPGPUformachine learning.[61]
The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU.[62]In the same period, GPUs were also used for unsupervised training ofdeep belief networks.[63][64][65][66]
In 2010, Dan Ciresan et al. at IDSIA trained deep feedforward networks on GPUs.[67] In 2011, they extended this to CNNs, achieving a speedup of 60× over CPU training.[25] In 2011, the network won an image recognition contest, achieving superhuman performance for the first time.[68] They then won more competitions and achieved state of the art on several benchmarks.[69][53][28]
Subsequently,AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al. won theImageNet Large Scale Visual Recognition Challenge2012.[70]It was an early catalytic event for theAI boom.
Compared to the training of CNNs on GPUs, CPUs received little attention. Viebke et al. (2019) parallelized CNN training using the thread- and SIMD-level parallelism available on the Intel Xeon Phi.[71][72]
In the past, traditional multilayer perceptron (MLP) models were used for image recognition.[example needed] However, the full connectivity between nodes caused the curse of dimensionality and was computationally intractable with higher-resolution images. A 1000×1000-pixel image with RGB color channels has 3 million weights per fully connected neuron, which is too many to process efficiently at scale.
For example, inCIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights.
Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignoreslocality of referencein data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated byspatially localinput patterns.
Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of avisual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features:
Together, these properties allow CNNs to achieve better generalization onvision problems. Weight sharing dramatically reduces the number offree parameterslearned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks.
A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below.
The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnablefilters(orkernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter isconvolvedacross the width and height of the input volume, computing thedot productbetween the filter entries and the input, producing a 2-dimensionalactivation mapof that filter. As a result, the network learns filters that activate when it detects some specific type offeatureat some spatial position in the input.[75][nb 1]
Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter.
Self-supervised learninghas been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer.[citation needed]
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing asparse local connectivitypattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume.
The extent of this connectivity is ahyperparametercalled thereceptive fieldof the neuron. The connections arelocal in space(along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern.[76]
Threehyperparameterscontrol the size of the output volume of the convolutional layer: the depth,stride, and padding size:
The spatial size of the output volume is a function of the input volume sizeW{\displaystyle W}, the kernel field sizeK{\displaystyle K}of the convolutional layer neurons, the strideS{\displaystyle S}, and the amount of zero paddingP{\displaystyle P}on the border. The number of neurons that "fit" in a given volume is then:(W−K+2P)/S+1{\displaystyle (W-K+2P)/S+1}
If this number is not an integer, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting zero padding to beP=(K−1)/2{\textstyle P=(K-1)/2}when the stride isS=1{\displaystyle S=1}ensures that the input volume and output volume will have the same size spatially. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding.
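As a sketch, the output-size relation (W − K + 2P)/S + 1, including the integer-tiling requirement, can be checked in code (the function name is ours):

```python
def conv_output_size(W, K, S=1, P=0):
    """Spatial output size of a conv layer: (W - K + 2P)/S + 1, if it tiles evenly."""
    size, rem = divmod(W - K + 2 * P, S)
    if rem != 0:
        raise ValueError("kernel does not tile the input symmetrically")
    return size + 1

# 'Same' padding P = (K - 1)/2 with stride 1 preserves the spatial size
print(conv_output_size(W=32, K=5, S=1, P=2))  # 32
print(conv_output_size(W=32, K=5, S=1, P=0))  # 28
```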
A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as adepth slice, the neurons in each depth slice are constrained to use the same weights and bias.
Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as aconvolutionof the neuron's weights with the input volume.[nb 2]Therefore, it is common to refer to the sets of weights as a filter (or akernel), which is convolved with the input. The result of this convolution is anactivation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to thetranslation invarianceof the CNN architecture.[16]
Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".
Another important concept of CNNs is pooling, which is used as a form of non-linear down-sampling. Pooling reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. There are several non-linear functions to implement pooling, of which max pooling and average pooling are the most common. Pooling aggregates information from small regions of the input, partitioning the feature map with a fixed-size window (such as 2x2) moved across the input with a stride (often 2).[78] The stride is what actually causes the downsampling: with a stride of 1, the window would slide one step at a time without reducing the size of the feature map.
Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters,memory footprintand amount of computation in the network, and hence to also controloverfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as aReLU layer) in a CNN architecture.[75]: 460–461While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used.[16][74]The pooling layer commonly operates independently on every depth, or slice, of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:fX,Y(S)=maxa,b=01S2X+a,2Y+b.{\displaystyle f_{X,Y}(S)=\max _{a,b=0}^{1}S_{2X+a,2Y+b}.}In this case, everymax operationis over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well).
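The 2×2, stride-2 max pooling just described can be sketched in NumPy with a reshape trick (illustrative only; assumes even height and width):

```python
import numpy as np

def max_pool_2x2(x):
    """2 x 2 max pooling with stride 2 on an (H, W) feature map; H, W must be even."""
    h, w = x.shape
    # Group rows and columns into 2 x 2 blocks, then take the max of each block
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

s = np.array([[ 1.0,  2.0,  5.0,  6.0],
              [ 3.0,  4.0,  7.0,  8.0],
              [ 9.0, 10.0, 13.0, 14.0],
              [11.0, 12.0, 15.0, 16.0]])
print(max_pool_2x2(s))  # [[ 4.  8.] [12. 16.]]
```

Each output entry is the maximum over one 2 × 2 block, so 75% of the activations are discarded, as described above.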
In addition to max pooling, pooling units can use other functions, such asaveragepooling orℓ2-normpooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice.[79]
Because of the fast spatial reduction of the size of the representation,[which?] there is a recent trend towards using smaller filters[80] or discarding pooling layers altogether.[81]
A channel max pooling (CMP) layer conducts the max pooling operation along the channel dimension, among the corresponding positions of consecutive feature maps, in order to eliminate redundant information. CMP gathers the significant features into fewer channels, which is important for fine-grained image classification that needs more discriminating features. It also reduces the channel count of the feature maps before they connect to the first fully connected (FC) layer. As with spatial max pooling, denote the input and output feature maps of a CMP layer as F ∈ R(C×M×N) and C ∈ R(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, and M and N are the width and height of the feature maps. The CMP operation changes only the channel number of the feature maps; the width and height are unchanged, which differs from spatial max pooling.[82]
See [83][84] for reviews of pooling methods.
ReLU is the abbreviation of rectified linear unit. It was proposed by Alston Householder in 1941,[85] and used in CNNs by Kunihiko Fukushima in 1969.[39] ReLU applies the non-saturating activation functionf(x)=max(0,x){\textstyle f(x)=\max(0,x)}.[70] It effectively removes negative values from an activation map by setting them to zero.[86] It introduces nonlinearity to the decision function and in the overall network without affecting the receptive fields of the convolution layers.
In 2011, Xavier Glorot, Antoine Bordes andYoshua Bengiofound that ReLU enables better training of deeper networks,[87]compared to widely used activation functions prior to 2011.
Other functions can also be used to increase nonlinearity, for example the saturating hyperbolic tangentf(x)=tanh(x){\displaystyle f(x)=\tanh(x)},f(x)=|tanh(x)|{\displaystyle f(x)=|\tanh(x)|}, and the sigmoid functionσ(x)=(1+e−x)−1{\textstyle \sigma (x)=(1+e^{-x})^{-1}}. ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty to generalization accuracy.[88]
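The activation functions above are each a single elementwise formula; a minimal NumPy comparison:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

relu = np.maximum(0.0, x)           # f(x) = max(0, x): non-saturating for x > 0
tanh = np.tanh(x)                   # saturates at -1 and +1
sigmoid = 1.0 / (1.0 + np.exp(-x))  # saturates at 0 and 1

print(relu)  # [0.  0.  0.  0.5 2. ]
```

The saturation of tanh and the sigmoid for large |x| is what slows gradient-based training relative to ReLU.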
After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional)artificial neural networks. Their activations can thus be computed as anaffine transformation, withmatrix multiplicationfollowed by a bias offset (vector additionof a learned or fixed bias term).
The "loss layer", or "loss function", exemplifies howtrainingpenalizes the deviation between the predicted output of the network, and thetruedata labels (during supervised learning). Variousloss functionscan be used, depending on the specific task.
TheSoftmaxloss function is used for predicting a single class ofKmutually exclusive classes.[nb 3]Sigmoidcross-entropyloss is used for predictingKindependent probability values in[0,1]{\displaystyle [0,1]}.Euclideanloss is used forregressingtoreal-valuedlabels(−∞,∞){\displaystyle (-\infty ,\infty )}.
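For the mutually exclusive case, the softmax cross-entropy loss for a single example can be sketched as follows (the function name is ours; the max subtraction is the standard numerical-stability trick):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss for one example over K mutually exclusive classes."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())  # log of softmax
    return -log_probs[label]         # negative log-probability of the true class

logits = np.array([2.0, 1.0, 0.1])
loss = softmax_cross_entropy(logits, label=0)
print(round(loss, 3))  # ~0.417: the true class already has the highest logit
```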
Hyperparameters are various settings that are used to control the learning process. CNNs use morehyperparametersthan a standard multilayer perceptron (MLP).
Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is 1 pixel on each side of the image.[citation needed]
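The padding rule described above can be illustrated with NumPy (a sketch; `np.pad` here plays the role of the zero padding a framework applies inside the layer):

```python
import numpy as np

image = np.ones((4, 4))
# For a 3 x 3 kernel: total pad of 3 - 1 = 2 pixels per dimension,
# i.e. 1 zero-valued pixel on each side of the image
padded = np.pad(image, pad_width=1, mode='constant', constant_values=0)
print(padded.shape)  # (6, 6)
```

With this padding, every original border pixel participates in as many receptive fields as an interior pixel does.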
The stride is the number of pixels that the analysis window moves on each iteration. A stride of 2 means that each kernel is offset by 2 pixels from its predecessor.
Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of feature values va with pixel position is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next.
The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity.
Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples,AlexNetused 3x3, 5x5, and 11x11.Inceptionv3used 1x1, 3x3, and 5x5.
The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and withoutoverfitting.
Max poolingis typically used, often with a 2x2 dimension. This implies that the input is drasticallydownsampled, reducing processing cost.
Greater poolingreduces the dimensionof the signal, and may result in unacceptableinformation loss. Often, non-overlapping pooling windows perform best.[79]
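A minimal sketch of non-overlapping 2x2 max pooling (assuming even input dimensions) might look like:

```python
def max_pool_2x2(image):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension."""
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

pooled = max_pool_2x2([[1, 2, 5, 6],
                       [3, 4, 7, 8],
                       [0, 0, 1, 0],
                       [0, 9, 0, 2]])
# pooled == [[4, 8], [9, 2]]
```

Each 2x2 window keeps only its largest activation, so the 4x4 input is downsampled to 2x2.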
Dilation involves ignoring pixels within a kernel. This reduces processing memory potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. In general, a dilation of d expands a k×k kernel to span (d(k−1)+1)×(d(k−1)+1) pixels, so a dilation of 3 expands a 3x3 kernel to 7x7.[citation needed]
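A small sketch of the arithmetic, under the standard convention that a dilation of d spaces the kernel taps d pixels apart:

```python
def dilated_positions(kernel_size, dilation):
    """Row/column offsets (0-indexed) sampled by a dilated kernel."""
    return [i * dilation for i in range(kernel_size)]

def effective_size(kernel_size, dilation):
    """Spatial extent of the dilated kernel: d*(k-1) + 1."""
    return dilation * (kernel_size - 1) + 1

# A 3x3 kernel with dilation 2 spans 5x5 but still reads only 9 pixels:
positions = dilated_positions(3, 2)  # [0, 2, 4]
span = effective_size(3, 2)          # 5
```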
It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input.[74] However, layers with a stride greater than one ignore the Nyquist–Shannon sampling theorem and might lead to aliasing of the input signal.[74] While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice,[89] and such layers therefore yield models that are not equivariant to translations.
Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input.[90][16]One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer.[74]Additionally, several other partial solutions have been proposed, such asanti-aliasingbefore downsampling operations,[91]spatial transformer networks,[92]data augmentation, subsampling combined with pooling,[16]andcapsule neural networks.[93]
The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such ask-fold cross-validationare applied. Other strategies include usingconformal prediction.[94][95]
Regularizationis a process of introducing additional information to solve anill-posed problemor to preventoverfitting. CNNs use various types of regularization.
Because networks have so many parameters, they are prone to overfitting. One method to reduce overfitting isdropout, introduced in 2014.[96]At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability1−p{\displaystyle 1-p}or kept with probabilityp{\displaystyle p}, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights.
In the training stages,p{\displaystyle p}is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored.
At testing time after training has finished, we would ideally like to find a sample average of all possible2n{\displaystyle 2^{n}}dropped-out networks; unfortunately this is unfeasible for large values ofn{\displaystyle n}. However, we can find an approximation by using the full network with each node's output weighted by a factor ofp{\displaystyle p}, so theexpected valueof the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates2n{\displaystyle 2^{n}}neural nets, and as such allows for model combination, at test time only a single network needs to be tested.
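This train/test asymmetry can be sketched in a few lines (illustrative function names, plain Python):

```python
import random

random.seed(1)

def dropout_train(activations, p):
    """Training stage: keep each node with probability p, drop (zero) it otherwise."""
    return [a if random.random() < p else 0.0 for a in activations]

def dropout_test(activations, p):
    """Test stage: use every node, but scale its output by p so the
    expected value matches the training stages."""
    return [a * p for a in activations]

a = [1.0, 2.0, 3.0, 4.0]
test_out = dropout_test(a, 0.5)   # [0.5, 1.0, 1.5, 2.0]
```

The scaling by p is what lets a single full network approximate the average over all 2^n thinned networks.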
By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even fordeep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features[clarification needed]that better generalize to new data.
DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability1−p{\displaystyle 1-p}. Each unit thus receives input from a random subset of units in the previous layer.[97]
DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage.
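A sketch of the difference from dropout: the random mask is applied to the weights rather than to the layer outputs, so each unit sees a random subset of its inputs.

```python
import random

random.seed(2)

def dropconnect_forward(W, x, p):
    """Drop each connection (weight) independently with probability 1 - p."""
    masked = [[w if random.random() < p else 0.0 for w in row] for row in W]
    # Each output unit now sums over a random subset of the previous layer.
    return [sum(w * xj for w, xj in zip(row, x)) for row in masked]

W = [[0.2, -0.5, 1.0],
     [0.7, 0.1, -0.3]]
y = dropconnect_forward(W, [1.0, 2.0, 3.0], p=0.5)
```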
A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected.
Even before dropout, in 2013 a technique called stochastic pooling was introduced,[98] in which the conventional deterministic pooling operations are replaced with a stochastic procedure: the activation within each pooling region is picked at random according to a multinomial distribution given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation.
An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small localdeformations. This is similar to explicitelastic deformationsof the input images,[99]which delivers excellent performance on theMNIST data set.[99]Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below.
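A minimal sketch of the multinomial draw over one pooling region (assuming non-negative activations, e.g. after a ReLU):

```python
import random

random.seed(3)

def stochastic_pool(region):
    """Pick one activation from the pooling region at random, with
    probability proportional to its (non-negative) value."""
    total = sum(region)
    if total == 0:
        return 0.0
    r = random.uniform(0, total)
    acc = 0.0
    for a in region:
        acc += a
        if r <= acc:
            return a
    return region[-1]

picked = stochastic_pool([1.0, 3.0, 0.0, 4.0])
```

Larger activations are still favored, as in max pooling, but smaller ones have a chance of being selected, which is where the implicit deformation comes from.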
Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new examples. The latter approach has been used since the mid-1990s.[54] For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set.[100]
One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted.
Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm".
A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant('alpha' hyperparameter), thus increasing the penalty for large weight vectors.
L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot.
L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 with L2 regularization can be combined; this is calledelastic net regularization.
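The penalties above can be sketched as plain functions added to the training loss ('alpha' here plays the role of the proportionality constant):

```python
def l2_penalty(weights, alpha):
    """Weight decay: alpha times the squared magnitude of the weights."""
    return alpha * sum(w * w for w in weights)

def l1_penalty(weights, alpha):
    """L1 penalty: alpha times the sum of absolute weights (encourages sparsity)."""
    return alpha * sum(abs(w) for w in weights)

def elastic_net(weights, alpha1, alpha2):
    """Combining L1 with L2 gives elastic net regularization."""
    return l1_penalty(weights, alpha1) + l2_penalty(weights, alpha2)

w = [0.5, -1.0, 2.0]
total_penalty = elastic_net(w, alpha1=0.01, alpha2=0.001)
```

In practice the penalty is added to the data loss before computing gradients, so large weights are pushed toward zero at every update.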
Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and useprojected gradient descentto enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vectorw→{\displaystyle {\vec {w}}}of every neuron to satisfy‖w→‖2<c{\displaystyle \|{\vec {w}}\|_{2}<c}. Typical values ofc{\displaystyle c}are order of 3–4. Some papers report improvements[101]when using this form of regularization.
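A sketch of the constraint step: perform the parameter update as normal, then clamp the weight vector back to norm c if it exceeds it (a projection onto the L2 ball):

```python
import math

def clamp_max_norm(w, c):
    """After the usual update, rescale w so that ||w||_2 <= c."""
    norm = math.sqrt(sum(x * x for x in w))
    if norm <= c:
        return list(w)
    return [x * c / norm for x in w]

w = clamp_max_norm([3.0, 4.0], c=3.0)   # norm 5 -> rescaled to norm 3
```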
Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint.[102]
An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to theretina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.[103]
Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level (e.g. nose and mouth) agree on its prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations modeled as linear operations that make it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the humanvisual systemimposes coordinate frames in order to represent shapes.[104]
CNNs are often used inimage recognitionsystems. In 2012, anerror rateof 0.23% on theMNIST databasewas reported.[28]Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database.[25]Subsequently, a similar CNN calledAlexNet[105]won theImageNet Large Scale Visual Recognition Challenge2012.
When applied tofacial recognition, CNNs achieved a large decrease in error rate.[106]Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects".[21]CNNs were used to assessvideo qualityin an objective way after manual training; the resulting system had a very lowroot mean square error.[107]
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014,[108] a large-scale visual recognition challenge, almost every highly ranked team used CNN as its basic framework. The winner GoogLeNet[109] (the foundation of DeepDream) increased the mean average precision of object detection to 0.439329, and reduced classification error to 0.06656, the best result at the time. Its network applied more than 30 layers. The performance of convolutional neural networks on the ImageNet tests was close to that of humans.[110] The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this well.[citation needed]
In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.[111]
Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space.[112][113]Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.[114][115][116]Long short-term memory(LSTM)recurrentunits are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies.[117][118]Unsupervised learningschemes for training spatio-temporal features have been introduced, based on Convolutional Gated RestrictedBoltzmann Machines[119]and Independent Subspace Analysis.[120]Its application can be seen intext-to-video model.[citation needed]
CNNs have also been explored fornatural language processing. CNN models are effective for various NLP problems and achieved excellent results insemantic parsing,[121]search query retrieval,[122]sentence modeling,[123]classification,[124]prediction[125]and other traditional NLP tasks.[126]Compared to traditional language processing methods such asrecurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suitable when classical time series modeling is required.[127][128][129][130]
A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain.[131]
CNNs have been used indrug discovery. Predicting the interaction between molecules and biologicalproteinscan identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network forstructure-based drug design.[132]The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures,[133]AtomNet discovers chemical features, such asaromaticity,sp3carbons, andhydrogen bonding. Subsequently, AtomNet was used to predict novel candidatebiomoleculesfor multiple disease targets, most notably treatments for theEbola virus[134]andmultiple sclerosis.[135]
CNNs have been used in the game ofcheckers. From 1999 to 2001,Fogeland Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%.[136][137]It also earned a win against the programChinookat its "expert" level of play.[138]
CNNs have been used incomputer Go. In December 2014, Clark andStorkeypublished a paper showing that a CNN trained by supervised learning from a database of human professional games could outperformGNU Goand win some games againstMonte Carlo tree searchFuego 1.1 in a fraction of the time it took Fuego to play.[139]Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a6 danhuman player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of theMonte Carlo tree searchprogram Fuego simulating ten thousand playouts (about a million positions) per move.[140]
A couple of CNNs for choosing moves to try ("policy network") and evaluating positions ("value network") driving MCTS were used byAlphaGo, the first to beat the best human player at the time.[141]
Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better.[142][13]Dilated convolutions[143]might enable one-dimensional convolutional neural networks to effectively learn time series dependences.[144]Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients.[145]Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from.[146]CNNs can also be applied to further tasks in time series analysis (e.g., time series classification[147]or quantile forecasting[148]).
As archaeological findings such as clay tablets with cuneiform writing are increasingly acquired using 3D scanners, benchmark datasets are becoming available, including HeiCuBeDa[149] providing almost 2000 normalized 2-D and 3-D datasets prepared with the GigaMesh Software Framework.[150] Thus, curvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets, which are among the oldest documents of human history.[151][152]
For many applications, little training data is available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged, an additional training step is performed using the in-domain data to fine-tune the network weights; this is known as transfer learning. Furthermore, this technique allows convolutional network architectures to successfully be applied to problems with tiny training sets.[153]
End-to-end training and prediction are common practice in computer vision. However, human-interpretable explanations are required for critical systems such as self-driving cars.[154] With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions/temporal instants can be visualized to justify the CNN predictions.[155][156]
A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network withQ-learning, a form ofreinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.[157]
Preliminary results were presented in 2014, with an accompanying paper in February 2015.[158]The research described an application toAtari 2600gaming. Other deep reinforcement learning models preceded it.[159]
Convolutional deep belief networks(CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training likedeep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR[160]have been obtained using CDBNs.[161]
The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid[162]by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks.
https://en.wikipedia.org/wiki/Convolutional_neural_network#Dropout
Hierarchicalclassificationis a system of grouping things according to a hierarchy.[1]
In the field ofmachine learning, hierarchical classification is sometimes referred to asinstance space decomposition,[2]which splits a completemulti-classproblem into a set of smaller classification problems.
https://en.wikipedia.org/wiki/Hierarchical_classification
The Leiden algorithm is a community detection algorithm developed by Traag et al.[1] at Leiden University. It was developed as a modification of the Louvain method. Like the Louvain method, the Leiden algorithm attempts to optimize modularity in extracting communities from networks; however, it addresses key issues present in the Louvain method, namely poorly connected communities and the resolution limit of modularity.
Broadly, the Leiden algorithm uses the same two primary phases as the Louvain algorithm: a local node moving step (though, the method by which nodes are considered in Leiden is more efficient[1]) and a graph aggregation step. However, to address the issues with poorly-connected communities and the merging of smaller communities into larger communities (the resolution limit of modularity), the Leiden algorithm employs an intermediate refinement phase in which communities may be split to guarantee that all communities are well-connected.
Consider, for example, the following graph:
Three communities are present in this graph (each color represents a community). Additionally, the center "bridge" node (represented with an extra circle) is a member of the community represented by blue nodes. Now consider the result of a node-moving step which merges the communities denoted by red and green nodes into a single community (as the two communities are highly connected):
Notably, the center "bridge" node is now a member of the larger red community after node moving occurs (due to the greedy nature of the local node moving algorithm). In the Louvain method, such a merging would be followed immediately by the graph aggregation phase. However, this causes a disconnection between two different sections of the community represented by blue nodes. In the Leiden algorithm, the graph is instead refined:
The Leiden algorithm's refinement step ensures that the center "bridge" node is kept in the blue community to ensure that it remains intact and connected, despite the potential improvement in modularity from adding the center "bridge" node to the red community.
Before defining theLeiden algorithm, it will be helpful to define some of the components of a graph.
A graph is composed ofvertices (nodes)andedges. Each edge is connected to two vertices, and each vertex may be connected to zero or more edges. Edges are typically represented by straight lines, while nodes are represented by circles or points. In set notation, letV{\displaystyle V}be the set of vertices, andE{\displaystyle E}be the set of edges:
V:={v1,v2,…,vn}E:={eij,eik,…,ekl}{\displaystyle {\begin{aligned}V&:=\{v_{1},v_{2},\dots ,v_{n}\}\\E&:=\{e_{ij},e_{ik},\dots ,e_{kl}\}\end{aligned}}}
whereeij{\displaystyle e_{ij}}is the directed edge from vertexvi{\displaystyle v_{i}}to vertexvj{\displaystyle v_{j}}. We can also write this as an ordered pair:
eij:=(vi,vj){\displaystyle {\begin{aligned}e_{ij}&:=(v_{i},v_{j})\end{aligned}}}
A community is a unique set of nodes:
Ci⊆VCi⋂Cj=∅∀i≠j{\displaystyle {\begin{aligned}C_{i}&\subseteq V\\C_{i}&\bigcap C_{j}=\emptyset ~\forall ~i\neq j\end{aligned}}}
and the union of all communities must be the total set of vertices:
V=⋃i=1nCi{\displaystyle {\begin{aligned}V&=\bigcup _{i=1}^{n}C_{i}\end{aligned}}}
A partition is the set of all communities:
P={C1,C2,…,Cn}{\displaystyle {\begin{aligned}{\mathcal {P}}&=\{C_{1},C_{2},\dots ,C_{n}\}\end{aligned}}}
How communities are partitioned is an integral part of the Leiden algorithm. How partitions are decided can depend on how their quality is measured. Additionally, many of these metrics contain parameters of their own that can change the outcome of their communities.
Modularity is a widely used quality metric for assessing how well a set of communities partitions a graph. The equation for this metric is defined for an adjacency matrix, A, as:[2]
Q=12m∑ij(Aij−kikj2m)δ(ci,cj){\displaystyle Q={\frac {1}{2m}}\sum _{ij}(A_{ij}-{\frac {k_{i}k_{j}}{2m}})\delta (c_{i},c_{j})}
where:m{\displaystyle m}is the number of edges,ki{\displaystyle k_{i}}is the degree of vertexvi{\displaystyle v_{i}},ci{\displaystyle c_{i}}is the community of vertexvi{\displaystyle v_{i}}, and the Kronecker delta is
δ(ci,cj)={1ifciandcjare the same community0otherwise{\displaystyle {\begin{aligned}\delta (c_{i},c_{j})&={\begin{cases}1&{\text{if }}c_{i}{\text{ and }}c_{j}{\text{ are the same community}}\\0&{\text{otherwise}}\end{cases}}\end{aligned}}}
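As a small illustration (not part of the Leiden article itself), modularity can be computed directly from an adjacency matrix and a community label per node; the two-triangle graph below is a hypothetical example.

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected adjacency matrix and a
    community label per node."""
    m2 = sum(sum(row) for row in adj)          # 2m: total degree
    degree = [sum(row) for row in adj]
    n = len(adj)
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degree[i] * degree[j] / m2
    return q / m2

# Two triangles joined by a single edge; each triangle is one community.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
labels = [0, 0, 0, 1, 1, 1]
q = modularity(adj, labels)   # 5/14, about 0.357
```

Putting all six nodes in one community drives Q to zero, which is why this partition is preferred by modularity maximization.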
One of the most widely used metrics for the Leiden algorithm is the Reichardt Bornholdt Potts Model (RB).[3]This model is used by default in most mainstream Leiden algorithm libraries under the nameRBConfigurationVertexPartition.[4][5]This model introduces a resolution parameterγ{\displaystyle \gamma }and is highly similar to the equation for modularity. This model is defined by the following quality function for an adjacency matrix, A, as:[4]
Q=∑ij(Aij−γkikj2m)δ(ci,cj){\displaystyle Q=\sum _{ij}(A_{ij}-\gamma {\frac {k_{i}k_{j}}{2m}})\delta (c_{i},c_{j})}
where the symbols are as in the modularity equation above, withγ{\displaystyle \gamma }the resolution parameter.
Another metric similar to RB is the Constant Potts Model (CPM). This metric also relies on a resolution parameterγ{\displaystyle \gamma }.[6]The quality function is defined as:
H=−∑ij(Aijwij−γ)δ(ci,cj){\displaystyle H=-\sum _{ij}(A_{ij}w_{ij}-\gamma )\delta (c_{i},c_{j})}
Typically Potts models such as RB or CPM include a resolution parameter in their calculation.[3][6]Potts models are introduced as a response to the resolution limit problem that is present in modularity maximization based community detection. The resolution limit problem is that, for some graphs, maximizing modularity may cause substructures of a graph to merge and become a single community and thus smaller structures are lost.[7]These resolution parameters allow modularity adjacent methods to be modified to suit the requirements of the user applying the Leiden algorithm to account for small substructures at a certain granularity.
The figure on the right illustrates why resolution can be a helpful parameter when using modularity based quality metrics. In the first graph, modularity only captures the large scale structures of the graph; however, in the second example, a more granular quality metric could potentially detect all substructures in a graph.
The Leiden algorithm starts with a graph of disorganized nodes(a)and sorts it by partitioning them to maximizemodularity(the difference in quality between the generated partition and a hypothetical randomized partition of communities). The method it uses is similar to the Louvain algorithm, except that after moving each node it also considers that node's neighbors that are not already in the community it was placed in. This process results in our first partition(b), also referred to asP{\displaystyle {\mathcal {P}}}. Then the algorithm refines this partition by first placing each node into its own individual community and then moving them from one community to another to maximize modularity. It does this iteratively until each node has been visited and moved, and each community has been refined - this creates partition(c), which is the initial partition ofPrefined{\displaystyle {\mathcal {P}}_{\text{refined}}}. Then an aggregate network(d)is created by turning each community into a node.Prefined{\displaystyle {\mathcal {P}}_{\text{refined}}}is used as the basis for the aggregate network whileP{\displaystyle {\mathcal {P}}}is used to create its initial partition. Because we use the original partitionP{\displaystyle {\mathcal {P}}}in this step, we must retain it so that it can be used in future iterations. These steps together form the first iteration of the algorithm.
In subsequent iterations, the nodes of the aggregate network (which each represent a community) are once again placed into their own individual communities and then sorted according to modularity to form a newPrefined{\displaystyle {\mathcal {P}}_{\text{refined}}}, forming(e)in the above graphic. In the case depicted by the graph, the nodes were already sorted optimally, so no change took place, resulting in partition(f). Then the nodes of partition(f)would once again be aggregated using the same method as before, with the original partitionP{\displaystyle {\mathcal {P}}}still being retained. This portion of the algorithm repeats until each aggregate node is in its own individual network; this means that no further improvements can be made.
The Leiden algorithm consists of three main steps: local moving of nodes, refinement of the partition, and aggregation of the network based on the refined partition. All of the functions in the following steps are coordinated by the algorithm's main loop. The fast local-moving procedure is borrowed by the authors of Leiden from "A Simple Acceleration Method for the Louvain Algorithm".[8]
Step 1: Local Moving of Nodes
First, we move the nodes fromP{\displaystyle {\mathcal {P}}}into neighboring communities to maximizemodularity(the difference in quality between the generated partition and a hypothetical randomized partition of communities). In the above image, our initial collection of unsorted nodes is represented by the graph on the left, with each node's unique color representing that they do not belong to a community yet. The graph on the right is a representation of this step's result, the sorted graphP{\displaystyle {\mathcal {P}}}; note how the nodes have all been moved into one of three communities, as represented by the nodes' colors (red, blue, and green).
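A simplified sketch of this greedy local-moving phase follows; it omits Leiden's fast queue of nodes to revisit, so it is closer to the Louvain move step that Leiden builds on.

```python
def local_move(adj, labels, gamma=1.0):
    """Greedy local moving: repeatedly move each node to the neighbouring
    community with the largest modularity gain, until no move helps.
    Simplified sketch; the real algorithm revisits only nodes whose
    neighbourhood changed."""
    m2 = sum(sum(row) for row in adj)          # 2m (total degree)
    degree = [sum(row) for row in adj]
    comm_deg = {}                              # total degree per community
    for v, c in enumerate(labels):
        comm_deg[c] = comm_deg.get(c, 0) + degree[v]
    improved = True
    while improved:
        improved = False
        for v in range(len(adj)):
            old = labels[v]
            comm_deg[old] -= degree[v]         # take v out of its community
            links = {}                         # edge weight from v to each neighbouring community
            for u, w in enumerate(adj[v]):
                if w and u != v:
                    links[labels[u]] = links.get(labels[u], 0) + w
            # gain of joining community c: links_to_c - gamma * k_v * deg(c) / 2m
            best = old
            best_gain = links.get(old, 0) - gamma * degree[v] * comm_deg.get(old, 0) / m2
            for c, l in links.items():
                gain = l - gamma * degree[v] * comm_deg.get(c, 0) / m2
                if gain > best_gain:
                    best, best_gain = c, gain
            labels[v] = best
            comm_deg[best] = comm_deg.get(best, 0) + degree[v]
            if best != old:
                improved = True
    return labels

# Two triangles joined by one edge; starting from singleton communities,
# local moving recovers one community per triangle.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
result = local_move(adj, list(range(6)))
```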
Step 2: Refinement of the Partition
Next, each node in the network is assigned to its own individual community, and nodes are then moved from one community to another to maximize modularity. This occurs iteratively until each node has been visited and moved, and is very similar to the creation ofP{\displaystyle {\mathcal {P}}}except that each community is refined after a node is moved. The result is our initial partition forPrefined{\displaystyle {\mathcal {P}}_{\text{refined}}}, as shown on the right. Note that we're also keeping track of the communities fromP{\displaystyle {\mathcal {P}}}, which are represented by the colored backgrounds behind the nodes.
Step 3: Aggregation of the Network
We then convert each community inPrefined{\displaystyle {\mathcal {P}}_{\text{refined}}}into a single node. Note how, as is depicted in the above image, the communities ofP{\displaystyle {\mathcal {P}}}are used to sort these aggregate nodes after their creation.
We repeat these steps until each community contains only one node, with each of these nodes representing an aggregate of nodes from the original network that are strongly connected with each other.
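The local moving step can be sketched in code. The snippet below implements only a greedy version of Step 1 on a small adjacency-dict graph, recomputing modularity from scratch for every candidate move; real Leiden implementations use incremental quality updates and a fast node queue, plus the refinement and aggregation phases, so this is an illustrative sketch rather than the actual algorithm. All function and variable names here are our own.

```python
def modularity(adj, comm):
    """Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [comm_i == comm_j]."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if comm[i] == comm[j]:
                a = 1.0 if j in adj[i] else 0.0
                q += a - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

def local_moving(adj):
    """Greedily move nodes between neighbouring communities while modularity improves."""
    comm = {v: v for v in adj}              # every node starts in its own community
    moved = True
    while moved:
        moved = False
        for v in adj:
            old = comm[v]
            best_c, best_q = old, modularity(adj, comm)
            for c in {comm[u] for u in adj[v]}:
                if c == old:
                    continue
                comm[v] = c                 # try moving v into a neighbour's community
                q = modularity(adj, comm)
                if q > best_q + 1e-12:      # keep only strict improvements
                    best_c, best_q = c, q
            comm[v] = best_c
            if best_c != old:
                moved = True
    return comm

# Two triangles joined by a single edge: the local-moving step alone
# already recovers them as separate communities.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
comm = local_moving(adj)
print(comm)
```

Recomputing Q for every trial move costs O(n²) per evaluation, which is why practical implementations compute the modularity gain of a single move in O(deg(v)) instead.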
The Leiden algorithm produces a high-quality partition which places nodes into distinct communities. However, Leiden creates a hard partition, meaning nodes can belong to only one community. In many networks, such as social networks, nodes may belong to multiple communities, and in this case other methods may be preferred.
Leiden is more efficient than Louvain, but in the case of massive graphs may result in extended processing times. Recent advancements have boosted the speed using a "parallel multicore implementation of the Leiden algorithm".[9]
The Leiden algorithm does much to overcome the resolution limit problem. However, there is still the possibility that small substructures can be missed in certain cases. The selection of the gamma parameter is crucial to ensure that these structures are not missed, as it can vary significantly from one graph to the next.
https://en.wikipedia.org/wiki/Leiden_algorithm
Network scienceis an academic field which studiescomplex networkssuch astelecommunication networks,computer networks,biological networks,cognitiveandsemantic networks, andsocial networks, considering distinct elements or actors represented bynodes(orvertices) and the connections between the elements or actors aslinks(oredges). The field draws on theories and methods includinggraph theoryfrom mathematics,statistical mechanicsfrom physics,data miningandinformation visualizationfrom computer science,inferential modelingfrom statistics, andsocial structurefrom sociology. TheUnited States National Research Councildefines network science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."[1]
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famousSeven Bridges of Königsbergwritten byLeonhard Eulerin 1736. Euler's mathematical description of vertices and edges was the foundation ofgraph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field ofgraph theorycontinued to develop and found applications in chemistry (Sylvester, 1878).
Dénes Kőnig, a Hungarian mathematician and professor, wrote the first book on graph theory, entitled "Theory of finite and infinite graphs", in 1936.[2]
In the 1930sJacob Moreno, a psychologist in theGestalttradition, arrived in the United States. He developed thesociogramand presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that "before the advent of sociometry no one knew what the interpersonal structure of a group 'precisely' looked like".[3]The sociogram was a representation of the social structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls with the exception of one boy who said he liked a single girl. The feeling was not reciprocated. This network representation of social structure was found so intriguing that it was printed inThe New York Times.[4]The sociogram has found many applications and has grown into the field ofsocial network analysis.
Probabilistic theory in network science developed as an offshoot ofgraph theorywithPaul ErdősandAlfréd Rényi's eight famous papers onrandom graphs. Forsocial networkstheexponential random graph modelor p* is a notational framework used to represent the probability space of a tie occurring in asocial network. An alternate approach to network probability structures is thenetwork probability matrix, which models the probability of edges occurring in a network, based on the historic presence or absence of the edge in a sample of networks.
Interest in networks exploded around 2000, following new discoveries that offered a novel mathematical framework to describe different network topologies, leading to the term 'network science'.Albert-László BarabásiandRéka Albertdiscovered thescale-free[5]nature of many real networks, from the WWW to the cell. The scale-free property captures the fact that in real networks hubs coexist with many small-degree vertices, and the authors offered a dynamical model to explain the origin of this scale-free state.[5]Duncan WattsandSteven Strogatzreconciled empirical data on networks with mathematical representation, describing thesmall-world network.[6]
A deterministic network is defined in contrast with a probabilistic network. In un-weighted deterministic networks, edges either exist or they do not: conventionally, 0 represents the non-existence of an edge and 1 its existence. In weighted deterministic networks, the edge value represents the weight of each edge, for example, the strength level.
In probabilistic networks, the value behind each edge represents the likelihood of that edge's existence. For example, if an edge has a value equal to 0.9, we say the existence probability of this edge is 0.9.[7]
Often, networks have certain attributes that can be calculated to analyze the properties and characteristics of the network. The behavior of these network properties often definesnetwork modelsand can be used to analyze how certain models contrast with each other. Many of the definitions for other terms used in network science can be found inGlossary of graph theory.
The size of a network can refer to the number of nodesN{\displaystyle N}or, less commonly, the number of edgesE{\displaystyle E}which (for connected graphs with no multi-edges) can range fromN−1{\displaystyle N-1}(a tree) toEmax{\displaystyle E_{\max }}(a complete graph). In the case of a simple graph (a network in which at most one (undirected) edge exists between each pair of vertices, and in which no vertices connect to themselves), we haveEmax=(N2)=N(N−1)/2{\displaystyle E_{\max }={\tbinom {N}{2}}=N(N-1)/2}; for directed graphs (with no self-connected nodes),Emax=N(N−1){\displaystyle E_{\max }=N(N-1)}; for directed graphs with self-connections allowed,Emax=N2{\displaystyle E_{\max }=N^{2}}. In the circumstance of a graph within which multiple edges may exist between a pair of vertices,Emax=∞{\displaystyle E_{\max }=\infty }.
The densityD{\displaystyle D}of a network is defined as a normalized ratio between 0 and 1 of the number of edgesE{\displaystyle E}to the number of possible edges in a network withN{\displaystyle N}nodes. Network density is a measure of the percentage of "optional" edges that exist in the network and can be computed asD=E−EminEmax−Emin{\displaystyle D={\frac {E-E_{\mathrm {min} }}{E_{\mathrm {max} }-E_{\mathrm {min} }}}}whereEmin{\displaystyle E_{\mathrm {min} }}andEmax{\displaystyle E_{\mathrm {max} }}are the minimum and maximum number of edges in a connected network withN{\displaystyle N}nodes, respectively. In the case of simple graphs,Emax{\displaystyle E_{\mathrm {max} }}is given by thebinomial coefficient(N2){\displaystyle {\tbinom {N}{2}}}andEmin=N−1{\displaystyle E_{\mathrm {min} }=N-1}, giving densityD=E−(N−1)Emax−(N−1)=2(E−N+1)N(N−3)+2{\displaystyle D={\frac {E-(N-1)}{E_{\mathrm {max} }-(N-1)}}={\frac {2(E-N+1)}{N(N-3)+2}}}.
Another possible equation isD=T−2N+2N(N−3)+2,{\displaystyle D={\frac {T-2N+2}{N(N-3)+2}},}where the tiesT{\displaystyle T}are unidirectional (Wasserman & Faust 1994).[8]This gives a better overview of the network density, because unidirectional relationships can be measured.
The densityD{\displaystyle D}of a planar network, one that can be drawn so that no edges intersect, is defined as a ratio of the number of edgesE{\displaystyle E}to the number of possible edges in a network withN{\displaystyle N}nodes, given by a graph with no intersecting edges(Emax=3N−6){\displaystyle (E_{\max }=3N-6)}, givingD=E−N+12N−5.{\displaystyle D={\frac {E-N+1}{2N-5}}.}
Thedegreek{\displaystyle k}of a node is the number of edges connected to it. Closely related to the density of a network is the average degree,⟨k⟩=2EN{\displaystyle \langle k\rangle ={\tfrac {2E}{N}}}(or, in the case of directed graphs,⟨k⟩=EN{\displaystyle \langle k\rangle ={\tfrac {E}{N}}}, the former factor of 2 arising from each edge in an undirected graph contributing to the degree of two distinct vertices). In theER random graph model(G(N,p){\displaystyle G(N,p)}) we can compute the expected value of⟨k⟩{\displaystyle \langle k\rangle }(equal to the expected value ofk{\displaystyle k}of an arbitrary vertex): a random vertex hasN−1{\displaystyle N-1}other vertices in the network available, and with probabilityp{\displaystyle p}, connects to each. Thus,E[⟨k⟩]=E[k]=p(N−1){\displaystyle \mathbb {E} [\langle k\rangle ]=\mathbb {E} [k]=p(N-1)}.
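The density and average-degree formulas above can be checked numerically. A minimal sketch in plain Python; the example graph is our own:

```python
# A triangle (0-1-2) plus a pendant edge (2-3): N = 4 nodes, E = 4 edges.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4
E = len(edges)

E_max = N * (N - 1) // 2                  # complete simple graph: C(N, 2)
E_min = N - 1                             # spanning tree
density = (E - E_min) / (E_max - E_min)   # fraction of "optional" edges present

avg_degree = 2 * E / N                    # each edge contributes to two degrees

print(E_max, density, avg_degree)         # 6, 0.333..., 2.0
```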
Degree distribution
The degree distributionP(k){\displaystyle P(k)}is a fundamental property of both real networks, such as theInternetandsocial networks, and of theoretical models. The degree distributionP(k) of a network is defined to be the fraction of nodes in the network with degreek. The simplest network model, for example, the (Erdős–Rényi model)random graph, in which each ofnnodes is independently connected (or not) with probabilityp(or 1 −p), has abinomial distributionof degreesk(orPoissonin the limit of largen). Most real networks, from the WWW to theprotein interaction networks, however, have a degree distribution that is highlyright-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. For suchscale-free networksthe degree distribution approximately follows apower law:P(k)∼k−γ{\displaystyle P(k)\sim k^{-\gamma }}, whereγ, the degree exponent, is a constant. Suchscale-free networkshave unexpected structural and dynamical properties, rooted in the diverging second moment of the degree distribution.[9][10][11][12]
The average shortest path length is calculated by finding theshortest pathbetween all pairs of nodes, and taking the average over all paths of the length thereof (the length being the number of intermediate edges contained in the path, i.e., the distancedu,v{\displaystyle d_{u,v}}between the two verticesu,v{\displaystyle u,v}within the graph). This shows us, on average, the number of steps it takes to get from one member of the network to another. The behavior of the expected average shortest path length (that is, the ensemble average of the average shortest path length) as a function of the number of verticesN{\displaystyle N}of a random network model defines whether that model exhibits the small-world effect; if it scales asO(lnN){\displaystyle O(\ln N)}, the model generates small-world nets. For faster-than-logarithmic growth, the model does not produce small worlds. The special case ofO(lnlnN){\displaystyle O(\ln \ln N)}is known as ultra-small world effect.
As another means of measuring network graphs, we can define the diameter of a network as the longest of all the calculated shortest paths in a network. It is the shortest distance between the two most distant nodes in the network. In other words, once the shortest path length from every node to all other nodes is calculated, the diameter is the longest of all the calculated path lengths. The diameter is representative of the linear size of a network. If nodes A-B-C-D are connected in a path, going from A to D gives a diameter of 3 (3 hops, 3 links).[citation needed]
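The diameter computation can be sketched with breadth-first search; helper names and the example path are our own:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances (in hops) from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    """Longest shortest path over all pairs (assumes a connected graph)."""
    return max(max(bfs_dist(adj, s).values()) for s in adj)

# The path A-B-C-D from the text: its diameter is 3.
path = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(diameter(path))   # 3
```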
The clustering coefficient is a measure of an "all-my-friends-know-each-other" property. This is sometimes described as the friends of my friends are my friends. More precisely, the clustering coefficient of a node is the ratio of existing links connecting a node's neighbors to each other to the maximum possible number of such links. The clustering coefficient for the entire network is the average of the clustering coefficients of all the nodes. A high clustering coefficient for a network is another indication of asmall world.
The clustering coefficient of thei{\displaystyle i}'th node isCi=2eiki(ki−1){\displaystyle C_{i}={\frac {2e_{i}}{k_{i}(k_{i}-1)}}}
whereki{\displaystyle k_{i}}is the number of neighbours of thei{\displaystyle i}'th node, andei{\displaystyle e_{i}}is the number of connections between these neighbours. The maximum possible number of connections between neighbors is, then,ki(ki−1)/2{\displaystyle k_{i}(k_{i}-1)/2}.
From a probabilistic standpoint, the expected local clustering coefficient is the likelihood of a link existing between two arbitrary neighbors of the same node.
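A direct translation of this definition into code; the example graph is our own:

```python
def clustering(adj, i):
    """Local clustering coefficient: existing links among the neighbours
    of node i, divided by the maximum possible k_i*(k_i-1)/2."""
    nbrs = list(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0                       # undefined for degree < 2; report 0
    e = sum(1 for a in range(k) for b in range(a + 1, k)
            if nbrs[b] in adj[nbrs[a]])  # count links among neighbours
    return 2 * e / (k * (k - 1))

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj, 0))  # 1.0: node 0's two neighbours are linked
print(clustering(adj, 2))  # 0.333...: only the 0-1 link among 3 neighbours
```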
The way in which a network is connected plays a large part in how networks are analyzed and interpreted. Networks are classified into four different categories:
Centrality indices produce rankings which seek to identify the most important nodes in a network model. Different centrality indices encode different contexts for the word "importance." Thebetweenness centrality, for example, considers a node highly important if it forms bridges between many other nodes. Theeigenvalue centrality, in contrast, considers a node highly important if many other highly important nodes link to it. Hundreds of such measures have been proposed in the literature.
Centrality indices are only accurate for identifying the most important nodes. The measures are seldom, if ever, meaningful for the remainder of network nodes.[13][14]Also, their indications are only accurate within their assumed context for importance, and tend to "get it wrong" for other contexts.[15]For example, imagine two separate communities whose only link is an edge between the most junior member of each community. Since any transfer from one community to the other must go over this link, the two junior members will have high betweenness centrality. But, since they are junior, (presumably) they have few connections to the "important" nodes in their community, meaning their eigenvalue centrality would be quite low.
Limitations to centrality measures have led to the development of more general measures.
Two examples are
theaccessibility, which uses the diversity of random walks to measure how accessible the rest of the network is from a given start node,[16]and theexpected force, derived from the expected value of theforce of infectiongenerated by a node.[13]Both of these measures can be meaningfully computed from the structure of the network alone.
Nodes in a network may be partitioned into groups representing communities. Depending on the context, communities may be distinct or overlapping. Typically, nodes in such communities will be strongly connected to other nodes in the same community, but weakly connected to nodes outside the community. In the absence of aground truthdescribing thecommunity structureof a specific network, several algorithms have been developed to infer possible community structures using either supervised or unsupervised clustering methods.
Network models serve as a foundation to understanding interactions within empirical complex networks. Variousrandom graphgeneration models produce network structures that may be used in comparison to real-world complex networks.
TheErdős–Rényi model, named forPaul ErdősandAlfréd Rényi, is used for generatingrandom graphsin which edges are set between nodes with equal probabilities. It can be used in theprobabilistic methodto prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
To generate an Erdős–Rényi modelG(n,p){\displaystyle G(n,p)}two parameters must be specified: the total number of nodesnand the probabilitypthat a random pair of nodes has an edge.
Because the model is generated without bias to particular nodes, the degree distribution is binomial: for a randomly chosen vertexv{\displaystyle v},P(deg⁡(v)=k)=(n−1k)pk(1−p)n−1−k{\displaystyle P(\deg(v)=k)={\binom {n-1}{k}}p^{k}(1-p)^{n-1-k}}.
In this model the clustering coefficient is 0 almost surely. The behavior ofG(n,p){\displaystyle G(n,p)}can be broken into three regions.
Subcriticalnp<1{\displaystyle np<1}: All components are simple and very small, the largest component has size|C1|=O(logn){\displaystyle |C_{1}|=O(\log n)};
Criticalnp=1{\displaystyle np=1}:|C1|=O(n23){\displaystyle |C_{1}|=O(n^{\frac {2}{3}})};
Supercriticalnp>1{\displaystyle np>1}:|C1|≈yn{\displaystyle |C_{1}|\approx yn}wherey=y(np){\displaystyle y=y(np)}is the positive solution to the equatione−pny=1−y{\displaystyle e^{-pny}=1-y}.
The largest connected component has high complexity. All other components are simple and small|C2|=O(logn){\displaystyle |C_{2}|=O(\log n)}.
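The three regimes can be observed by generating G(n, p) graphs on either side of the critical point np = 1 and measuring the largest component. A sketch in plain Python; function names and parameter values are our own:

```python
import random

def er_graph(n, p, seed=0):
    """G(n, p): include each possible edge independently with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via depth-first search."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        best = max(best, len(comp))
    return best

n = 2000
small = largest_component(er_graph(n, 0.5 / n))   # subcritical: np = 0.5
giant = largest_component(er_graph(n, 2.0 / n))   # supercritical: np = 2
print(small, giant)   # small stays O(log n); giant spans a finite fraction
```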
The configuration model takes a degree sequence[17][18]or degree distribution[19][20](which subsequently is used to generate a degree sequence) as the input, and produces randomly connected graphs in all respects other than the degree sequence. This means that for a given choice of the degree sequence, the graph is chosen uniformly at random from the set of all graphs that comply with this degree sequence. The degreek{\displaystyle k}of a randomly chosen vertex is anindependent and identically distributedrandom variable with integer values. WhenE[k2]−2E[k]>0{\textstyle \mathbb {E} [k^{2}]-2\mathbb {E} [k]>0}, the configuration graph contains thegiant connected component, which has infinite size.[18]The rest of the components have finite sizes, which can be quantified with the notion of the size distribution. The probabilityw(n){\displaystyle w(n)}that a randomly sampled node is connected to a component of sizen{\displaystyle n}is given byconvolution powersof the degree distribution:[21]w(n)={E[k]n−1u1∗n(n−2),n>1,u(0)n=1,{\displaystyle w(n)={\begin{cases}{\frac {\mathbb {E} [k]}{n-1}}u_{1}^{*n}(n-2),&n>1,\\u(0)&n=1,\end{cases}}}whereu(k){\displaystyle u(k)}denotes the degree distribution andu1(k)=(k+1)u(k+1)E[k]{\displaystyle u_{1}(k)={\frac {(k+1)u(k+1)}{\mathbb {E} [k]}}}. The giant component can be destroyed by randomly removing the critical fractionpc{\displaystyle p_{c}}of all edges. This process is calledpercolation on random networks. When the second moment of the degree distribution is finite,E[k2]<∞{\textstyle \mathbb {E} [k^{2}]<\infty }, this critical edge fraction is given by[22]pc=1−E[k]E[k2]−E[k]{\displaystyle p_{c}=1-{\frac {\mathbb {E} [k]}{\mathbb {E} [k^{2}]-\mathbb {E} [k]}}}, and theaverage vertex-vertex distancel{\displaystyle l}in the giant component scales logarithmically with the total size of the network,l=O(logN){\displaystyle l=O(\log N)}.[20]
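As a numeric illustration of the critical edge fraction formula: for a Poisson degree distribution with mean c, the second factorial moment is E[k²] − E[k] = c², so p_c = 1 − 1/c. A quick check of this against the formula above (script and parameter choice are our own):

```python
from math import exp, factorial

# Poisson degree distribution with mean c; truncate at k = 60, where the
# tail mass is negligible for c = 4.
c = 4.0
pk = [exp(-c) * c**k / factorial(k) for k in range(60)]

Ek = sum(k * p for k, p in enumerate(pk))        # E[k]  ~ c
Ek2 = sum(k * k * p for k, p in enumerate(pk))   # E[k^2] ~ c + c^2

pc = 1 - Ek / (Ek2 - Ek)                         # configuration-model threshold
print(round(pc, 3))                              # 1 - 1/c = 0.75
```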
In the directed configuration model, the degree of a node is given by two numbers, in-degreekin{\displaystyle k_{\text{in}}}and out-degreekout{\displaystyle k_{\text{out}}}, and consequently, the degree distribution is two-variate. The expected number of in-edges and out-edges coincides, so thatE[kin]=E[kout]{\textstyle \mathbb {E} [k_{\text{in}}]=\mathbb {E} [k_{\text{out}}]}. The directed configuration model contains thegiant componentiff[23]2E[kin]E[kinkout]−E[kin]E[kout2]−E[kin]E[kin2]+E[kin2]E[kout2]−E[kinkout]2>0.{\displaystyle 2\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{in}}k_{\text{out}}]-\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{out}}^{2}]-\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{in}}^{2}]+\mathbb {E} [k_{\text{in}}^{2}]\mathbb {E} [k_{\text{out}}^{2}]-\mathbb {E} [k_{\text{in}}k_{\text{out}}]^{2}>0.}Note thatE[kin]{\textstyle \mathbb {E} [k_{\text{in}}]}andE[kout]{\textstyle \mathbb {E} [k_{\text{out}}]}are equal and therefore interchangeable in the latter inequality. The probability that a randomly chosen vertex belongs to a component of sizen{\displaystyle n}is given by:[24]hin(n)=E[kin]n−1u~in∗n(n−2),n>1,u~in=kin+1E[kin]∑kout≥0u(kin+1,kout),{\displaystyle h_{\text{in}}(n)={\frac {\mathbb {E} [k_{in}]}{n-1}}{\tilde {u}}_{\text{in}}^{*n}(n-2),\;n>1,\;{\tilde {u}}_{\text{in}}={\frac {k_{\text{in}}+1}{\mathbb {E} [k_{\text{in}}]}}\sum \limits _{k_{\text{out}}\geq 0}u(k_{\text{in}}+1,k_{\text{out}}),}for in-components, and
for out-components.
TheWatts and Strogatz modelis a random graph generation model that produces graphs withsmall-world properties.
An initial lattice structure is used to generate a Watts–Strogatz model. Each node in the network is initially linked to its⟨k⟩{\displaystyle \langle k\rangle }closest neighbors. Another parameter is specified as the rewiring probability. Each edge has a probabilityp{\displaystyle p}that it will be rewired to the graph as a random edge. The expected number of rewired links in the model ispE=pN⟨k⟩/2{\displaystyle pE=pN\langle k\rangle /2}.
As the Watts–Strogatz model begins as a non-random lattice structure, it has a very high clustering coefficient along with a high average path length. Each rewire is likely to create a shortcut between highly connected clusters. As the rewiring probability increases, the clustering coefficient decreases more slowly than the average path length. In effect, this allows the average path length of the network to decrease significantly with only slight decreases in the clustering coefficient. Higher values ofpforce more rewired edges, which, in effect, makes the Watts–Strogatz model approach a random network.
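The construction, ring lattice plus probabilistic rewiring, can be sketched as follows. This is a simplified version with our own function names; note that rewiring moves one endpoint of an edge while keeping the other, so the total edge count is preserved:

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    (k even), then each lattice edge rewired with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):                       # build the ring lattice
        for d in range(1, k // 2 + 1):
            u = (v + d) % n
            adj[v].add(u)
            adj[u].add(v)
    for v in range(n):                       # rewire each original edge w.p. p
        for d in range(1, k // 2 + 1):
            u = (v + d) % n
            if rng.random() < p and u in adj[v]:
                w = rng.randrange(n)
                if w != v and w not in adj[v]:   # avoid self-loops/duplicates
                    adj[v].discard(u)
                    adj[u].discard(v)
                    adj[v].add(w)
                    adj[w].add(v)
    return adj

g = watts_strogatz(100, 4, 0.1)
print(sum(len(nb) for nb in g.values()) // 2)   # edge count stays n*k/2 = 200
```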
TheBarabási–Albert modelis a random network model used to demonstrate a preferential attachment or a "rich-get-richer" effect. In this model, an edge is most likely to attach to nodes with higher degrees.
The network begins with an initial network ofm0nodes, wherem0≥ 2 and the degree of each node in the initial network is at least 1; otherwise a node will always remain disconnected from the rest of the network.
In the BA model, new nodes are added to the network one at a time. Each new node is connected tom{\displaystyle m}existing nodes with a probability that is proportional to the number of links that the existing nodes already have. Formally, the probabilitypithat the new node is connected to nodeiis[25]pi=ki∑jkj{\displaystyle p_{i}={\frac {k_{i}}{\sum _{j}k_{j}}}}
wherekiis the degree of nodei. Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes.
The degree distribution resulting from the BA model is scale free, in particular, for large degree it is a power law of the formP(k)∼k−3{\displaystyle P(k)\sim k^{-3}}.
Hubs exhibit high betweenness centrality which allows short paths to exist between nodes. As a result, the BA model tends to have very short average path lengths. The clustering coefficient of this model also tends to 0.
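Preferential-attachment growth can be sketched with the standard repeated-nodes trick: keeping a list in which each node appears once per unit of degree makes uniform sampling from the list automatically degree-weighted. Function names are our own:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a network: each new node links to m existing nodes chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(m + 1)}
    targets = []                          # node list weighted by degree
    for i in range(m + 1):                # seed with a small complete graph
        for j in range(i + 1, m + 1):
            adj[i].add(j)
            adj[j].add(i)
            targets += [i, j]
    for v in range(m + 1, n):
        adj[v] = set()
        chosen = set()
        while len(chosen) < m:            # m distinct degree-weighted targets
            chosen.add(rng.choice(targets))
        for u in chosen:
            adj[v].add(u)
            adj[u].add(v)
            targets += [v, u]
    return adj

g = barabasi_albert(2000, 2)
degrees = sorted((len(nb) for nb in g.values()), reverse=True)
print(degrees[0], degrees[-1])   # a large hub vs. the minimum degree m
```

The sorted degree list shows the "rich-get-richer" effect directly: the largest hub accumulates far more links than the typical node, while no node falls below degree m.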
TheBarabási–Albert model[26]was developed for undirected networks, aiming to explain the universality of the scale-free property, and applied to a wide range of different networks and applications. The directed version of this model is thePrice model,[27][28]which was developed to describe citation networks.
In non-linear preferential attachment (NLPA), existing nodes in the network gain new edges proportionally to the node degree raised to a constant positive power,α{\displaystyle \alpha }.[29]Formally, this means that the probability that nodei{\displaystyle i}gains a new edge is given bypi=kiα∑jkjα{\displaystyle p_{i}={\frac {k_{i}^{\alpha }}{\sum _{j}k_{j}^{\alpha }}}}.
Ifα=1{\displaystyle \alpha =1}, NLPA reduces to the BA model and is referred to as "linear". If0<α<1{\displaystyle 0<\alpha <1}, NLPA is referred to as "sub-linear" and the degree distribution of the network tends to astretched exponential distribution. Ifα>1{\displaystyle \alpha >1}, NLPA is referred to as "super-linear" and a small number of nodes connect to almost all other nodes in the network. For bothα<1{\displaystyle \alpha <1}andα>1{\displaystyle \alpha >1}, the scale-free property of the network is broken in the limit of infinite system size. However, ifα{\displaystyle \alpha }is only slightly larger than1{\displaystyle 1}, NLPA may result indegree distributionswhich appear to be transiently scale free.[30]
Another model where the key ingredient is the nature of the vertex has been introduced by Caldarelli et al.[31]Here a link is created between two verticesi,j{\displaystyle i,j}with a probability given by a linking functionf(ηi,ηj){\displaystyle f(\eta _{i},\eta _{j})}of thefitnessesof the vertices involved.
The degree of a vertex i is given by[32]
Ifk(ηi){\displaystyle k(\eta _{i})}is an invertible and increasing function ofηi{\displaystyle \eta _{i}}, then
the probability distributionP(k){\displaystyle P(k)}is given by
As a result, if the fitnessesη{\displaystyle \eta }are distributed as a power law, then the node degree is as well.
Less intuitively, with a fast-decaying probability distribution such asρ(η)=e−η{\displaystyle \rho (\eta )=e^{-\eta }}together with a linking function of the kindf(ηi,ηj)=Θ(ηi+ηj−Z){\displaystyle f(\eta _{i},\eta _{j})=\Theta (\eta _{i}+\eta _{j}-Z)}, withZ{\displaystyle Z}a constant andΘ{\displaystyle \Theta }the Heaviside function, we also obtain scale-free networks.
Such a model has been successfully applied to describe trade between nations by using GDP as fitness for the various nodesi,j{\displaystyle i,j}and a linking function of the kind[33][34]
Exponential random graph models (ERGMs) are a family ofstatistical modelsfor analyzing data fromsocialand other networks.[35]Theexponential familyis a broad class of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks.
We adopt the notation to represent arandom graphY∈Y{\displaystyle Y\in {\mathcal {Y}}}via a set ofn{\displaystyle n}nodes and a collection oftievariables{Yij:i=1,…,n;j=1,…,n}{\displaystyle \{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}}, indexed by pairs of nodesij{\displaystyle ij}, whereYij=1{\displaystyle Y_{ij}=1}if the nodes(i,j){\displaystyle (i,j)}are connected by an edge andYij=0{\displaystyle Y_{ij}=0}otherwise.
The basic assumption of ERGMs is that the structure in an observed graphy{\displaystyle y}can be explained by a given vector ofsufficient statisticss(y){\displaystyle s(y)}which are a function of the observed network and, in some cases, nodal attributes. The probability of a graphy∈Y{\displaystyle y\in {\mathcal {Y}}}in an ERGM is defined by:
P(Y=y|θ)=exp(θTs(y))c(θ){\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}}}
whereθ{\displaystyle \theta }is a vector of model parameters associated withs(y){\displaystyle s(y)}andc(θ)=∑y′∈Yexp(θTs(y′)){\displaystyle c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))}is a normalising constant.
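For a network small enough to enumerate, the normalising constant c(θ) can be computed by brute force. Below is a sketch with n = 3 nodes and the single statistic s(y) = number of edges; with this particular statistic, ties are independent and the ERGM reduces exactly to G(n, p) with p = e^θ/(1 + e^θ), which the script verifies. The parameter value is our own choice:

```python
from itertools import product
from math import exp

# ERGM on 3 nodes: 3 possible undirected ties, statistic s(y) = edge count.
theta = 0.5
graphs = list(product([0, 1], repeat=3))        # all 2^3 tie configurations

# Normalising constant c(theta) = sum over all graphs of exp(theta * s(y)).
c = sum(exp(theta * sum(y)) for y in graphs)

# Probability of the complete triangle (all three ties present).
p_triangle = exp(theta * 3) / c

# With an edge-count statistic, ties are independent Bernoulli variables
# with p = e^theta / (1 + e^theta), so the triangle probability is p^3.
p_edge = exp(theta) / (1 + exp(theta))
print(round(p_triangle, 6), round(p_edge ** 3, 6))   # the two agree
```

Richer statistics (triangle counts, degree terms) break this independence, which is what makes general ERGM inference hard: c(θ) then has no closed form and must be approximated, typically by MCMC.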
Social networkanalysisexamines the structure of relationships between social entities.[36]These entities are often persons, but may also begroups,organizations,nation states,web sites, orscholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of themathematicalandstatisticaltools used for studying networks have been first developed insociology.[37]Amongst many other applications, social network analysis has been used to understand the diffusion ofinnovation, news andrumors. Similarly, it has been used to examine the spread of bothdiseasesandhealth-related behaviors. It has also been applied to thestudy of markets, where it has been used to examine the role of trust inexchange relationshipsand of social mechanisms in setting prices. Similarly, it has been used to study recruitment intopolitical movementsand social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousintraffic analysis) has gained a significant use in military intelligence, for uncovering insurgent networks of both hierarchical andleaderlessnature.[38][39]Incriminology, it is being used to identify influential actors in criminal gangs, offender movements, co-offending, predict criminal activities and make policies.[40]
Dynamic network analysisexamines the shifting structure of relationships among different classes of entities in complex socio-technical systems, and reflects social stability and changes such as the emergence of new groups, topics, and leaders.[41][42][43]Dynamic network analysis focuses on meta-networks composed of multiple types of nodes (entities) andmultiple types of links. These entities can be highly varied. Examples include people, organizations, topics, resources, tasks, events, locations, and beliefs.
Dynamic network techniques are particularly useful for assessing trends and changes in networks over time, identification of emergent leaders, and examining the co-evolution of people and ideas.
With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focuses on local patterns in the network. For example,network motifsare small subgraphs that are over-represented in the network.Activity motifsare similar patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. The analysis ofbiological networkshas led to the development ofnetwork medicine, which looks at the effect of diseases in theinteractome.[44]
Semantic networkanalysis is a sub-field of network analysis that focuses on the relationships between words andconceptsin a network. Words are represented as nodes and their proximity or co-occurrences in the text are represented as edges. Semantic networks are therefore graphical representations of knowledge and are commonly used inneurolinguisticsandnatural language processingapplications. Semantic network analysis is also used as a method to analyze large texts and identify the main themes and topics (e.g., ofsocial mediaposts), to reveal biases (e.g., in news coverage), or even to map an entire research field.[45]
Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, financial transactions they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of the police investigation. Link analysis here provides the crucial relationships and associations between objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed bybanksandinsuranceagencies infrauddetection, by telecommunication operators in telecommunication network analysis, by medical sector inepidemiologyandpharmacology, in law enforcementinvestigations, bysearch enginesforrelevancerating (and conversely by thespammersforspamdexingand by business owners forsearch engine optimization), and everywhere else where relationships between many objects have to be analyzed.
TheSIR modelis one of the most well known algorithms on predicting the spread of global pandemics within an infectious population.
The formula above describes the "force" of infection for each susceptible unit in an infectious population, whereβis equivalent to the transmission rate of said disease.
To track the change of those susceptible in an infectious population:
Over time, the number of those infected fluctuates by: the specified rate of recovery, represented byμ{\displaystyle \mu }but deducted to one over the average infectious period1τ{\displaystyle {1 \over \tau }}, the numbered of infectious individuals,I{\displaystyle I}, and the change in time,Δt{\displaystyle \Delta t}.
Whether a population will be overcome by a pandemic, with regards to the SIR model, depends on the value ofR0{\displaystyle R_{0}}, the average number of people infected by a single infected individual: ifR0>1{\displaystyle R_{0}>1}, the infection spreads.
SeveralWeb searchrankingalgorithms use link-based centrality metrics, including (in order of appearance)Marchiori'sHyper Search,Google'sPageRank, Kleinberg'sHITS algorithm, theCheiRankandTrustRankalgorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' web sites or blogs.
PageRankworks by randomly picking "nodes" or websites and then with a certain probability, "randomly jumping" to other nodes. By randomly jumping to these other nodes, it helps PageRank completely traverse the network as some webpages exist on the periphery and would not as readily be assessed.
Each node,xi{\displaystyle x_{i}}, has a PageRank defined as the sum, over all pagesj{\displaystyle j}that link toi{\displaystyle i}, of the PageRank ofj{\displaystyle j}divided by the number of outlinks, or "out-degree", ofj{\displaystyle j}.
As explained above, PageRank enlists random jumps in attempts to assign PageRank to every website on the internet. These random jumps find websites that might not be found during the normal search methodologies such asbreadth-first searchanddepth-first search.
An improvement over the aforementioned formula for determining PageRank adds these random jump components: without the random jumps, some pages (those with no inbound links) would receive a PageRank of 0.
The formula involves two components: the first isα{\displaystyle \alpha }, the probability that a random jump will occur; contrasting is the "damping factor", or1−α{\displaystyle 1-\alpha }.
Another way of looking at it: each node's PageRank is the random-jump probability spread evenly over all nodes, plus the damped contribution of its in-neighbors,xi=αN+(1−α)∑j→ixjdjout{\displaystyle x_{i}={\frac {\alpha }{N}}+(1-\alpha )\sum _{j\to i}{\frac {x_{j}}{d_{j}^{\mathrm {out} }}}}.
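The random-jump scheme described above can be sketched as a short power-iteration routine. This is an illustrative toy, not Google's implementation; the link graph and the value of alpha are made up for the example.

```python
# Minimal power-iteration PageRank sketch. "alpha" is the probability
# of a random jump, as described above; (1 - alpha) is the damping factor.
def pagerank(links, alpha=0.15, iters=100):
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: alpha / n for v in nodes}   # random-jump component
        for j, outlinks in links.items():
            if not outlinks:                  # dangling node: jump anywhere
                for v in nodes:
                    new[v] += (1 - alpha) * rank[j] / n
            else:                             # spread rank over out-degree
                for i in outlinks:
                    new[i] += (1 - alpha) * rank[j] / len(outlinks)
        rank = new
    return rank

links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(links)
print(ranks)  # ranks sum to 1; "a" collects the most PageRank here
```

Because every node receives the α/N jump mass on each iteration, even a page with no inbound links ends up with a small positive rank.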
Information about the relative importance of nodes and edges in a graph can be obtained throughcentralitymeasures, widely used in disciplines likesociology. Centrality measures are essential when a network analysis has to answer questions such as: "Which nodes in the network should be targeted to ensure that a message or information spreads to all or most nodes in the network?" or conversely, "Which nodes should be targeted to curtail the spread of a disease?". Formally established measures of centrality aredegree centrality,closeness centrality,betweenness centrality,eigenvector centrality, andKatz centrality. The objective of network analysis generally determines the type of centrality measure(s) to be used.[36]
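As a concrete illustration, two of these measures can be computed on a small toy graph with only the standard library. The graph and its values are made up for the example; degree centrality is a node's degree divided by the number of other nodes, and closeness centrality is that count divided by the sum of shortest-path distances from the node.

```python
from collections import deque

# Toy undirected graph as an adjacency map (illustrative only).
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B", "E"},
    "E": {"D"},
}

def degree_centrality(g):
    n = len(g) - 1  # number of other nodes
    return {v: len(nbrs) / n for v, nbrs in g.items()}

def closeness_centrality(g, v):
    # Breadth-first search for shortest-path distances from v.
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(g) - 1) / sum(dist.values())

print(degree_centrality(graph)["B"])            # 0.75
print(closeness_centrality(graph, "B"))         # 0.8
```

Node B scores highest on both measures here, which matches the intuition that it sits between the A-C cluster and the D-E branch.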
Content in acomplex networkcan spread via two major methods: conserved spread and non-conserved spread.[46]In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. The pitcher represents the source, and the water represents the spread content. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the source is infinite. Also, any funnels exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of mostinfectious diseases.
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments, susceptible:S(t){\displaystyle S(t)}, infected,I(t){\displaystyle I(t)}, and recovered,R(t){\displaystyle R(t)}. The compartments used for this model consist of three classes:
The flow of this model may be considered as follows:
Using a fixed population,N=S(t)+I(t)+R(t){\displaystyle N=S(t)+I(t)+R(t)}, Kermack and McKendrick derived the following equations:dSdt=−βSI,dIdt=βSI−γI,dRdt=γI{\displaystyle {\frac {dS}{dt}}=-\beta SI,\qquad {\frac {dI}{dt}}=\beta SI-\gamma I,\qquad {\frac {dR}{dt}}=\gamma I}
Several assumptions were made in the formulation of these equations: First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease with a rate ofβ{\displaystyle \beta }, which is considered the contact or infection rate of the disease. Therefore, an infected individual makes contact and is able to transmit the disease withβN{\displaystyle \beta N}others per unit time and the fraction of contacts by an infected with a susceptible isS/N{\displaystyle S/N}. The number of new infections in unit time per infective then isβN(S/N){\displaystyle \beta N(S/N)}, giving the rate of new infections (or those leaving the susceptible category) asβN(S/N)I=βSI{\displaystyle \beta N(S/N)I=\beta SI}(Brauer & Castillo-Chavez, 2001). For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, infectives are leaving this class per unit time to enter the recovered/removed class at a rateγ{\displaystyle \gamma }per unit time (whereγ{\displaystyle \gamma }represents the mean recovery rate, or1/γ{\displaystyle 1/\gamma }the mean infective period). These processes which occur simultaneously are referred to as theLaw of Mass Action, a widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned (Daley & Gani, 2005). Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths and therefore, these factors are ignored in this model.
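The rate equations above, with new infections entering at rate βSI and recoveries leaving at rate γI, can be integrated numerically. The sketch below uses a simple forward-Euler step; the values of β, γ, the population size, and the step size are made up for illustration.

```python
# Forward-Euler integration of the Kermack-McKendrick SIR equations:
#   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
def sir(beta, gamma, S0, I0, R0, days, dt=0.01):
    S, I, R = float(S0), float(I0), float(R0)
    for _ in range(int(days / dt)):
        new_inf = beta * S * I * dt   # leaving the susceptible class
        new_rec = gamma * I * dt      # leaving the infected class
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
    return S, I, R

# Population of 1000 with one initial infective (illustrative values).
S, I, R = sir(beta=0.0005, gamma=0.1, S0=999, I0=1, R0=0, days=200)
print(round(S + I + R))  # 1000: the fixed population N is conserved
```

Because each infection moves an individual from S to I and each recovery from I to R, the total N stays constant at every step, exactly as the model assumes.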
More can be read on this model on theEpidemic modelpage.
Amaster equationcan express the behaviour of an undirected growing network where, at each time step, a new node is added to the network, linked to an old node (randomly chosen and without preference). The initial network is formed by two nodes and two links between them at timet=2{\displaystyle t=2}; this configuration is necessary only to simplify further calculations, so at timet=n{\displaystyle t=n}the network hasn{\displaystyle n}nodes andn{\displaystyle n}links.
The master equation for this network is:p(k,s,t+1)=1tp(k−1,s,t)+(1−1t)p(k,s,t){\displaystyle p(k,s,t+1)={\frac {1}{t}}p(k-1,s,t)+\left(1-{\frac {1}{t}}\right)p(k,s,t)}
wherep(k,s,t){\displaystyle p(k,s,t)}is the probability that nodes{\displaystyle s}has degreek{\displaystyle k}at timet{\displaystyle t}, ands{\displaystyle s}is the time step when this node was added to the network. Note that there are only two ways for an old nodes{\displaystyle s}to havek{\displaystyle k}links at timet+1{\displaystyle t+1}:
After simplifying this model, the degree distribution isP(k)=2−k.{\displaystyle P(k)=2^{-k}.}[47]
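This degree distribution can be checked empirically. The sketch below grows such a network (each new node links to one uniformly chosen existing node) with a fixed random seed, and compares the observed degree fractions against 2^{-k}; the network size and seed are arbitrary choices for the example.

```python
import random
from collections import Counter

def grow_network(steps, seed=1):
    """Grow an undirected network: each new node links to one
    uniformly chosen existing node (no preferential attachment)."""
    random.seed(seed)
    degree = {0: 2, 1: 2}  # initial configuration: two nodes, two links
    for new in range(2, steps):
        old = random.randrange(new)  # existing node chosen uniformly
        degree[new] = 1
        degree[old] += 1
    return degree

degree = grow_network(200_000)
counts = Counter(degree.values())
n = len(degree)
# Empirical P(k) should approach 2^{-k}: about 0.5, 0.25, 0.125, ...
for k in (1, 2, 3):
    print(k, counts[k] / n)
```

With 200,000 nodes the observed fractions for k = 1, 2, 3 sit close to 1/2, 1/4, and 1/8, matching the exponential (not power-law) distribution that uniform attachment produces.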
Based on this growing network, an epidemic model is developed following a simple rule: Each time the new node is added and after choosing the old node to link, a decision is made: whether or not this new node will be infected. The master equation for this epidemic model is:
wherert{\displaystyle r_{t}}represents the decision to infect (rt=1{\displaystyle r_{t}=1}) or not (rt=0{\displaystyle r_{t}=0}). Solving this master equation, the following solution is obtained:P~r(k)=(r2)k.{\displaystyle {\tilde {P}}_{r}(k)=\left({\frac {r}{2}}\right)^{k}.}[48]
Multilayer networksare networks with multiple kinds of relations.[49]Attempts to model real-world systems as multidimensional networks have been used in various fields such as social network analysis,[50]economics, history, urban and international transport, ecology, psychology, medicine, biology, commerce, climatology, physics, computational neuroscience, operations management, and finance.
Network problems that involve finding an optimal way of doing something are studied under the name ofcombinatorial optimization. Examples includenetwork flow,shortest path problem,transport problem,transshipment problem,location problem,matching problem,assignment problem,packing problem,routing problem,critical path analysisandPERT(Program Evaluation & Review Technique).
In recent years, research has emerged focusing on the optimization of network problems. For example, Michael Mann's research published byIEEEaddresses the optimization of transportation networks.[51]
Interdependent networksare networks in which the functioning of nodes in one network depends on the functioning of nodes in another network. In nature, networks rarely appear in isolation; rather, they are typically elements of larger systems and interact with other elements in that complex system. Such complex dependencies can have non-trivial effects on one another. A well-studied example is the interdependency of infrastructure networks:[52]the power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of a communications network. Though the transportation network does not depend on the power network to function, the communications network does. In such infrastructure networks, the dysfunction of a critical number of nodes in either the power network or the communication network can lead to cascading failures across the system with potentially catastrophic results for the functioning of the whole system.[53]If the two networks were treated in isolation, this important feedback effect would not be seen and predictions of network robustness would be greatly overestimated.
|
https://en.wikipedia.org/wiki/Network_science
|
Inartificial intelligenceresearch,commonsense knowledgeconsists of facts about the everyday world, such as "Lemons are sour", or "Cows say moo", that all humans are expected to know. It is currently an unsolved problem inartificial general intelligence. The first AI program to address common sense knowledge wasAdvice Takerin 1959 byJohn McCarthy.[1]
Commonsense knowledge can underpin acommonsense reasoningprocess, to attempt inferences such as "You might bake a cake because you want people to eat the cake." Anatural language processingprocess can be attached to the commonsense knowledge base to allow theknowledge baseto attempt toanswer questionsabout the world.[2]Common sense knowledge also helps to solve problems in the face ofincomplete information. Using widely held beliefs about everyday objects, orcommon senseknowledge, AI systems make common sense assumptions ordefault assumptionsabout the unknown similar to the way people do. In an AI system or in English, this is expressed as "Normally P holds", "Usually P" or "Typically P so Assume P". For example, if we know the fact "Tweety is a bird", because we know the commonly held belief about birds, "typically birds fly," without knowing anything else about Tweety, we may reasonably assume the fact that "Tweety can fly." As more knowledge of the world is discovered or learned over time, the AI system can revise its assumptions about Tweety using atruth maintenanceprocess. If we later learn that "Tweety is a penguin" thentruth maintenancerevises this assumption because we also know "penguins do not fly".
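The Tweety pattern above can be sketched in a few lines. This is an illustrative toy, not a real truth-maintenance system; the string-based predicate encoding and the exception list are made up for the example.

```python
# Default reasoning sketch: "typically birds fly" holds unless a more
# specific fact (e.g. the bird is a penguin) defeats the assumption.
facts = {"bird(Tweety)"}
exceptions = {"penguin"}  # kinds of birds known not to fly

def can_fly(animal, facts):
    if f"bird({animal})" not in facts:
        return False
    # Default assumption applies unless an exception is known
    # for this individual.
    return not any(f"{e}({animal})" in facts for e in exceptions)

print(can_fly("Tweety", facts))   # True: default assumption holds
facts.add("penguin(Tweety)")      # new, more specific information arrives
print(can_fly("Tweety", facts))   # False: the assumption is retracted
```

A real truth-maintenance system additionally records which conclusions depended on the retracted assumption, so that everything derived from "Tweety can fly" is revised as well.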
Commonsense reasoning simulates the human ability to use commonsense knowledge to make presumptions about the type and essence of ordinary situations they encounter every day, and to change their "minds" should new information come to light. This includes time, missing or incomplete information and cause and effect. The ability to explain cause and effect is an important aspect ofexplainable AI.Truth maintenancealgorithms automatically provide an explanation facility because they create elaborate records of presumptions. Compared with humans, all existing computer programs that attempthuman-level AIperform extremely poorly on modern "commonsense reasoning" benchmark tests such as theWinograd Schema Challenge.[3]The problem of attaining human-level competency at "commonsense knowledge" tasks is considered to probably be "AI complete" (that is, solving it would require the ability to synthesize a fully human-level intelligence),[4][5]although some oppose this notion and believe compassionate intelligence is also required for human-level AI.[6]Common sense reasoning has been applied successfully in more limited domains such asnatural language processing[7][8]and automated diagnosis[9]or analysis.[10]
Compiling comprehensive knowledge bases of commonsense assertions (CSKBs) is a long-standing challenge in AI research. From early expert-driven efforts like CYC andWordNet, significant advances were achieved via the crowdsourcedOpenMind Commonsenseproject, which led to the crowdsourced ConceptNet KB. Several approaches have attempted to automate CSKB construction, most notably via text mining (WebChild, Quasimodo, TransOMCS, Ascent), as well as by harvesting assertions directly from pre-trained language models (AutoTOMIC). These resources are significantly larger than ConceptNet, though the automated construction generally makes them of somewhat lower quality. Challenges also remain in the representation of commonsense knowledge: most CSKB projects follow a triple data model, which is not necessarily best suited for breaking down more complex natural language assertions. A notable exception here is GenericsKB, which applies no further normalization to sentences, but retains them in full.
Around 2013,MITresearchers developed BullySpace, an extension of the commonsense knowledgebaseConceptNet, to catch taunting social media comments. BullySpace included over 200 semantic assertions based around stereotypes, to help the system infer that comments like "Put on a wig and lipstick and be who you really are" are more likely to be an insult if directed at a boy than a girl.[11][12][13]
ConceptNet has also been used by chatbots[14]and by computers that compose original fiction.[15]AtLawrence Livermore National Laboratory, common sense knowledge was used in anintelligent software agentto detect violations of acomprehensive nuclear test bantreaty.[16]
As an example, as of 2012 ConceptNet includes these 21 language-independent relations:[17]
|
https://en.wikipedia.org/wiki/Commonsense_knowledge_bases
|
Aconcept maporconceptual diagramis adiagramthat depicts suggested relationships betweenconcepts.[1]Concept maps may be used byinstructional designers,engineers,technical writers, and others to organize and structureknowledge.
A concept map typically represents ideas and information as boxes or circles, which it connects with labeled arrows, often in a downward-branching hierarchical structure but also infree-formmaps.[2][3]The relationship between concepts can be articulated inlinking phrasessuch as "causes", "requires", "such as" or "contributes to".[4]
The technique forvisualizingthese relationships among different concepts is calledconcept mapping. Concept maps have been used to define theontologyof computer systems, for example with theobject-role modelingorUnified Modeling Languageformalism.
Concept mapping was developed by the professor of educationJoseph D. Novakand his research team atCornell Universityin the 1970s as a means of representing the emerging science knowledge of students.[7]It has subsequently been used as a way to increase meaningful learning in the sciences and other subjects as well as to represent the expert knowledge of individuals and teams in education, government and business. Concept maps have their origin in the learning movement calledconstructivism. In particular, constructivists hold that learners actively construct knowledge.
Novak's work is based on the cognitive theories ofDavid Ausubel, who stressed the importance of prior knowledge in being able to learn (orassimilate) new concepts: "The most important single factor influencing learning is what the learner already knows. Ascertain this and teach accordingly."[8]Novak taught students as young as six years old to make concept maps to represent their response to focus questions such as "What is water?" "What causes the seasons?" In his bookLearning How to Learn, Novak stated that "meaningful learning involves the assimilation of new concepts and propositions into existing cognitive structures."
Various attempts have been made to conceptualize the process of creating concept maps.[9]McAleese suggested that the process of making knowledge explicit, usingnodesandrelationships, allows the individual to become aware of what they know and as a result to be able to modify what they know.[10]Maria Birbili applied the same idea to helping young children learn to think about what they know.[11]McAleese's concept of theknowledge arenasuggests a virtual space where learners may explore what they know and what they do not know.[10]
Concept maps are used to stimulate the generation of ideas, and are believed to aidcreativity.[4]Concept mapping is also sometimes used forbrain-storming. Although they are often personalized and idiosyncratic, concept maps can be used to communicate complex ideas.
Formalized concept maps are used insoftware design, where a common usage isUnified Modeling Languagediagramming amongst similar conventions and development methodologies.
Concept mapping can also be seen as a first step inontology-building, and can also be used flexibly to represent formal argument — similar toargument maps.
Concept maps are widely used in education and business. Uses include:
|
https://en.wikipedia.org/wiki/Concept_map
|
Ininformation scienceandontology, aclassification schemeis an arrangement of classes or groups of classes. The activity of developing the schemes bears similarity totaxonomy, but with perhaps a more theoretical bent, as a single classification scheme can be applied over a widesemantic spectrumwhile taxonomies tend to be devoted to a single topic.
In the abstract, the resulting structures are a crucial aspect ofmetadata, often represented as a hierarchical structure and accompanied by descriptive information of the classes or groups. Such a classification scheme is intended to be used for theclassificationof individual objects into the classes or groups, and the classes or groups are based on characteristics which the objects (members) have in common.
TheISO/IEC 11179metadata registry standard uses classification schemes as a way to classify administered items, such asdata elements, in ametadata registry.
Some quality criteria for classification schemes are:
Inlinguistics,subordinateconcepts are described ashyponymsof their respective superordinates; typically, a hyponym is 'a kind of' its superordinate.[1]
Using one or more classification schemes for the classification of a collection of objects has many benefits. Some of these include:
The following are examples of different kinds of classification schemes. This list is in approximate order from informal to more formal:
One example of a classification scheme fordata elementsis arepresentation term.
|
https://en.wikipedia.org/wiki/Classification_scheme_(information_science)
|
Folksonomyis aclassification systemin whichend usersapply publictagsto online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to ataxonomicclassification designed by the owners of thecontentand specified when it is published.[1][2]This practice is also known ascollaborative tagging,[3][4]social classification,social indexing, andsocial tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval",[5]but online sharing and interaction expanded it into collaborative forms.Social taggingis the application of tags in an open online environment where the tags of other users are available to others.Collaborative tagging(also known as group tagging) is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.
The term was coined byThomas Vander Walin 2004[5][6][7]as aportmanteauoffolkandtaxonomy. Folksonomies became popular as part ofsocial softwareapplications such associal bookmarkingand photograph annotation that enable users to collectively classify and find information via shared tags. Some websites includetag cloudsas a way to visualize tags in a folksonomy.[8]
Folksonomies can be used forK–12education, business, and higher education. More specifically, folksonomies may be implemented for social bookmarking, teacher resource repositories, e-learning systems, collaborative learning, collaborative research, professional development and teaching.Wikipediais a prime example of folksonomy.[9][better source needed][clarification needed]
Folksonomies are a trade-off between traditional centralized classification and no classification at all,[10]and have several advantages:[11][12][13]
There are several disadvantages with the use of tags and folksonomies as well,[14]and some of the advantages can lead to problems. For example, the simplicity in tagging can result in poorly applied tags.[15]Further, while controlled vocabularies are exclusionary by nature,[16]tags are often ambiguous and overly personalized.[17]Users apply tags to documents in many different ways and tagging systems also often lack mechanisms for handlingsynonyms,acronymsandhomonyms, and they also often lack mechanisms for handlingspellingvariations such as misspellings,singular/pluralform,conjugatedandcompoundwords. Some tagging systems do not support tags consisting of multiple words, resulting in tags like "viewfrommywindow". Sometimes users choose specialized tags or tags without meaning to others.
A folksonomy emerges when users tag content or information, such as web pages, photos, videos, podcasts, tweets, scientific papers and others. Strohmaier et al.[18]elaborate the concept: the term "tagging" refers to a "voluntary activity of users who are annotating resources with terms – so-called 'tags' – freely chosen from an unbounded and uncontrolled vocabulary". Others explain tags as an unstructured textual label[19]or keywords,[17]and note that they appear as a simple form of metadata.[20]
Folksonomies consist of three basic entities: users, tags, and resources. Users create tags to mark resources such as: web pages, photos, videos, and podcasts. These tags are used to manage, categorize and summarize online content. This collaborative tagging system also uses these tags as a way to index information, facilitate searches and navigate resources. Folksonomy also includes a set of URLs that are used to identify resources that have been referred to by users of different websites. These systems also include category schemes that have the ability to organize tags at different levels of granularity.[21]
Vander Wal identifies two types of folksonomy: broad and narrow.[22]A broad folksonomy arises when multiple users can apply the same tag to an item, providing information about which tags are the most popular. A narrow folksonomy occurs when users, typically fewer in number and often including the item's creator, tag an item with tags that can each be applied only once. While both broad and narrow folksonomies enable the searchability of content by adding an associated word or phrase to an object, a broad folksonomy allows for sorting based on the popularity of each tag, as well as the tracking of emerging trends in tag usage and developing vocabularies.[22]
An example of a broad folksonomy isdel.icio.us, a website where users can tag any online resource they find relevant with their own personal tags. The photo-sharing websiteFlickris an oft-cited example of a narrow folksonomy.
'Taxonomy' refers to a hierarchicalcategorizationin which relatively well-defined classes are nested under broader categories. Afolksonomyestablishes categories (each tag is a category) without stipulating or necessarily deriving a hierarchical structure of parent-child relations among different tags. (Work has been done on techniques for deriving at least loose hierarchies from clusters of tags.[23])
Supporters of folksonomies claim that they are often preferable to taxonomies because folksonomies democratize the way information is organized, they are more useful to users because they reflect current ways of thinking about domains, and they express more information about domains.[24]Critics claim that folksonomies are messy and thus harder to use, and can reflect transient trends that may misrepresent what is known about a field.
An empirical analysis of the complex dynamics of tagging systems, published in 2007,[25]has shown that consensus around stable distributions and shared vocabularies does emerge, even in the absence of a centralcontrolled vocabulary. For content to be searchable, it should be categorized and grouped. While this was believed to require commonly agreed on sets of content describing tags (much like keywords of a journal article), some research has found that in large folksonomies common structures also emerge on the level of categorizations.[26]Accordingly, it is possible to devise mathematicalmodels of collaborative taggingthat allow for translating from personal tag vocabularies (personomies) to the vocabulary shared by most users.[27]
Folksonomy is unrelated tofolk taxonomy, a cultural practice that has been widely documented in anthropological andfolkloristicwork. Folk taxonomies are culturally supplied, intergenerationally transmitted, and relatively stable classification systems that people in a given culture use to make sense of the entire world around them (not just theInternet).[21]
The study of the structuring or classification of folksonomy is termedfolksontology.[28]This branch ofontologydeals with the intersection between highly structured taxonomies or hierarchies and loosely structured folksonomy, asking what best features can be taken by both for a system of classification. The strength of flat-tagging schemes is their ability to relate one item to others like it. Folksonomy allows large disparate groups of users to collaboratively label massive, dynamic information systems. The strength of taxonomies are their browsability: users can easily start from more generalized knowledge and target their queries towards more specific and detailed knowledge.[29]Folksonomy looks to categorize tags and thus create browsable spaces of information that are easy to maintain and expand.
Social tagging forknowledge acquisitionis the specific use of tagging for finding and re-finding specific content for an individual or group. Social tagging systems differ from traditional taxonomies in that they are community-based systems lacking the traditional hierarchy of taxonomies. Rather than a top-down approach, social tagging relies on users to create the folksonomy from the bottom up.[30]
Common uses of social tagging for knowledge acquisition include personal development for individual use and collaborative projects. Social tagging is used for knowledge acquisition in secondary, post-secondary, and graduate education as well as personal and business research. The benefits of finding/re-finding source information are applicable to a wide spectrum of users. Tagged resources are located through search queries rather than searching through a more traditional file folder system.[31]The social aspect of tagging also allows users to take advantage of metadata from thousands of other users.[30]
Users choose individual tags for stored resources. These tags reflect personal associations, categories, and concepts, all of which are individual representations based on meaning and relevance to that individual. The tags, or keywords, are designated by users. Consequently, tags represent a user's associations corresponding to the resource. Commonly tagged resources include videos, photos, articles, websites, and email.[32]Tags are beneficial for two reasons. First, they help to structure and organize large amounts of digital resources in a manner that makes them easily accessible when users attempt to locate the resource at a later time. The second aspect is social in nature, that is to say that users may search for new resources and content based on the tags of other users. Even the act of browsing through common tags may lead to further resources for knowledge acquisition.[30]
Tags that occur more frequently with specific resources are said to be more strongly connected. Furthermore, tags may be connected to each other. This may be seen in the frequency in which they co-occur. The more often they co-occur, the stronger the connection. Tag clouds are often utilized to visualize connectivity between resources and tags. Font size increases as the strength of association increases.[32]
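The tag frequencies and co-occurrence strengths described above are simple counts over the user-resource-tag data. The sketch below computes both for a tiny, made-up tagging dataset; a tag cloud would then scale font size with these counts.

```python
from collections import Counter
from itertools import combinations

# Hypothetical tagging data: each resource maps to the set of tags
# users have applied to it.
tagged = {
    "photo1": {"sunset", "beach", "travel"},
    "photo2": {"beach", "travel"},
    "post1":  {"travel", "recipe"},
}

# How often each tag is used (drives font size in a tag cloud).
tag_freq = Counter(t for tags in tagged.values() for t in tags)

# How often each pair of tags co-occurs on the same resource
# (drives the strength of the connection between tags).
co_occur = Counter()
for tags in tagged.values():
    for a, b in combinations(sorted(tags), 2):
        co_occur[(a, b)] += 1

print(tag_freq["travel"])             # 3: applied to every resource
print(co_occur[("beach", "travel")])  # 2: the strongest pair here
```

Here "travel" is the most frequent tag and the "beach"/"travel" pair the most strongly connected, so both would be rendered most prominently.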
Tags show interconnections of concepts that were formerly unknown to a user. Therefore, a user's current cognitive constructs may be modified or augmented by the metadata information found in aggregated social tags. This process promotes knowledge acquisition through cognitive irritation and equilibration. This theoretical framework is known as the co-evolution model of individual and collective knowledge.[32]
The co-evolution model focuses on cognitive conflict in which a learner's prior knowledge and the information received from the environment are dissimilar to some degree.[30][32]When this incongruence occurs, the learner must work through a process of cognitive equilibration in order to make personal cognitive constructs and outside information congruent. According to the co-evolution model, this may require the learner to modify existing constructs or simply add to them.[30]The additional cognitive effort promotes information processing which in turn allows individual learning to occur.[32]
|
https://en.wikipedia.org/wiki/Folksonomy
|
Ininformation science,formal concept analysis(FCA) is aprincipled wayof deriving aconcept hierarchyor formalontologyfrom a collection ofobjectsand theirproperties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents asubsetof the objects (as well as a superset of the properties) in the concepts above it. The term was introduced byRudolf Willein 1981, and builds on the mathematical theory oflatticesandordered setsthat was developed byGarrett Birkhoffand others in the 1930s.
Formal concept analysis finds practical application in fields includingdata mining,text mining,machine learning,knowledge management,semantic web,software development,chemistryandbiology.
The original motivation of formal concept analysis was the search for real-world meaning of mathematicalorder theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures calledcomplete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents aheterogeneous relationbetween objects and attributes, tabulating pairs of the form "objectghas attributem", is considered as a basic data type. It is referred to as aformal context. In this theory, aformal conceptis defined to be a pair (A,B), whereAis a set of objects (called theextent) andBis a set of attributes (theintent) such thatAis the set of all objects that have all the attributes inB, andBis the set of all attributes shared by all the objects inA.
In this way, formal concept analysis formalizes thesemanticnotions ofextensionandintension.
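The two derivation operators that make this definition work can be sketched directly. The small formal context below is a made-up toy (loosely in the spirit of textbook FCA examples): an object set maps to its attributes, `intent` returns the attributes shared by a set of objects, and `extent` returns the objects having all of a set of attributes; a pair (A, B) is a formal concept exactly when each operator maps one side to the other.

```python
# Hypothetical formal context: objects -> attributes they possess.
context = {
    "frog": {"needs water", "lives in water", "can move"},
    "fish": {"needs water", "lives in water", "can move"},
    "reed": {"needs water", "lives in water"},
    "dog":  {"needs water", "can move"},
}
attributes = set().union(*context.values())

def intent(objects):
    """Attributes shared by all given objects (A -> A')."""
    if not objects:
        return set(attributes)
    return set.intersection(*(context[g] for g in objects))

def extent(attrs):
    """Objects possessing all given attributes (B -> B')."""
    return {g for g, ms in context.items() if attrs <= ms}

A = {"frog", "fish"}
B = intent(A)
print(sorted(B))          # attributes common to frog and fish
print(sorted(extent(B)))  # closing back yields A: (A, B) is a concept
```

Enumerating all such closed pairs and ordering them by inclusion of extents yields the concept lattice described below.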
The formal concepts of any formal context can—as explained below—beorderedin a hierarchy called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which then may be helpful for understanding the data. Often however these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure which is easier to interpret.
The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic.
In his article "Restructuring Lattice Theory" (1982),[1] initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with the lattice theory of the time and with pure mathematics in general: the production of theoretical results—often achieved by "elaborate mental gymnastics"—was impressive, but the connections between neighboring domains, even between parts of a theory, were getting weaker.
Restructuring lattice theory is an attempt to reinvigorate connections with our general culture by interpreting the theory as concretely as possible, and in this way to promote better communication between lattice theorists and potential users of lattice theory.
This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring the sciences with a view to better teaching, and in order to make the sciences mutually available and more generally (i.e., also without specialized knowledge) open to criticism.[2] Hence, by its origins, formal concept analysis aims at interdisciplinarity and democratic control of research.[3]
It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then—and later in model theory—a concept, as a unary predicate, had been reduced to its extent. Now again, the philosophy of concepts should become less abstract by considering the intent. Hence, formal concept analysis is oriented towards the categories extension and intension of linguistics and classical conceptual logic.[4]
Formal concept analysis aims at the clarity of concepts, according to Charles S. Peirce's pragmatic maxim, by unfolding observable, elementary properties of the subsumed objects.[3] In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, by the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic that develops patterns of possible realities and may therefore support rational communication. On this background, Wille defines:
The aim and meaning of Formal Concept Analysis as mathematical theory of concepts and concept hierarchies is to support the rational communication of humans by mathematically developing appropriate conceptual structures which can be logically activated.
The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes.[6] For the purpose here it has been simplified.
The data table represents a formal context; the line diagram next to it shows its concept lattice. Formal definitions follow below.
The above line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow one to read off the subconcept-superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in such a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute.
In the diagram shown, e.g., the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural.
The original formal context can be reconstructed from the labelled diagram, as well as the formal concepts. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle (in the diagram). In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea.
A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes.[4] For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows: A′ := {m ∈ M | gIm for all g ∈ A}, the set of attributes shared by all objects in A, and B′ := {g ∈ G | gIm for all m ∈ B}, the set of objects having all attributes in B.
Applying either derivation operator and then the other constitutes two closure operators: A ↦ A″ = (A′)′ on objects, and B ↦ B″ = (B′)′ on attributes.
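The two derivation operators can be sketched in a few lines of Python. The context below is an assumed, heavily simplified version of the bodies-of-water example:

```python
# Sketch of the two derivation operators for a formal context (G, M, I).
def derive_objects(A, I, M):
    """A' : the attributes shared by every object in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def derive_attributes(B, I, G):
    """B' : the objects that have every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

# Assumed toy context, simplified from the bodies-of-water example.
G = {"puddle", "lake", "sea"}
M = {"stagnant", "natural", "temporary", "maritime"}
I = {("puddle", "stagnant"), ("puddle", "natural"), ("puddle", "temporary"),
     ("lake", "stagnant"), ("lake", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

# Applying one operator and then the other yields a closure:
shared = derive_objects({"puddle", "lake"}, I, M)   # {'stagnant', 'natural'}
closed = derive_attributes(shared, I, G)            # closure of {'puddle', 'lake'}
```

Here the closure of {puddle, lake} grows to {puddle, lake, sea}, since sea also carries all the attributes the first two objects share.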
The derivation operators define a Galois connection between sets of objects and of attributes. This is why a concept lattice is sometimes called, in French, a treillis de Galois (Galois lattice).
With these derivation operators, Wille gave an elegant definition of a formal concept:
a pair (A, B) is a formal concept of a context (G, M, I) provided that A′ = B and B′ = A.
Equivalently and more intuitively, (A, B) is a formal concept precisely when every object in A has every attribute in B, no object outside A has every attribute in B, and no attribute outside B is shared by all objects in A.
For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry k_{i,j} equals 1 if "object i has attribute j". In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is however misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept-forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence.
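As a sketch of the definition, the following Python snippet enumerates all formal concepts of a small assumed context by closing every subset of attributes; this brute-force approach is exponential and only feasible for tiny contexts:

```python
from itertools import combinations

def formal_concepts(G, M, I):
    """Enumerate all formal concepts (A, B) with A' = B and B' = A by
    closing every attribute subset (exponential; for tiny contexts only)."""
    up = lambda A: frozenset(m for m in M if all((g, m) in I for g in A))
    down = lambda B: frozenset(g for g in G if all((g, m) in I for m in B))
    out = set()
    for r in range(len(M) + 1):
        for Y in combinations(sorted(M), r):
            A = down(frozenset(Y))     # extent of the attribute set Y
            out.add((A, up(A)))        # (extent, its intent) is a concept
    return out

# Assumed toy context: three objects, four attributes.
G = {"puddle", "lake", "sea"}
M = {"stagnant", "natural", "temporary", "maritime"}
I = {("puddle", "stagnant"), ("puddle", "natural"), ("puddle", "temporary"),
     ("lake", "stagnant"), ("lake", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

cs = formal_concepts(G, M, I)   # 4 concepts for this context
```

Each returned pair is a maximal all-ones submatrix of the context's (0,1)-matrix representation.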
The concepts (A_i, B_i) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2.
In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have.
These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism).
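Meet and join can be computed directly: intersect the extents (resp. intents) and re-derive the other component. A minimal Python sketch, using an assumed toy context and two of its concepts:

```python
# Meet and join of two concepts; the context and concepts are an assumed toy example.
def meet(c1, c2, M, I):
    """Greatest common subconcept: intersection of extents (itself an
    extent), with the intent re-derived from it."""
    A = c1[0] & c2[0]
    B = frozenset(m for m in M if all((g, m) in I for g in A))
    return (A, B)

def join(c1, c2, G, I):
    """Least common superconcept: intersection of intents, with the
    extent re-derived from it."""
    B = c1[1] & c2[1]
    A = frozenset(g for g in G if all((g, m) in I for m in B))
    return (A, B)

G = frozenset({"puddle", "lake", "sea"})
M = frozenset({"stagnant", "natural", "temporary", "maritime"})
I = {("puddle", "stagnant"), ("puddle", "natural"), ("puddle", "temporary"),
     ("lake", "stagnant"), ("lake", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

c1 = (frozenset({"puddle"}), frozenset({"stagnant", "natural", "temporary"}))
c2 = (frozenset({"sea"}), frozenset({"stagnant", "natural", "maritime"}))

bottom = meet(c1, c2, M, I)   # empty extent, intent = all attributes
top = join(c1, c2, G, I)      # extent = all objects, intent = {'stagnant', 'natural'}
```

For these two incomparable concepts, the meet is the bottom concept and the join is the top concept of the lattice.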
Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming it into the basic type of a ("one-valued") formal context. The method is called conceptual scaling.
The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation. But pairs of attributes which are negations of each other often occur naturally, for example in contexts derived from conceptual scaling.
For possible negations of formal concepts see the section concept algebras below.
An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis,[7] an irredundant set of implications from which all valid implications can be derived by natural inference (the Armstrong rules). This is used in attribute exploration, a knowledge acquisition method based on implications.[8]
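The validity criterion A′ ⊆ B′ is a one-line set comparison; a minimal sketch over an assumed toy context:

```python
# Checking implication validity in a formal context; the toy context is assumed.
def implication_holds(A, B, G, I):
    """A -> B is valid iff A' is a subset of B', i.e. every object having
    all attributes in A also has all attributes in B."""
    ext = lambda Y: {g for g in G if all((g, m) in I for m in Y)}
    return ext(A) <= ext(B)

G = {"puddle", "lake", "sea"}
I = {("puddle", "stagnant"), ("puddle", "natural"), ("puddle", "temporary"),
     ("lake", "stagnant"), ("lake", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

ok = implication_holds({"temporary"}, {"stagnant", "natural"}, G, I)   # True
bad = implication_holds({"stagnant"}, {"temporary"}, G, I)             # False
```

In this context the only temporary object (puddle) is also stagnant and natural, so the first implication holds, while the converse fails.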
Formal concept analysis has elaborate mathematical foundations,[4] making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: for g ∈ G and m ∈ M let
and dually
Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice.
Temporal concept analysis (TCA) is an extension of Formal Concept Analysis (FCA) aiming at a conceptual description of temporal phenomena. It provides animations in concept lattices obtained from data about changing objects. It offers a general way of understanding change of concrete or abstract objects in continuous, discrete or hybrid space and time. TCA applies conceptual scaling to temporal data bases.[14]
In the simplest case TCA considers objects that change in time like a particle in physics, which, at each time, is at exactly one place. That happens in those temporal data where the attributes 'temporal object' and 'time' together form a key of the data base. Then the state (of a temporal object at a time in a view) is formalized as a certain object concept of the formal context describing the chosen view. In this simple case, a typical visualization of a temporal system is a line diagram of the concept lattice of the view into which trajectories of temporal objects are embedded.[15]
TCA generalizes the above mentioned case by considering temporal data bases with an arbitrary key. That leads to the notion of distributed objects which are at any given time at possibly many places, as for example, a high pressure zone on a weather map. The notions of 'temporal objects', 'time' and 'place' are represented as formal concepts in scales. A state is formalized as a set of object concepts.
That leads to a conceptual interpretation of the ideas of particles and waves in physics.[16]
There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov[17] or the book by Ganter and Obiedkov,[8] where some pseudo-code can also be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms is usually given with respect to the output size. Concept lattices with a few million elements can be handled without problems.
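One widely used algorithm of this kind is Ganter's NextClosure, which enumerates all closed attribute sets (concept intents) in lectic order. The sketch below is a simplified Python rendering, not taken from the cited survey, applied to an assumed toy context:

```python
# A simplified sketch of Ganter's NextClosure algorithm; the toy context is assumed.
def next_closure(A, attrs, close):
    """Return the lectically next closed attribute set after A, given a
    fixed linear order `attrs` on the attributes, or None if A is last."""
    for i in range(len(attrs) - 1, -1, -1):
        m = attrs[i]
        if m in A:
            continue
        prefix = set(attrs[:i])
        B = close((A & prefix) | {m})
        if B & prefix <= A:   # no attribute smaller than m was added
            return B
    return None

def all_intents(attrs, close):
    """Every closed attribute set (concept intent), in lectic order."""
    A = close(set())
    out = [frozenset(A)]
    while (A := next_closure(A, attrs, close)) is not None:
        out.append(frozenset(A))
    return out

G = {"puddle", "lake", "sea"}
attrs = ["stagnant", "natural", "temporary", "maritime"]
I = {("puddle", "stagnant"), ("puddle", "natural"), ("puddle", "temporary"),
     ("lake", "stagnant"), ("lake", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

def close(Y):
    """Y'' : close an attribute set via the two derivation operators."""
    ext = {g for g in G if all((g, m) in I for m in Y)}
    return {m for m in attrs if all((g, m) in I for g in ext)}

intents = all_intents(attrs, close)   # one intent per formal concept
```

Unlike the brute-force enumeration of all attribute subsets, NextClosure visits each closed set exactly once, so its running time scales with the output size.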
Many FCA software applications are available today.[18] The main purpose of these tools varies from formal context creation to formal concept mining and generating the concept lattice of a given formal context and the corresponding implications and association rules. Most of these tools are academic open-source applications, such as:
A formal context can naturally be interpreted as a bipartite graph. The formal concepts then correspond to the maximal bicliques in that graph. The mathematical and algorithmic results of formal concept analysis may thus be used for the theory of maximal bicliques. The notion of bipartite dimension (of the complemented bipartite graph) translates[4] to that of Ferrers dimension (of the formal context) and of order dimension (of the concept lattice) and has applications e.g. for Boolean matrix factorization.[25]
Given an object-attribute numerical data table, the goal of biclustering is to group together some objects having similar values of some attributes. For example, in gene expression data, it is known that genes (objects) may share a common behavior only for a subset of biological situations (attributes): one should accordingly produce local patterns to characterize biological processes, and these patterns may overlap, since a gene may be involved in several processes. The same remark applies to recommender systems, where one is interested in local patterns characterizing groups of users that strongly share almost the same tastes for a subset of items.[26]
A bicluster in a binary object-attribute data table is a pair (A, B) consisting of an inclusion-maximal set of objects A and an inclusion-maximal set of attributes B such that almost all objects from A have almost all attributes from B and vice versa.
Of course, formal concepts can be considered as "rigid" biclusters where all objects have all attributes and vice versa. Hence, it is not surprising that some bicluster definitions coming from practice[27] are just definitions of a formal concept.[28] Relaxed FCA-based versions of biclustering and triclustering include OA-biclustering[29] and OAC-triclustering[30] (here O stands for object, A for attribute, C for condition); to generate patterns, these methods use the prime operators only once, applied to a single entity (e.g. an object) or a pair of entities (e.g. attribute-condition), respectively.
A bicluster of similar values in a numerical object-attribute data table is usually defined[31][32][33] as a pair consisting of an inclusion-maximal set of objects and an inclusion-maximal set of attributes having similar values for the objects. Such a pair can be represented as an inclusion-maximal rectangle in the numerical table, modulo row and column permutations. In [28] it was shown that biclusters of similar values correspond to triconcepts of a triadic context where the third dimension is given by a scale that represents numerical attribute values by binary attributes.
This fact can be generalized to the n-dimensional case, where n-dimensional clusters of similar values in n-dimensional data are represented by (n+1)-dimensional concepts. This reduction allows one to use standard definitions and algorithms from multidimensional concept analysis[33][10] for computing multidimensional clusters.
In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context.
Formal concept analysis can be used as a qualitative method for data analysis. Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005),[34] including the fields of medicine and cell biology,[35][36] genetics,[37][38] ecology,[39] software engineering,[40] ontology,[41] information and library sciences,[42][43][44] office administration,[45] law,[46][47] linguistics,[48] and political science.[49]
Many more examples are described, e.g., in Formal Concept Analysis. Foundations and Applications,[34] and in conference papers at regular conferences such as the International Conference on Formal Concept Analysis (ICFCA),[50] Concept Lattices and their Applications (CLA),[51] or the International Conference on Conceptual Structures (ICCS).[52]
https://en.wikipedia.org/wiki/Formal_concept_analysis
In philosophy, the term formal ontology is used to refer to an ontology defined by axioms in a formal language, with the goal of providing an unbiased (domain- and application-independent) view on reality, which can help the modeler of domain- or application-specific ontologies avoid possibly erroneous ontological assumptions encountered in modeling large-scale ontologies.
By maintaining an independent view on reality, a formal (upper level) ontology gains the following properties:
Theories on how to conceptualize reality date back as far as Plato and Aristotle. The term 'formal ontology' itself was coined by Edmund Husserl in the second edition of his Logical Investigations (1900–01), where it refers to an ontological counterpart of formal logic. Formal ontology for Husserl embraces an axiomatized mereology and a theory of dependence relations, for example between the qualities of an object and the object itself. 'Formal' signifies not the use of a formal-logical language, but rather: non-material, or in other words domain-independent (of universal application). Husserl's ideas on formal ontology were developed especially by his Polish student Roman Ingarden in his Controversy over the Existence of the World.[1] The relations between the Husserlian tradition of formal ontology and the Polish tradition of mereology are set forth in Parts and Moments. Studies in Logic and Formal Ontology,[2] edited by Barry Smith.
The differences in terminology used between separate formal upper-level ontologies can be quite substantial, but most formal upper-level ontologies apply one foremost dichotomy: that between endurants and perdurants.
Also known as continuants, or in some cases as "substance", endurants are those entities that can be observed or perceived as a complete concept at any given snapshot of time.
Were we to freeze time we would still be able to perceive/conceive the entire endurant.
Examples include material objects (such as an apple or a human), and abstract "fiat" objects (such as an organization, or the border of a country).
Also known as occurrents, accidents or happenings, perdurants are those entities for which only a part exists if we look at them at any given snapshot in time.
When we freeze time we can only see a part of the perdurant. Perdurants are often what we know as processes, for example: "running". If we freeze time then we only see a part of the running; without any previous knowledge one might not even be able to identify the actual process as a process of running. Other examples include an activation, a kiss, or a procedure.
In a broad sense, qualities can also be known as properties or tropes.
Qualities do not exist on their own, but need another entity (in many formal ontologies this entity is restricted to be an endurant) which they occupy. Examples of qualities and the values they assume include colors (red color), or temperatures (warm).
Most formal upper-level ontologies recognize qualities, attributes, tropes, or something related, although the exact classification may differ. Some see qualities and the values they can assume (sometimes called quale) as a separate hierarchy besides endurants and perdurants (example: DOLCE). Others classify qualities as a subsection of endurants, e.g. the dependent endurants (example: BFO). Others consider property-instances or tropes, single characteristics of individuals, as the atoms of the ontology, the simplest entities of which all other entities are composed, so that all entities are sums or bundles of tropes.
In information science an ontology is formal if it is specified in a formal language; otherwise it is informal.
In philosophy, a separate distinction between formal and nonformal ontologies exists, which does not relate to the use of a formal language.
An ontology might contain a concept representing 'mobility of the arm'. In a nonformal ontology, a concept like this can often be classified as for example a 'finding of the arm', right next to other concepts such as 'bruising of the arm'. This method of modeling might create problems with increasing amounts of information, as there is no foolproof way to keep hierarchies like this, or their descendant hierarchies (one is a process, the other is a quality) from entangling or knotting.
In a formal ontology, there is an optimal way to properly classify this concept: it is a kind of 'mobility', which is a kind of quality/property (see above). As a quality, it is said to inhere in independent endurant entities (see above); as such, it cannot exist without a bearer (in this case, the arm).
Having a formal ontology at your disposal, especially one consisting of a formal upper layer enriched with concrete domain-independent 'middle layer' concepts, can greatly aid the creation of a domain-specific ontology.
It allows the modeller to focus on the content of the domain-specific ontology without having to worry about the exact higher structure or abstract philosophical framework that gives his ontology a rigid backbone. Disjoint axioms at the higher level will prevent many of the ontological mistakes commonly made when creating the detailed layer of the ontology.
Aligning terminologies and ontologies is not an easy task. The divergence of the underlying meaning of word descriptions and terms within different information sources is a well-known obstacle for direct approaches to data integration and mapping. One single description may have a completely different meaning in one data source when compared with another. This is because different databases/terminologies often have a different viewpoint on similar items. They are usually built with a specific application perspective in mind, and their hierarchical structure represents this.
A formal ontology, on the other hand, represents entities without a particular application scope. Its hierarchy reflects ontological principles and a basic class-subclass relation between its concepts. A consistent framework like this is ideal for crossmapping data sources.
However, one cannot just integrate these external data sources in the formal ontology. A direct incorporation would lead to corruption of the framework and principles of the formal ontology.
A formal ontology is a great crossmapping hub only if a complete distinction between the content and structure of the external information sources and the formal ontology itself is maintained. This is possible by specifying a mapping relation between concepts from a chaotic external information source and a concept in the formal ontology that corresponds with the meaning of the former concept.
Where two or more external information sources map to one and the same formal ontology concept a crossmapping/translation is achieved, as you know that those concepts—no matter what their phrasing is—mean the same thing.
In ontologies designed to serve natural language processing (NLP) and natural language understanding (NLU) systems, ontology concepts are usually connected to and symbolized by terms. This kind of connection represents a linguistic realization. Terms are words or combinations of words (multi-word units), in different languages, used to describe in natural language an element from reality, and hence connected to the formal ontology concept that frames this element in reality.
The lexicon, the collection of terms and their inflections assigned to the concepts and relationships in an ontology, forms the 'ontology interface to natural language', the channel through which the ontology can be accessed from a natural language input.
A key advantage of a formal ontology, in contrast to rigid taxonomies or classifications, is that it allows for indefinite expansion. Given proper modeling, just about any kind of conceptual information, no matter the content, can find its place.
To disambiguate a concept's place in the ontology, a context model is often useful to improve the classification power. The model typically applies rules to surrounding elements of the context to select the most valid classification.
https://en.wikipedia.org/wiki/Formal_ontology
The General Concept Lattice (GCL) proposes a novel general construction of a concept hierarchy from a formal context, in which the conventional Formal Concept Lattice (FCL) based on Formal Concept Analysis (FCA) only serves as a substructure.[1][2][3]
The formal context is a data table of heterogeneous relations illustrating how objects carry attributes. By analogy with a truth-value table, every formal context can develop its fully extended version, including all the columns corresponding to attributes constructed, by means of Boolean operations, out of the given attribute set. The GCL is based on the extended formal context, which comprehends the full information content of the formal context in the sense that it incorporates whatever the formal context should consistently imply. Noteworthily, different formal contexts may give rise to the same extended formal context.[4]
The GCL[4] claims to take into account the extended formal context for preservation of information content. Consider describing a three-ball system (3BS) with three distinct colours, e.g., a := red, b := green and c := blue. According to Table 1, one may refer to different attribute sets, say, M = {a, b, c}, M1 = {a or b, b or c, c or a} or M2 = {a or b, b or c, c}, to reach different formal contexts. The concept hierarchy for the 3BS is supposed to be unique regardless of how the 3BS is described. However, the FCA exhibits different formal concept lattices subject to the chosen formal contexts for the 3BS, see Fig. 1. In contrast, the GCL is an invariant lattice structure with respect to these formal contexts since they can infer each other and ultimately entail the same information content.
In information science, Formal Concept Analysis (FCA) promises practical applications in various fields based on the following fundamental characteristics.
The FCL does not appear to be the only lattice applicable to the interpretation of a data table. Alternative concept lattices subject to different derivation operators, based on notions relevant to Rough Set Analysis, have also been proposed.[7][8][9] Specifically, the object-oriented concept lattice,[9] which is afterwards referred to as the rough set lattice[4] (RSL), is found to be particularly instructive in supplementing the standard FCA with further understandings of the formal context.
Consequently, there are two crucial points to be contemplated.
The GCL accomplishes a sound theoretical foundation for the concept hierarchies acquired from a formal context.[4] Maintaining the generality that preserves the information, the GCL underlies both the FCL and the RSL, which correspond to substructures at particular restrictions. Technically, the GCL reduces to the FCL and the RSL when restricted to conjunctions and disjunctions of elements in the referred attribute set (M), respectively. In addition, the GCL unveils extra information complementary to the results obtained via the FCL and RSL. Surprisingly, the implementation of a formal context via the GCL is much more manageable than those via the FCL and RSL.
The derivation operators constitute the building blocks of concept lattices and thus deserve distinctive notations. Subject to a formal context concerning the object set G and attribute set M,
are considered as different modal operators[7][8] (sufficiency, necessity and possibility, respectively) that generalise the FCA. As for notation, I, the operator adopted in the standard FCA,[1][2][3] follows Bernhard Ganter[10] and R. Wille;[1] □ and ◊ as well as R follow Y. Y. Yao.[9] By gRm, i.e., (g, m) ∈ R, the object g carries the attribute m as its property, which is also written g ∈ m^R, where m^R is the set of all objects carrying the attribute m.
With X, X1, X2 ⊆ G and X^c := G \ X, it is straightforward to check that
where the same relations hold if given in terms of Y, Y1, Y2 ⊆ M and Y^c := M \ Y.
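The three modal operators on object sets can be sketched in a few lines of Python; the function names and the three-ball context below are illustrative assumptions. The last line checks the duality between necessity and possibility under complementation:

```python
# Sketch of the three modal derivation operators on object sets.
def sufficiency(X, ctx):   # X^I : attributes carried by every object in X
    G, M, R = ctx
    return {m for m in M if all((g, m) in R for g in X)}

def necessity(X, ctx):     # X^□ : attributes m whose whole extent m^R lies in X
    G, M, R = ctx
    return {m for m in M if all(g in X for g in G if (g, m) in R)}

def possibility(X, ctx):   # X^◊ : attributes m whose extent m^R meets X
    G, M, R = ctx
    return {m for m in M if any((g, m) in R for g in X)}

# Assumed three-ball context: each ball carries exactly one colour.
G = {"ball1", "ball2", "ball3"}
M = {"red", "green", "blue"}
R = {("ball1", "red"), ("ball2", "green"), ("ball3", "blue")}
ctx = (G, M, R)

X = {"ball1", "ball2"}
suff = sufficiency(X, ctx)         # set(): no colour common to both balls
nec = necessity(X, ctx)            # {'red', 'green'}
poss = possibility(X, ctx)         # {'red', 'green'}
dual = M - necessity(G - X, ctx)   # duality: X^◊ = M \ (X^c)^□
```

The example shows how the three operators can disagree on the same object set: sufficiency is empty while necessity and possibility both return the two carried colours.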
From the above algebras, there exist different types of Galois connections, e.g.,
and (3) X ⊆ Y^□ ⟺ X^◊ ⊆ Y, which corresponds to (2) when one replaces X with X^c and Y with Y^c. Note that (1) and (2) enable different object-oriented constructions for the concept hierarchies FCL and RSL, respectively, while (3) corresponds to the attribute-oriented construction[9] in which the roles of object and attribute in the RSL are exchanged. The FCL and RSL apply to different 2-tuple (X, Y) concept collections that manifest different well-defined partial orderings.
Given as a concept, the 2-tuple (X, Y) is in general constituted by an extent X ⊆ G and an intent Y ⊆ M, which should be distinguished when applied to the FCL and the RSL. The concept (X, Y)_fcl is furnished by X^I = Y and Y^I = X based on (1), while (X, Y)_rsl is furnished by X^□ = Y and Y^◊ = X based on (2). In essence, there are two Galois lattices based on different orderings of the two collections of concepts as follows.
Every attribute listed in the formal context provides an extent for the FCL and the RSL simultaneously, via the object set carrying the attribute. Though the extents for the FCL and for the RSL do not coincide totally, every m^R for m ∈ M is known to be a common extent of the FCL and the RSL. This follows from the main results for the FCL (the fundamental theorem of formal concept analysis) and the RSL: every Y^I (Y ⊆ M) is an extent for the FCL[1][2][3] and Y^◊ is an extent for the RSL.[9] Note that[4] choosing Y = {m} gives rise to Y^I = Y^◊ = m^R.
The consideration of the attribute set-to-set implication A →_fcl B (A, B ⊆ M) via the FCL has an intuitive interpretation:[6] every object possessing all the attributes in A possesses all the attributes in B, in other words A^I ⊆ B^I. Alternatively, one may consider A →_rsl B based on the RSL in a similar manner:[4] the set of all objects carrying any of the attributes in A is contained in the set of all objects carrying any of the attributes in B, in other words A^◊ ⊆ B^◊. It is apparent that A →_fcl B and A →_rsl B relate different pairs of attribute sets and are incapable of expressing each other.
For every formal context one may acquire its extended version, deduced in the sense of completing a truth-value table. It is instructive to explicitly label the object/attribute dependence of the formal context,[4] say, F(G, M) := (G, M, I) rather than K := (G, M, I), since one may have to investigate more than one formal context. As is illustrated in Table 1, F_3BS(G, M) can be employed to deduce the extended version F*_3BS(G, M*), where M* is the set of all attributes constructed out of elements in M by means of Boolean operations. Note that F_3BS(G, M) includes three columns, reflecting the use of M = {a, b, c}, and F1(G, M1) the attribute set M1 = {a or b, b or c, c or a}.
The FCL and RSL will not be altered if their intents are interpreted as single attributes.[4]
Here, the dot product · (∏) stands for conjunction (the dot is often omitted for compactness) and the summation + (∑) for disjunction, which are notations in the Curry-Howard style. Note that the orderings become
Concerning the implications extracted from the formal context,
Note that μ1^R ⊆ μ2^R turns out to be trivial if μ1 ≤ μ2, which entails μ1 = μ1 · μ2. Intuitively,[4] every object carrying μ1 is an object carrying μ2, which means the implication: any object having the property μ1 must also have the property μ2. In particular,
whereAI⊆BI{\displaystyle A^{I}\subseteq B^{I}}andA◊⊆B◊{\displaystyle A^{\Diamond }\subseteq B^{\Diamond }}collapse intoμ1R⊆μ2R{\displaystyle \mu _{1}^{R}\subseteq \mu _{2}^{R}}.
When extended toF∗(G,M∗){\textstyle F^{\ast }(G,M^{\ast })}, the algebras ofderivation operatorsremainformallyunchanged, apart from the generalisationfromm∈M{\textstyle m\in M}toμ∈M∗{\textstyle \mu \in M^{\ast }}which is signified in terms of[4]thereplacementsIbyI∗{\displaystyle I{\mbox{ by }}I^{\ast }},◻by◻∗{\displaystyle \Box {\mbox{ by }}\Box ^{\ast }}and◊by◊∗{\displaystyle \Diamond {\mbox{ by }}\Diamond ^{\ast }}. The concepts under consideration become then(X,Y)fcl∗{\textstyle (X,Y)_{fcl}^{\ast }}and(X,Y)rsl∗{\textstyle (X,Y)_{rsl}^{\ast }}, whereX⊆G{\displaystyle X\subseteq G}andY⊆M∗{\displaystyle Y\subseteq M^{\ast }}, which are constructions allowable by thetwo Galois connectionsi.e.X⊆YI∗⟺Y⊆XI∗{\displaystyle X\subseteq Y^{I^{\ast }}\iff Y\subseteq X^{I^{\ast }}}andY◊∗⊆X⟺Y⊆X◻∗{\textstyle Y^{\Diamond ^{\ast }}\subseteq X\iff Y\subseteq X^{\Box ^{\ast }}}, respectively. Henceforth,
The extents for the two concepts nowcoincide exactly. All the attributes inM∗{\textstyle M^{\ast }}are listed inthe formal contextF∗(G,M∗){\textstyle F^{\ast }(G,M^{\ast })}, each contributes acommon extentfor FCL and RSL.Furthermore, the collection of these common extentsEF:={μR∣μ∈M∗}{\textstyle E_{F}:=\{\mu ^{R}\mid \mu \in M^{\ast }\}}amounts to{⋃k∈JDk∣J⊆{1…nF}}{\textstyle \{\bigcup _{k\in J}D_{k}\mid J\subseteq \{1\ldots n_{F}\}\}}which exhausts all the possible unions of theminimal object sets discernible by the formal context. Note that eachDk{\displaystyle D_{k}}collectsobjects of the same property, seeTable 2. One may then join(X,Y)fcl∗{\textstyle (X,Y)_{fcl}^{\ast }}and(X,Y)rsl∗{\textstyle (X,Y)_{rsl}^{\ast }}into a 3-tuple with common extent:
Note thatYfcl∗andYrsl∗{\textstyle Y^{fcl\ast }{\mbox{ and }}Y^{rsl\ast }}are introduced in order to differentiate the two intents. Clearly, the number of these 3-tuples equals the cardinality of the set of common extents, which counts|EF|=2nF{\textstyle |E_{F}|=2^{n_{F}}}. Moreover,(X,Yfcl∗,Yrsl∗){\textstyle (X,Y^{fcl\ast },Y^{rsl\ast })}manifests a well-defined ordering. ForX1,X2∈EF⊆G{\textstyle X_{1},X_{2}\in E_{F}\subseteq G\ }, whereY1fcl∗,Y2fcl∗⊂M∗{\textstyle {Y_{1}^{fcl\ast }},{Y_{2}^{fcl\ast }}\subset M^{\ast }}andY1rsl∗,Y2rsl∗⊂M∗{\textstyle {Y_{1}^{rsl\ast }},{Y_{2}^{rsl\ast }}\subset M^{\ast }},
While it isgenericallyimpossible to determineYfcl∗andYrsl∗{\textstyle Y^{fcl\ast }{\mbox{ and }}Y^{rsl\ast }}subject toX∈EF⊆G{\textstyle X\in E_{F}\subseteq G}, the structure of concept hierarchy need not rely on these intents directly. An efficient way[4]to implement the concept hierarchy for(X,Yfcl∗,Yrsl∗){\textstyle (X,Y^{fcl\ast },Y^{rsl\ast })}is to consider intents in terms of single attributes.
Let henceforthη(X):=∏Yfcl∗{\textstyle \eta (X):=\prod Y^{fcl\ast }}andρ(X):=∑Yrsl∗{\textstyle \rho (X):=\sum Y^{rsl\ast }}. Upon introducing[X]F:={μ∈M∗∣μR=X}{\textstyle [X]_{F}:=\{\mu \in M^{\ast }\mid \mu ^{R}=X\}}, one may check that∏[X]F=∏Yfcl∗{\textstyle \prod [X]_{F}=\prod Y^{fcl\ast }}and∑[X]F=∑Yrsl∗{\textstyle \sum [X]_{F}=\sum Y^{rsl\ast }},∀X∈EF{\textstyle \forall X\in E_{F}}. Therefore,
which is aclosed intervalbounded from below byη(X){\textstyle \eta (X)}and from above byρ(X){\textstyle \rho (X)}since∀μμR=X⟹η(X)≤μ≤ρ(X){\displaystyle \forall \mu \ \mu ^{R}=X\implies \eta (X)\leq \mu \leq \rho (X)}. Moreover,
In addition,⋃X∈EF[X]F=M∗{\textstyle \bigcup _{X\in E_{F}}[X]_{F}=M^{\ast }}, namely, the collection of intents[X]F{\textstyle [X]_{F}}exhausts all the generalised attributesM∗{\textstyle M^{\ast }}, in comparison to⋃X∈EFX=G{\textstyle \bigcup _{X\in E_{F}}X=G}. Then, theGCLenters as the lattice structureΓF:=(LF,∧,∨){\textstyle \Gamma _{F}:=(L_{F},\wedge ,\vee )}based on the formal context viaF∗(G,M∗){\textstyle F^{\ast }(G,M^{\ast })}:
Efficient algorithms are known for the construction of the FCL,[11][12]whereas the construction of the RSL has so far received little attention. Intriguingly, though theGCLfurnishes the general structure on which both the FCL and RSL can be rediscovered, theGCLcan be acquired via simplereadout.
The completion ofGCL[4]is equivalent to the completion of the intents ofGCLin terms of the lower and upper bounds.
The above enables the determinations of the intents depicted as inFig. 3for the 3BS given byTable 1, where one can read out thatη({1})=a¬b¬c{\textstyle \eta (\{1\})=a\neg b\neg c},η({2})=¬ab¬c{\textstyle \eta (\{2\})=\neg ab\neg c}andη({3})=¬a¬bc{\textstyle \eta (\{3\})=\neg a\neg bc}. Hence, e.g.,ρ({1,2})=¬η({3})=a+b+¬c{\textstyle \rho (\{1,2\})=\neg \eta (\{3\})=a+b+\neg c},η({1,2})=a¬b¬c+¬ab¬c=¬ρ({3}){\textstyle \eta (\{1,2\})=a\neg b\neg c+\neg ab\neg c=\neg \rho (\{3\})}. Note that theGCLalso appears to be aHasse diagramdue to the resemblance of its extents to apower set. Moreover, each intent[X]F=[η(X),ρ(X)]{\textstyle [X]_{F}=[\eta (X),\rho (X)]}atX{\textstyle X}also exhibits another Hasse diagram isomorphic to the ordering of attributes in the closed interval[0,0ρ]{\textstyle [{\bf {0}},0_{\rho }]}. It can be shown that∀X∈EFρ(X)=η(X)+0ρ{\textstyle \forall X\in E_{F}\ \rho (X)=\eta (X)+0_{\rho }}where0ρ:=¬1η≡ρ(∅){\textstyle 0_{\rho }:=\neg 1_{\eta }\equiv \rho (\emptyset )}with1η:=∑k=1nFη(Dk)≡η(G){\textstyle 1_{\eta }:=\sum _{k=1}^{n_{F}}\eta (D_{k})\equiv \eta (G)}. Hence,[X]F={η(X)+τ∣0≤τ≤0ρ}{\textstyle [X]_{F}=\{\eta (X)+\tau \mid {\bf {0}}\leq \tau \leq 0_{\rho }\}}making the cardinality|[X]F|{\textstyle |[X]_{F}|}a constant given as22|M|−nF{\textstyle 2^{2^{|M|}-n_{F}}}. Clearly, one may check thatρ({1,2})=¬η({3})=η({1,2})+0ρ{\textstyle \rho (\{1,2\})=\neg \eta (\{3\})=\eta (\{1,2\})+0_{\rho }}.
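The counting claims can be verified by brute force for a small context. In this sketch (an illustrative encoding, not the article's: a generalised attribute is represented by its truth table over the eight Boolean rows of (a, b, c)), the 3BS yields 2^{n_F} = 8 common extents, each class [X]_F of the constant size 2^{2^|M|−n_F} = 32:

```python
from itertools import product

# Brute-force check of the counting claims for the 3BS; the encoding of
# generalised attributes as truth tables is an illustrative assumption.
objects = {1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}  # one colour each
rows = list(product((0, 1), repeat=3))                # 8 rows of (a,b,c)

def extent(mu):
    """mu^R: objects whose attribute tuple satisfies mu."""
    return frozenset(g for g, t in objects.items() if mu[rows.index(t)])

classes = {}
for mu in product((0, 1), repeat=len(rows)):  # all 2^8 = 256 functions
    classes.setdefault(extent(mu), []).append(mu)

print(len(classes))                        # 8   = 2^{n_F}
print({len(v) for v in classes.values()})  # {32} = {2^{2^|M| - n_F}}
```

Every subset of objects occurs as a common extent, and each equivalence class has the same cardinality, in agreement with the formulae above.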
TheGCLunderlies theoriginalFCL and RSL subject toF(G,M){\textstyle F(G,M)}, as one can tell fromη(X)=∏Yfcl∗{\textstyle \eta (X)=\prod Y^{fcl\ast }}andρ(X)=∑Yrsl∗{\textstyle \rho (X)=\sum Y^{rsl\ast }}. To rediscover a node for FCL, one looks for aconjunction of attributes inM{\textstyle M}contained in[X]F{\textstyle [X]_{F}}, which can be identified within theconjunctive normal formofη(X){\textstyle \eta (X)}if it exists. Likewise, for the RSL one looks for adisjunction of attributes inM{\textstyle M}contained in[X]F{\textstyle [X]_{F}}, which can be found within thedisjunctive normal formofρ(X){\textstyle \rho (X)}, seeFig 3.
For instance, from the node({3},[{3}]F){\textstyle (\{3\},[\{3\}]_{F})}on theGCL, one finds thatη({3})=¬a¬bc≤c{\textstyle \eta (\{3\})=\neg a\neg bc\leq c}≤(a+¬b+c)(¬a+b+c){\textstyle \leq (a+\neg b+c)(\neg a+b+c)}=ρ({3}){\textstyle =\rho (\{3\})}. Note thatc{\textstyle c}appears to be theonlyattribute belonging to[{3}]F{\textstyle [\{3\}]_{F}}, which is simultaneously a conjunction and a disjunction. Therefore, both the FCL and RSL have the concept({3},{c}){\textstyle (\{3\},\{c\})}in common. To illustrate a different situation,ρ({1,3})=(a+¬b+c)≥a+c{\textstyle \rho (\{1,3\})=(a+\neg b+c)\geq a+c}≥a¬b¬c+¬a¬bc{\textstyle \geq a\neg b\neg c+\neg a\neg bc}=η({1,3}){\textstyle =\eta (\{1,3\})}. Apparently,a+c{\textstyle a+c}is the attribute emerging as disjunction of elements inM{\textstyle M}which belongs to[{1,3}]F{\textstyle [\{1,3\}]_{F}}, in which no attribute composed by conjunction of elements inM{\textstyle M}is found. Hence,{1,3}{\textstyle \{1,3\}}cannot be an extent of the FCL; it only constitutes the concept({1,3},{a,c}){\textstyle (\{1,3\},\{a,c\})}for the RSL.
Non-tautological implication relations signify the information contained in the formal context and are referred to asinformative implications.[6]In general,μ1R⊆μ2R{\textstyle \mu _{1}^{R}\subseteq \mu _{2}^{R}}entails the implicationμ1→μ2{\textstyle \mu _{1}\rightarrow \mu _{2}}. The implication is informative if it isnotμ1≤μ2{\textstyle not\ \mu _{1}\leq \mu _{2}}(i.e.μ1≠μ1⋅μ2{\textstyle \mu _{1}\neq \mu _{1}\cdot \mu _{2}}).
In case it is strictlyμ1R⊂μ2R{\textstyle \mu _{1}^{R}\subset \mu _{2}^{R}}, one hasμ1R=μ1R∩μ2R=(μ1⋅μ2)R{\textstyle \mu _{1}^{R}=\mu _{1}^{R}\cap \mu _{2}^{R}=(\mu _{1}\cdot \mu _{2})^{R}}whereμ1R∩μ2R⊂μ2R{\textstyle \mu _{1}^{R}\cap \mu _{2}^{R}\subset \mu _{2}^{R}}. Then,μ1→μ2{\textstyle \mu _{1}\rightarrow \mu _{2}}can be replaced by means ofμ1↔μ1⋅μ2{\textstyle \mu _{1}\leftrightarrow \mu _{1}\cdot \mu _{2}}together with the tautologyμ1⋅μ2⟹μ2{\textstyle \mu _{1}\cdot \mu _{2}\implies \mu _{2}}. Therefore, what remains to be taken into account is the equivalenceμR=νR=X{\textstyle \mu ^{R}=\nu ^{R}=X}for someX∈EF{\textstyle X\in E_{F}}. Logically, both attributes are properties carried by the same object class,μ↔ν{\textstyle \mu \leftrightarrow \nu }reflects that equivalence relation.
All attributes in[X]F{\textstyle [X]_{F}}must be mutually implied,[4]which can be implemented, e.g., by∀μ∈[X]Fμ→η(X){\textstyle \forall \mu \in [X]_{F}\ \mu \rightarrow \eta (X)}(in fact,μ↔η(X){\textstyle \mu \leftrightarrow \eta (X)}whereη(X)→μ{\textstyle \eta (X)\rightarrow \mu }is a tautology), i.e., all attributes are equivalent to the lower bound of intent.
Extraction of the implications of typeA→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}from the formal context is known to be complicated:[13][14][15][16][17]it necessitates the construction of acanonical basis, an effort which doesnotapply to the implications of typeA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}. By contrast, the above equivalence only proposes[4]
Hence, purely algebraic formulae can be employed to determine the implication relations; one need not consult the object-attribute dependence in the formal context, which is the typical effort in finding the canonical basis.
Remarkably,1η{\textstyle 1_{\eta }}and0ρ{\textstyle 0_{\rho }}are referred to as thecontextual truthandfalsity, respectively.∀X∈EF{\textstyle \forall X\in E_{F}}0ρ+ρ(X)=ρ(X){\textstyle 0_{\rho }+\rho (X)=\rho (X)}and0ρ⋅ρ(X)=0ρ{\textstyle 0_{\rho }\cdot \rho (X)=0_{\rho }}as well as1η⋅η(X)=η(X){\textstyle 1_{\eta }\cdot \eta (X)=\eta (X)}and1η+η(X)=1η{\textstyle 1_{\eta }+\eta (X)=1_{\eta }}similar to theconventional truth1andfalsity0that can be identified withρ(G){\textstyle \rho (G)}andη(∅){\textstyle \eta (\emptyset )}, respectively.
A→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}andA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}are found to be particular forms ofμ1→μ2{\textstyle \mu _{1}\rightarrow \mu _{2}}. AssumeA={a1,a2,…}⊆M{\textstyle A=\{a_{1},a_{2},\ldots \}\subseteq M}andB={b1,b2,…}⊆M{\textstyle B=\{b_{1},b_{2},\ldots \}\subseteq M}for both cases. ByA→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}, an object set carrying all the attributes inA{\textstyle A}implies carrying all the attributes inB{\textstyle B}simultaneously, i.e.∏iai→∏ibi{\textstyle \prod _{i}a_{i}\rightarrow \prod _{i}b_{i}}. ByA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}, an object set carryinganyof the attributes inA{\textstyle A}implies carryingsomeof the attributes inB{\textstyle B}, therefore∑iai→∑ibi{\textstyle \sum _{i}a_{i}\rightarrow \sum _{i}b_{i}}. Notably, the point of viewconjunction-to-conjunctionhas also been emphasised by Ganter[5]while dealing with the attribute exploration.
One could overlook significant parts of the logic content in formal context were itnotfor the consideration based on the GCL. Here, the formal context describing 3BS given inTable 1suggests an extreme case where no implication of the typeA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}could be found. Nevertheless, one ends up with, e.g.,{a,b}→fcl{a,b,c}{\textstyle \{a,b\}{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}\{a,b,c\}}(or{a,b}→fcl{c}{\textstyle \{a,b\}{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}\{c\}}), whose meaning appears to be ambiguous. Though it is true thatab→abc{\textstyle ab\rightarrow abc}, one also notices that(ab)R={a,b}I=∅{\textstyle (ab)^{R}=\{a,b\}^{I}=\emptyset }as well as(abc)R={a,b,c}I{\textstyle (abc)^{R}=\{a,b,c\}^{I}}=∅{\textstyle =\emptyset }. Indeed, by using theabove formulawith the1η{\textstyle 1_{\eta }}provided inFig. 2it can be seen thatab⋅1η≡0{\textstyle ab\cdot 1_{\eta }\equiv {\bf {0}}}≡abc⋅1η{\textstyle \equiv abc\cdot 1_{\eta }}, hence it isab↔0{\textstyle ab\leftrightarrow {\bf {0}}}andabc↔0{\textstyle abc\leftrightarrow {\bf {0}}}that underlieab→abc{\textstyle ab\rightarrow abc}.
Remarkably, the same formula will lead to (1)a→a¬b¬c{\textstyle a\rightarrow a\neg b\neg c}(ora→¬b¬c{\textstyle a\rightarrow \neg b\neg c}) and (2)¬b¬c→¬b¬ca{\textstyle \neg b\neg c\rightarrow \neg b\neg ca}(or¬b¬c→a{\textstyle \neg b\neg c\rightarrow a}), wherea{\textstyle a},b{\textstyle b}andc{\textstyle c}can be interchanged. Hence, what one has captured from the 3BS are that (1) no two colours could coexist and that (2) there is no colour other thana{\textstyle a},b{\textstyle b}andc{\textstyle c}. The two issues are certainly less trivial in the scopes ofA→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}andA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}.
The rules to assemble or transform implications of typeμ→ν{\textstyle \mu \rightarrow \nu }are direct consequences ofobject set inclusion relations. Notably, some of these rules can be reduced to theArmstrong axioms, which pertain to the main considerations of Guigues and Duquenne[6]based on the non-redundant collection of informative implications acquired via FCL. In particular,
In the case ofμ1=∏A1{\textstyle \mu _{1}=\prod A_{1}},ν1=∏B1{\textstyle \nu _{1}=\prod B_{1}},μ2=∏A2{\textstyle \mu _{2}=\prod A_{2}}andν2=∏B2{\textstyle \nu _{2}=\prod B_{2}}, whereA1,A2,B1,B2{\textstyle A_{1},A_{2},B_{1},B_{2}}are sets of attributes, the rule (1) can be re-expressed as Armstrong'scomposition:
The Armstrong axioms are not suited forA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}which requiresA⊆B{\textstyle A\subseteq B}. This is in contrast toA→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}for which Armstrong'sreflexivityis implemented byA⊇B{\textstyle A\supseteq B}. Nevertheless, a similarcompositionmay occur but signify a different rule from (1). Note that one also arrives at
For concreteness, consider the example depicted byTable 2, which was originally adopted to clarify the RSL[9]and is worked out here for the GCL.[4]
Clearly, one may also check thatρ({2,5})=¬η({1,3,4,6})=η({2,5})+0ρ{\textstyle \rho (\{2,5\})=\neg \eta (\{1,3,4,6\})=\eta (\{2,5\})+0_{\rho }}.
Within the expression ofη({1,2,5,6}){\textstyle \eta (\{1,2,5,6\})}it can be seen thataR={a}I{\textstyle {a}^{R}=\lbrace a\rbrace ^{I}}={1,2,5,6}{\textstyle =\lbrace 1,2,5,6\rbrace }, while withinρ({1,2,5,6}){\textstyle \rho (\{1,2,5,6\})}it can be seen(a+c+d)R{\textstyle (a+c+d)^{R}}={a,c,d}◊{\textstyle =\lbrace {a,c,d}\rbrace ^{\Diamond }}={1,2,5,6}{\textstyle =\lbrace 1,2,5,6\rbrace }. Therefore, one finds out the concepts({1,2,5,6},{a}){\textstyle (\lbrace 1,2,5,6\rbrace ,\lbrace a\rbrace )}for FCL and({1,2,5,6},{a,c,d}){\textstyle (\lbrace 1,2,5,6\rbrace ,\lbrace a,c,d\rbrace )}for RSL. By contrast,
with(ae)R={a,e}I{\textstyle (ae)^{R}=\{a,e\}^{I}}={1,6}{\textstyle =\{1,6\}}gives rise to the concept({1,6},{a,e}){\textstyle (\lbrace 1,6\rbrace ,\lbrace a,e\rbrace )}for the FCL but fails to provide an extent for the RSL becausedR≡{d}◊={1}≠{1,6}{\textstyle d^{R}\equiv \lbrace d\rbrace ^{\Diamond }=\lbrace 1\rbrace \neq \lbrace 1,6\rbrace }.
For the present case, the above relations can be examined via theauxiliary formula:
Note thatcR={c}I={c}◊{\textstyle c^{R}=\{c\}^{I}=\{c\}^{\Diamond }}={1,2}{\textstyle =\{1,2\}}⊂{1,2,5,6}{\textstyle \subset \{1,2,5,6\}}=aR{\textstyle =a^{R}}={a}I{\textstyle =\{a\}^{I}}={a}◊{\textstyle =\{a\}^{\Diamond }}. Moreover,c→a{\textstyle c\rightarrow a}entails bothc→c⋅a{\textstyle c\rightarrow c\cdot a}andc+a→a{\textstyle c+a\rightarrow a}, which correspond to{c}→fcl{a,c}{\textstyle \{c\}{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}\{a,c\}}and{c,a}→rsl{a}{\textstyle \{c,a\}{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}\{a\}}, respectively.
For instance, by(c+d)⋅1η{\textstyle (c+d)\cdot 1_{\eta }}=c⋅1η{\textstyle =c\cdot 1_{\eta }}=a¬bc(de+¬d¬e){\textstyle =a\neg bc(de+\neg d\neg e)}the relationc+d{\textstyle c+d}→a¬bc(de+¬d¬e){\textstyle \rightarrow a\neg bc(de+\neg d\neg e)}is neitherof the typeA→fclB{\textstyle A{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}B}norof the typeA→rslB{\textstyle A{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}B}. Nevertheless, one may also derive, e.g.,c+d→c{\textstyle c+d\rightarrow c},c+d→a{\textstyle c+d\rightarrow a}andcd→a{\textstyle cd\rightarrow a}, which are{c,d}→rsl{c}{\textstyle \{c,d\}{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}\{c\}},{c,d}→rsl{a}{\textstyle \{c,d\}{\stackrel {\scriptscriptstyle rsl}{\rightarrow }}\{a\}}and{c,d}→fcl{a}{\textstyle \{c,d\}{\stackrel {\scriptscriptstyle fcl}{\rightarrow }}\{a\}}, respectively. As a further interesting implicationc+d→¬b(de+¬d¬e){\textstyle c+d\rightarrow \neg b(de+\neg d\neg e)}entailsc+d→¬b⋅(e↔d){\textstyle c+d\rightarrow \neg b\cdot (e\leftrightarrow d)}by means ofmaterial implication. Namely, for the objects carrying the propertyc{\textstyle c}ord{\textstyle d},¬b{\textstyle \neg b}must hold and, in addition, objects carrying the propertye{\textstyle e}must also carry the propertyd{\textstyle d}and vice versa.
One may be interested in the properties inferring aparticular consequent, say,e→a{\textstyle e\rightarrow a}. Considerμ:=¬e+a{\textstyle \mu :=\neg e+a}⟺e→a{\textstyle \iff e\rightarrow a}giving rise toμ+0ρ{\textstyle \mu +0_{\rho }}=a+¬b+c+d+¬e{\textstyle =a+\neg b+c+d+\neg e}according toTable 2. Clearly, with¬e+a{\displaystyle \neg e+a}≤μ1{\displaystyle \leq \mu _{1}}≤a+¬b+c+d+¬e{\displaystyle \leq a+\neg b+c+d+\neg e}one hasμ1↔(e→a){\textstyle \mu _{1}\leftrightarrow (e\rightarrow a)}. This gives rise to many possible antecedents such as(e→a+c+d)→(e→a){\textstyle (e\rightarrow a+c+d)\rightarrow (e\rightarrow a)},(b→(e→a+c))→(e→a){\textstyle (b\rightarrow (e\rightarrow a+c))\rightarrow (e\rightarrow a)},(e→(b→a+c))→(e→a){\textstyle (e\rightarrow (b\rightarrow a+c))\rightarrow (e\rightarrow a)},(b→(e→a+c+d))→(e→a){\textstyle (b\rightarrow (e\rightarrow a+c+d))\rightarrow (e\rightarrow a)}and so forth.
1η{\textstyle 1_{\eta }}can be understood as1→1η{\textstyle {\bf {1}}\rightarrow 1_{\eta }}or equivalently0ρ→0{\textstyle 0_{\rho }\rightarrow {\bf {0}}}, which turns out to be theonlynon-redundant implication one needs to deduce all the informative implications from any formal context. The basis1→1η{\textstyle {\bf {1}}\rightarrow 1_{\eta }}or0ρ→0{\textstyle 0_{\rho }\rightarrow {\bf {0}}}suffices for the deduction of all implications as follows. While∀μ{\textstyle \forall \mu }1→1η{\textstyle {\bf {1}}\rightarrow 1_{\eta }}⟹μ→μ1η{\textstyle \implies \mu \rightarrow \mu 1_{\eta }}and∀ν{\textstyle \forall \nu }0ρ→0{\textstyle 0_{\rho }\rightarrow {\bf {0}}}⟹ν+0ρ→ν{\textstyle \implies \nu +0_{\rho }\rightarrow \nu }, choosing eitherμ=ρ(X){\textstyle \mu =\rho (X)}orν=η(X){\textstyle \nu =\eta (X)}gives rise toρ(X)→η(X){\textstyle \rho (X)\rightarrow \eta (X)}. Notably, this encompasses (1) and (1') by means ofμ⋅1η≡η(μR){\displaystyle \mu \cdot 1_{\eta }\equiv \eta (\mu ^{R})}≤μ{\displaystyle \leq \mu }≤ρ(μR)≡μ+0ρ{\displaystyle \leq \rho (\mu ^{R})\equiv \mu +0_{\rho }}for anyμ{\displaystyle \mu }, whereμR{\displaystyle \mu ^{R}}can be identified with someX{\textstyle X}corresponding to one of the 32 nodes on theGCLinFig. 4.
ρ(X)→η(X){\textstyle \rho (X)\rightarrow \eta (X)}develops equivalence, at each single node, for all attributes contained within the interval[η(X),ρ(X)]{\textstyle [\eta (X),\rho (X)]}. Moreover, informative implications could also relate different nodes viaHypothetical syllogismby invoking tautology. Typically,∀μ1∈[X1]F∀μ2∈[X2]F{\textstyle \forall \mu _{1}\in [X_{1}]_{F}\forall \mu _{2}\in [X_{2}]_{F}}μ1→μ2{\textstyle \mu _{1}\rightarrow \mu _{2}}whenever(X1,[X1]F){\textstyle (X_{1},[X_{1}]_{F})}≤(X2,[X2]F){\textstyle \leq (X_{2},[X_{2}]_{F})}. This corresponds to the cases considered in (1'):(b→c)→(e→a){\textstyle (b\rightarrow c)\rightarrow (e\rightarrow a)},c→(e→a){\textstyle c\rightarrow (e\rightarrow a)},¬b→(e→a){\textstyle \neg b\rightarrow (e\rightarrow a)}etc. Explicitly,(b→c)→(e→a){\textstyle (b\rightarrow c)\rightarrow (e\rightarrow a)}is based upon¬b+c∈[{1,2,5}]F{\textstyle \neg b+c\in [\{1,2,5\}]_{F}}and¬e+a∈[{1,2,5,6}]F{\textstyle \neg e+a\in [\{1,2,5,6\}]_{F}}where{1,2,5}⊆{1,2,5,6}{\textstyle \{1,2,5\}\subseteq \{1,2,5,6\}}. Note that¬b+c↔ρ({1,2,5}){\textstyle \neg b+c\leftrightarrow \rho (\{1,2,5\})}↔η({1,2,5}){\textstyle \leftrightarrow \eta (\{1,2,5\})}and¬e+a↔ρ({1,2,5,6}){\textstyle \neg e+a\leftrightarrow \rho (\{1,2,5,6\})}↔η({1,2,5,6}){\textstyle \leftrightarrow \eta (\{1,2,5,6\})}whileρ({1,2,5}){\textstyle \rho (\{1,2,5\})}≤ρ({1,2,5,6}){\textstyle \leq \rho (\{1,2,5,6\})}(alsoη({1,2,5})≤η({1,2,5,6}){\textstyle \eta (\{1,2,5\})\leq \eta (\{1,2,5,6\})}). Therefore,(b→c)→(e→a){\textstyle (b\rightarrow c)\rightarrow (e\rightarrow a)}. Similarly,c∈[{1,2}]F{\textstyle c\in [\{1,2\}]_{F}}with{1,2}⊆{1,2,5,6}{\textstyle \{1,2\}\subseteq \{1,2,5,6\}}givesc→(e→a){\textstyle c\rightarrow (e\rightarrow a)}.
Indeed,1→1η{\textstyle {\bf {1}}\rightarrow 1_{\eta }}or equivalently0ρ→0{\textstyle 0_{\rho }\rightarrow {\bf {0}}}plays the role of canonical basis withone singleimplication relation.
|
https://en.wikipedia.org/wiki/General_Concept_Lattice
|
Inknowledge representation and reasoning, aknowledge graphis aknowledge basethat uses agraph-structureddata modelortopologyto represent and operate ondata. Knowledge graphs are often used to store interlinked descriptions ofentities– objects, events, situations or abstract concepts – while also encoding the free-formsemanticsor relationships underlying these entities.[1][2]
Since the development of theSemantic Web, knowledge graphs have often been associated withlinked open dataprojects, focusing on the connections betweenconceptsand entities.[3][4]They are also historically associated with and used bysearch enginessuch asGoogle,Bing,YextandYahoo;knowledge-enginesand question-answering services such asWolframAlpha, Apple'sSiri, and AmazonAlexa; andsocial networkssuch asLinkedInandFacebook.
Recent developments in data science and machine learning, particularly in graph neural networks and representation learning, have broadened the scope of knowledge graphs beyond their traditional use in search engines and recommender systems. They are increasingly used in scientific research, with notable applications in fields such as genomics, proteomics, and systems biology.[5]
The term was coined as early as 1972 by the AustrianlinguistEdgar W. Schneider, in a discussion of how to build modular instructional systems for courses.[6]In the late 1980s, theUniversity of GroningenandUniversity of Twentejointly began a project called Knowledge Graphs, focusing on the design ofsemantic networkswith edges restricted to a limited set of relations, to facilitatealgebras on the graph. In subsequent decades, the distinction between semantic networks and knowledge graphs was blurred.
Some early knowledge graphs were topic-specific. In 1985,Wordnetwas founded, capturing semantic relationships between words and meanings – an application of this idea to language itself. In 2005, Marc Wick foundedGeonamesto capture relationships between different geographic names and locales and associated entities. In 1998 Andrew Edmonds of Science in Finance Ltd in the UK created a system called ThinkBase that offeredfuzzy-logicbased reasoning in a graphical context.[7]
In 2007, bothDBpediaandFreebasewere founded as graph-based knowledgerepositoriesfor general-purpose knowledge. DBpedia focused exclusively on data extracted from Wikipedia, while Freebase also included a range of public datasets. Neither described themselves as a 'knowledge graph' but developed and described related concepts.
In 2012, Google introduced theirKnowledge Graph,[9]building on DBpedia and Freebase among other sources. They later incorporatedRDFa,Microdata,JSON-LDcontent extracted from indexed web pages, including theCIA World Factbook,Wikidata, andWikipedia.[9][10]Entity and relationship types associated with this knowledge graph have been further organized using terms from theschema.org[11]vocabulary. The Google Knowledge Graph became a successful complement to string-based search within Google, and its popularity online brought the term into more common use.[11]
Since then, several large multinationals have advertised their use of knowledge graphs, further popularising the term. These include Facebook, LinkedIn,Airbnb,Microsoft,Amazon,UberandeBay.[12]
In 2019,IEEEcombined its annual international conferences on "Big Knowledge" and "Data Mining and Intelligent Computing" into the International Conference on Knowledge Graph.[13]
There is no single commonly accepted definition of a knowledge graph. Most definitions view the topic through a Semantic Web lens and include these features:[14]
There are, however, many knowledge graph representations for which some of these features are not relevant. For those knowledge graphs, this simpler definition may be more useful:
In addition to the above examples, the term has been used to describe open knowledge projects such asYAGOand Wikidata; federations like the Linked Open Data cloud;[20]a range of commercial search tools, including Yahoo's semantic search assistant Spark, Google'sKnowledge Graph, and Microsoft's Satori; and the LinkedIn and Facebook entity graphs.[3]
The term is also used in the context ofnote-taking softwareapplications that allow a user to build apersonal knowledge graph.[21]
The popularization of knowledge graphs and their accompanying methods have led to the development of graph databases such as Neo4j,[22]GraphDB[23]andAgensGraph.[24]These graph databases allow users to easily store data as entities and their interrelationships, and facilitate operations such as data reasoning, node embedding, and ontology development on knowledge bases.
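The core idea behind such databases, storing data as subject–predicate–object relationships and querying them by pattern, can be sketched in a few lines. This is an illustrative in-memory toy, not any particular product's API, and the example facts are invented:

```python
# Toy in-memory triple store; illustrative only, not a real database API.
triples = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "field", "Physics"),
    ("Nobel Prize in Physics", "awarded_in", "Stockholm"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return {(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

# All outgoing edges of one entity:
print(len(query(s="Marie Curie")))  # 2
```

Real graph databases add indexing, transactions, and query languages on top of essentially this pattern-matching primitive.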
In contrast, virtual knowledge graphs do not store information in specialized databases. They rely on an underlying relational database or data lake to answer queries on the graph. Such a virtual knowledge graph system must be properly configured in order to answer the queries correctly. This specific configuration is done through a set of mappings that define the relationship between the elements of the data source and the structure and ontology of the virtual knowledge graph.[25]
A knowledge graph formally represents semantics by describing entities and their relationships.[26]Knowledge graphs may make use ofontologiesas a schema layer. By doing this, they allowlogical inferencefor retrievingimplicit knowledgerather than only allowing queries requesting explicit knowledge.[27]
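The inference step can be illustrated with a toy subclass rule in the style of rdfs:subClassOf (the rule, entities, and class names here are assumptions for illustration): forward-chaining materialises type facts that were never stated explicitly.

```python
# Toy schema-layer inference (rdfs:subClassOf-style rule); data invented.
subclass_of = {"Physicist": "Scientist", "Scientist": "Person"}
facts = {("Marie Curie", "type", "Physicist")}

changed = True
while changed:                       # forward-chain to a fixpoint
    changed = False
    for s, p, o in list(facts):
        if p == "type" and o in subclass_of:
            inferred = (s, "type", subclass_of[o])
            if inferred not in facts:
                facts.add(inferred)
                changed = True

# Only "Physicist" was explicit; the rest is implicit knowledge.
print(("Marie Curie", "type", "Person") in facts)  # True
```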
In order to allow the use of knowledge graphs in various machine learning tasks, several methods for deriving latent feature representations of entities and relations have been devised. These knowledge graph embeddings allow them to be connected to machine learning methods that require feature vectors likeword embeddings. This can complement other estimates of conceptual similarity.[28][29]
Models for generating useful knowledge graph embeddings are commonly the domain of graph neural networks (GNNs).[30]GNNs are deep learning architectures that comprise edges and nodes, which correspond well to the entities and relationships of knowledge graphs. The topology and data structures afforded by GNNs provide a convenient domain for semi-supervised learning, wherein the network is trained to predict the value of a node embedding (provided a group of adjacent nodes and their edges) or edge (provided a pair of nodes). These tasks serve as fundamental abstractions for more complex tasks such as knowledge graph reasoning and alignment.[31]
As new knowledge graphs are produced across a variety of fields and contexts, the same entity will inevitably be represented in multiple graphs. However, because no single standard for the construction or representation of knowledge graphs exists, resolving which entities from disparate graphs correspond to the same real world subject is a non-trivial task. This task is known asknowledge graph entity alignment, and is an active area of research.[32]
Strategies for entity alignment generally seek to identify similar substructures, semantic relationships, shared attributes, or combinations of all three between two distinct knowledge graphs. Entity alignment methods use these structural similarities between generally non-isomorphic graphs to predict which nodes correspond to the same entity.[33]
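As a deliberately naive baseline (real systems combine structure, semantics, and learned embeddings), shared-attribute overlap alone can already rank candidate matches. All entity names and attributes below are invented for illustration:

```python
# Naive attribute-overlap alignment baseline using Jaccard similarity;
# entities and attributes are invented for illustration.
g1 = {"MCurie": {"born:1867", "field:physics"},
      "Paris":  {"country:France"}}
g2 = {"Marie_Curie":   {"born:1867", "field:physics", "died:1934"},
      "City_of_Paris": {"country:France", "pop:2M"}}

def jaccard(a, b):
    """Similarity of two attribute sets: |a ∩ b| / |a ∪ b|."""
    return len(a & b) / len(a | b)

scores = {(e1, e2): jaccard(a1, a2)
          for e1, a1 in g1.items() for e2, a2 in g2.items()}
best = max(scores, key=scores.get)
print(best)   # ('MCurie', 'Marie_Curie')
```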
The recent successes of large language models (LLMs), in particular their effectiveness at producing syntactically meaningful embeddings, has spurred the use of LLMs in the task of entity alignment.[34]
As the amount of data stored in knowledge graphs grows, developing dependable methods for knowledge graph entity alignment becomes an increasingly crucial step in the integration and cohesion of knowledge graph data.
|
https://en.wikipedia.org/wiki/Knowledge_graph
|
Alatticeis an abstract structure studied in themathematicalsubdisciplines oforder theoryandabstract algebra. It consists of apartially ordered setin which every pair of elements has a uniquesupremum(also called a least upper bound orjoin) and a uniqueinfimum(also called a greatest lower bound ormeet). An example is given by thepower setof a set, partially ordered byinclusion, for which the supremum is theunionand the infimum is theintersection. Another example is given by thenatural numbers, partially ordered bydivisibility, for which the supremum is theleast common multipleand the infimum is thegreatest common divisor.
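Both examples can be made concrete in a few lines (Python is used here purely for illustration):

```python
# The two lattices from the text: the power set under inclusion, and
# the natural numbers under divisibility.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Power-set lattice: supremum = union, infimum = intersection.
a, b = {1, 2}, {2, 3}
print(a | b)          # {1, 2, 3}  (join)
print(a & b)          # {2}        (meet)

# Divisibility lattice: supremum = lcm, infimum = gcd.
print(lcm(4, 6))      # 12  (join)
print(gcd(4, 6))      # 2   (meet)
```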
Lattices can also be characterized asalgebraic structuressatisfying certainaxiomaticidentities. Since the two definitions are equivalent, lattice theory draws on bothorder theoryanduniversal algebra.Semilatticesinclude lattices, which in turn includeHeytingandBoolean algebras. Theselattice-likestructures all admitorder-theoreticas well as algebraic descriptions.
The sub-field ofabstract algebrathat studies lattices is calledlattice theory.
A lattice can be defined either order-theoretically as a partially ordered set, or as an algebraic structure.
A partially ordered set (poset) (L, ≤) is called a lattice if it is both a join- and a meet-semilattice, i.e. each two-element subset {a, b} ⊆ L has a join (i.e. least upper bound, denoted by a ∨ b) and dually a meet (i.e. greatest lower bound, denoted by a ∧ b). This definition makes ∧ and ∨ binary operations. Both operations are monotone with respect to the given order: a₁ ≤ a₂ and b₁ ≤ b₂ implies that a₁ ∨ b₁ ≤ a₂ ∨ b₂ and a₁ ∧ b₁ ≤ a₂ ∧ b₂.

It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound. With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. That article also discusses how one may rephrase the above definition in terms of the existence of suitable Galois connections between related partially ordered sets, an approach of special interest for the category theoretic approach to lattices, and for formal concept analysis.

Given a subset of a lattice, H ⊆ L, meet and join restrict to partial functions: they are undefined if their value is not in the subset H. The resulting structure on H is called a partial lattice. In addition to this extrinsic definition as a subset of some other algebraic structure (a lattice), a partial lattice can also be intrinsically defined as a set with two partial binary operations satisfying certain axioms.[1]

A lattice is an algebraic structure (L, ∨, ∧), consisting of a set L and two binary, commutative and associative operations ∨ and ∧ on L satisfying the following axiomatic identities for all elements a, b ∈ L (sometimes called absorption laws):

a ∨ (a ∧ b) = a
a ∧ (a ∨ b) = a

The following two identities are also usually regarded as axioms, even though they follow from the two absorption laws taken together.[2] These are called idempotent laws.

a ∨ a = a
a ∧ a = a

These axioms assert that both (L, ∨) and (L, ∧) are semilattices. The absorption laws, the only axioms above in which both meet and join appear, distinguish a lattice from an arbitrary pair of semilattice structures and assure that the two semilattices interact appropriately. In particular, each semilattice is the dual of the other. The absorption laws can be viewed as a requirement that the meet and join semilattices define the same partial order.

An order-theoretic lattice gives rise to the two binary operations ∨ and ∧. Since the commutative, associative and absorption laws can easily be verified for these operations, they make (L, ∨, ∧) into a lattice in the algebraic sense.

The converse is also true. Given an algebraically defined lattice (L, ∨, ∧), one can define a partial order ≤ on L by setting

a ≤ b if a = a ∧ b, or
a ≤ b if b = a ∨ b,

for all elements a, b ∈ L. The laws of absorption ensure that both definitions are equivalent:

a = a ∧ b implies b = b ∨ (b ∧ a) = (a ∧ b) ∨ b = a ∨ b

and dually for the other direction.
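The equivalence of the two order definitions can be checked on a toy lattice (a sketch of our own, taking min as meet and max as join on a small range of integers):

```python
def leq_via_meet(a: int, b: int) -> bool:
    return a == min(a, b)  # a <= b  iff  a = a ∧ b

def leq_via_join(a: int, b: int) -> bool:
    return b == max(a, b)  # a <= b  iff  b = a ∨ b

# The absorption laws guarantee that the two definitions always agree,
# and here both recover the usual order on integers.
agree = all(leq_via_meet(a, b) == leq_via_join(a, b) == (a <= b)
            for a in range(5) for b in range(5))
print(agree)  # True
```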
One can now check that the relation ≤ introduced in this way defines a partial ordering within which binary meets and joins are given through the original operations ∨ and ∧.
Since the two definitions of a lattice are equivalent, one may freely invoke aspects of either definition in any way that suits the purpose at hand.
A bounded lattice is a lattice that additionally has a greatest element (also called maximum, or top element, denoted by 1 or by ⊤) and a least element (also called minimum, or bottom, denoted by 0 or by ⊥), which satisfy

0 ≤ x ≤ 1 for every x ∈ L.

A bounded lattice may also be defined as an algebraic structure of the form (L, ∨, ∧, 0, 1) such that (L, ∨, ∧) is a lattice, 0 (the lattice's bottom) is the identity element for the join operation ∨, and 1 (the lattice's top) is the identity element for the meet operation ∧.

a ∨ 0 = a
a ∧ 1 = a

It can be shown that a partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet. This is because, for all a ∈ ∅, both x ≤ a and a ≤ x hold for any x: since the empty set contains no elements, no a ∈ ∅ exists, and the statement is vacuously true. Hence, every element of a poset is both an upper bound and a lower bound of the empty set. This implies that the join of the empty set is the least element, ⋁∅ = 0, and the meet of the empty set is the greatest element, ⋀∅ = 1.
This is consistent with the associativity and commutativity of meet and join in the algebraic definition. These properties can be viewed as stating that the join of a finite union of sets of elements of the order is equal to the join of the joins of the sets: the notations a ∨ b and ⋁{a, b} are equivalent. Dually, the meet of a finite union of sets is equal to the meet of the meets of the sets. That is, for finite subsets A and B of a partially ordered set L,

⋁(A ∪ B) = ⋁{⋁A, ⋁B} and ⋀(A ∪ B) = ⋀{⋀A, ⋀B}

hold.
Note that the use of the set-theoretic union A ∪ B here is unrelated to the fact that union is the join operator in a power set ordered by the subset relation.
Since 0 is a least element, all elements of the order are greater than or equal to it. This means that adjoining 0 to a set of elements, or removing it, cannot change the set's join. In particular, by taking B to be the empty set, we obtain

⋁(A ∪ ∅) = ⋁{⋁A, ⋁∅} = ⋁{⋁A, 0} = ⋁A

and dually, since adjoining 1 cannot change a meet,

⋀(A ∪ ∅) = ⋀{⋀A, ⋀∅} = ⋀{⋀A, 1} = ⋀A,

which is consistent with the fact that A ∪ ∅ = A. This does not constitute a proof that the meet and join of the empty set are lower and upper bounds, but merely illustrates that the convention plays nicely with useful properties of the meet and join. Note that in partial orders without a lower (or upper) bound, the meet (or join) of the empty set is not defined.
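These conventions can be seen concretely in a power-set lattice (a small sketch; the universe U is our choice), where join is union, meet is intersection, and the empty join and meet come out as the bottom and top elements:

```python
from functools import reduce

U = frozenset({1, 2, 3})  # top element of the power-set lattice of U

def big_join(family):
    # Union over a family of subsets; the empty join is the bottom element.
    return reduce(frozenset.union, family, frozenset())

def big_meet(family):
    # Intersection over a family; the empty meet is the top element.
    return reduce(frozenset.intersection, family, U)

assert big_join([]) == frozenset()  # the join of the empty set is 0
assert big_meet([]) == U            # the meet of the empty set is 1
# Adjoining the empty family changes nothing, since A ∪ ∅ = A.
A = [frozenset({1}), frozenset({1, 2})]
assert big_join(A) == frozenset({1, 2})
```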
Every lattice can be embedded into a bounded lattice by adding a greatest and a least element. Furthermore, every non-empty finite lattice is bounded, by taking the join (respectively, meet) of all elements, denoted by 1 = ⋁L = a₁ ∨ ⋯ ∨ aₙ (respectively 0 = ⋀L = a₁ ∧ ⋯ ∧ aₙ), where L = {a₁, …, aₙ} is the set of all elements.
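For instance, in the finite lattice of divisors of 30 (an example of our choosing, with meet = gcd and join = lcm), folding the operations over all elements recovers the bounds:

```python
from functools import reduce
from math import gcd

divisors = [d for d in range(1, 31) if 30 % d == 0]  # lattice elements

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

top = reduce(lcm, divisors)     # 1 = ⋁L, the join of all elements
bottom = reduce(gcd, divisors)  # 0 = ⋀L, the meet of all elements
print(top, bottom)  # 30 1
```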
Lattices have some connections to the family of group-like algebraic structures. Because meet and join both commute and associate, a lattice can be viewed as consisting of two commutative semigroups having the same domain. For a bounded lattice, these semigroups are in fact commutative monoids. The absorption law is the only defining identity that is peculiar to lattice theory. A bounded lattice can also be thought of as a commutative rig without the distributive axiom.

By commutativity, associativity and idempotence one can think of join and meet as operations on non-empty finite sets, rather than on pairs of elements. In a bounded lattice the join and meet of the empty set can also be defined (as 0 and 1, respectively). This makes bounded lattices somewhat more natural than general lattices, and many authors require all lattices to be bounded.

The algebraic interpretation of lattices plays an essential role in universal algebra.[citation needed]
Further examples of lattices are given for each of the additional properties discussed below.
Most partially ordered sets are not lattices, including the following.
The appropriate notion of a morphism between two lattices flows easily from the above algebraic definition. Given two lattices (L, ∨L, ∧L) and (M, ∨M, ∧M), a lattice homomorphism from L to M is a function f : L → M such that for all a, b ∈ L:

f(a ∨L b) = f(a) ∨M f(b), and
f(a ∧L b) = f(a) ∧M f(b).

Thus f is a homomorphism of the two underlying semilattices. When lattices with more structure are considered, the morphisms should "respect" the extra structure, too. In particular, a bounded-lattice homomorphism (usually called just "lattice homomorphism") f between two bounded lattices L and M should also have the following property:

f(0L) = 0M, and
f(1L) = 1M.
In the order-theoretic formulation, these conditions just state that a homomorphism of lattices is a function preserving binary meets and joins. For bounded lattices, preservation of least and greatest elements is just preservation of join and meet of the empty set.
Any homomorphism of lattices is necessarily monotone with respect to the associated ordering relation; see Limit preserving function. The converse is not true: monotonicity by no means implies the required preservation of meets and joins (see Pic. 9), although an order-preserving bijection is a homomorphism if its inverse is also order-preserving.
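This warning can be checked on a small example of our own: mapping the power set of {1, 2} (ordered by inclusion) to the chain 0 < 1 < 2 by cardinality is monotone but does not preserve joins.

```python
a, b = frozenset({1}), frozenset({2})

def f(s):
    # Cardinality: monotone from (P({1,2}), ⊆) to the chain 0 < 1 < 2.
    return len(s)

join_then_map = f(a | b)         # f({1, 2}) = 2
map_then_join = max(f(a), f(b))  # max(1, 1) = 1
print(join_then_map, map_then_join)  # 2 1: joins are not preserved
```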
Given the standard definition of isomorphisms as invertible morphisms, a lattice isomorphism is just a bijective lattice homomorphism. Similarly, a lattice endomorphism is a lattice homomorphism from a lattice to itself, and a lattice automorphism is a bijective lattice endomorphism. Lattices and their homomorphisms form a category.

Let L and L′ be two lattices with 0 and 1. A homomorphism f from L to L′ is called 0,1-separating if and only if f⁻¹{f(0)} = {0} (f separates 0) and f⁻¹{f(1)} = {1} (f separates 1).

A sublattice of a lattice L is a subset of L that is a lattice with the same meet and join operations as L. That is, if L is a lattice and M is a subset of L such that for every pair of elements a, b ∈ M both a ∧ b and a ∨ b are in M, then M is a sublattice of L.[3]

A sublattice M of a lattice L is a convex sublattice of L if x ≤ z ≤ y and x, y ∈ M implies that z belongs to M, for all elements x, y, z ∈ L.
We now introduce a number of important properties that lead to interesting special classes of lattices. One, boundedness, has already been discussed.
A poset is called a complete lattice if all its subsets have both a join and a meet. In particular, every complete lattice is a bounded lattice. While bounded lattice homomorphisms in general preserve only finite joins and meets, complete lattice homomorphisms are required to preserve arbitrary joins and meets.
Every poset that is a complete semilattice is also a complete lattice. Related to this result is the interesting phenomenon that there are various competing notions of homomorphism for this class of posets, depending on whether they are seen as complete lattices, complete join-semilattices, complete meet-semilattices, or as join-complete or meet-complete lattices.
"Partial lattice" is not the opposite of "complete lattice" – rather, "partial lattice", "lattice", and "complete lattice" are increasingly restrictive definitions.
A conditionally complete lattice is a lattice in which every nonempty subset that has an upper bound has a join (that is, a least upper bound). Such lattices provide the most direct generalization of the completeness axiom of the real numbers. A conditionally complete lattice is either a complete lattice, or a complete lattice without its maximum element 1, its minimum element 0, or both.[4][5]
Since lattices come with two binary operations, it is natural to ask whether one of them distributes over the other, that is, whether one or the other of the following dual laws holds for every three elements a, b, c ∈ L:

a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
A lattice that satisfies the first or, equivalently (as it turns out), the second axiom, is called adistributive lattice.
The only non-distributive lattices with fewer than 6 elements are called M3 and N5;[6] they are shown in Pictures 10 and 11, respectively. A lattice is distributive if and only if it does not have a sublattice isomorphic to M3 or N5.[7] Each distributive lattice is isomorphic to a lattice of sets (with union and intersection as join and meet, respectively).[8]
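A brute-force check on the pentagon N5 (encoded here by hand from its usual description, with element names of our own choosing) confirms that distributivity fails:

```python
from itertools import product

# N5: 0 < a < b < 1 and 0 < c < 1, with c incomparable to a and b.
elems = ['0', 'a', 'b', 'c', '1']
leq = {(x, y) for x, y in product(elems, repeat=2) if x == y}
leq |= {('0', x) for x in elems} | {(x, '1') for x in elems} | {('a', 'b')}

def join(x, y):
    # Least upper bound, found by brute force over all upper bounds.
    ubs = [z for z in elems if (x, z) in leq and (y, z) in leq]
    return next(z for z in ubs if all((z, u) in leq for u in ubs))

def meet(x, y):
    # Greatest lower bound, found symmetrically.
    lbs = [z for z in elems if (z, x) in leq and (z, y) in leq]
    return next(z for z in lbs if all((w, z) in leq for w in lbs))

# a ∨ (b ∧ c) = a, but (a ∨ b) ∧ (a ∨ c) = b, so N5 is not distributive.
print(join('a', meet('b', 'c')))             # a
print(meet(join('a', 'b'), join('a', 'c')))  # b
```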
For an overview of stronger notions of distributivity that are appropriate for complete lattices and that are used to define more special classes of lattices such as frames and completely distributive lattices, see distributivity in order theory.
For some applications the distributivity condition is too strong, and the following weaker property is often useful. A lattice (L, ∨, ∧) is modular if, for all elements a, b, c ∈ L, the following identity holds:

(a ∧ c) ∨ (b ∧ c) = ((a ∧ c) ∨ b) ∧ c.   (Modular identity)

This condition is equivalent to the following axiom:

a ≤ c implies a ∨ (b ∧ c) = (a ∨ b) ∧ c.   (Modular law)

A lattice is modular if and only if it does not have a sublattice isomorphic to N5 (shown in Pic. 11).[7] Besides distributive lattices, examples of modular lattices are the lattice of submodules of a module (hence modular), the lattice of two-sided ideals of a ring, and the lattice of normal subgroups of a group. The set of first-order terms with the ordering "is more specific than" is a non-modular lattice used in automated reasoning.
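The failure of the modular law in N5 can also be verified directly (a self-contained sketch; the pentagon is again encoded by hand, with element names of our own): taking p ≤ q, one side of the law evaluates to p and the other to q.

```python
from itertools import product

# N5: 0 < p < q < 1 and 0 < r < 1, with r incomparable to p and q.
E = ['0', 'p', 'q', 'r', '1']
leq = {(x, y) for x, y in product(E, repeat=2) if x == y}
leq |= {('0', x) for x in E} | {(x, '1') for x in E} | {('p', 'q')}

def join(x, y):
    ubs = [z for z in E if (x, z) in leq and (y, z) in leq]
    return next(z for z in ubs if all((z, u) in leq for u in ubs))

def meet(x, y):
    lbs = [z for z in E if (z, x) in leq and (z, y) in leq]
    return next(z for z in lbs if all((w, z) in leq for w in lbs))

# p ≤ q, yet p ∨ (r ∧ q) != (p ∨ r) ∧ q: the modular law is violated.
print(join('p', meet('r', 'q')))  # p  (since r ∧ q = 0 and p ∨ 0 = p)
print(meet(join('p', 'r'), 'q'))  # q  (since p ∨ r = 1 and 1 ∧ q = q)
```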
A finite lattice is modular if and only if it is both upper and lower semimodular.

For a lattice of finite length, (upper) semimodularity is equivalent to the condition that the lattice is graded and its rank function r satisfies the following condition:[9]

Another equivalent (for graded lattices) condition is Birkhoff's condition:

A lattice is called lower semimodular if its dual is semimodular. For finite lattices this means that the previous conditions hold with ∨ and ∧ exchanged, "covers" exchanged with "is covered by", and inequalities reversed.[10]

In domain theory, it is natural to seek to approximate the elements in a partial order by "much simpler" elements. This leads to the class of continuous posets, consisting of posets where every element can be obtained as the supremum of a directed set of elements that are way-below the element. If one can additionally restrict these to the compact elements of a poset for obtaining these directed sets, then the poset is even algebraic. Both concepts can be applied to lattices as follows:

Both of these classes have interesting properties. For example, continuous lattices can be characterized as algebraic structures (with infinitary operations) satisfying certain identities. While such a characterization is not known for algebraic lattices, they can be described "syntactically" via Scott information systems.

Let L be a bounded lattice with greatest element 1 and least element 0. Two elements x and y of L are complements of each other if and only if:

x ∨ y = 1 and x ∧ y = 0.
In general, some elements of a bounded lattice might not have a complement, and others might have more than one complement. For example, the set {0, 1/2, 1} with its usual ordering is a bounded lattice, and 1/2 does not have a complement. In the bounded lattice N5, the element a has two complements, viz. b and c (see Pic. 11). A bounded lattice for which every element has a complement is called a complemented lattice.
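The chain example above can be checked directly (a trivial sketch, with meet = min and join = max on the chain {0, 1/2, 1}):

```python
L = [0, 0.5, 1]  # the bounded chain {0, 1/2, 1}

# A complement y of 1/2 would need max(1/2, y) == 1 and min(1/2, y) == 0
# simultaneously; no element of the chain satisfies both.
complements_of_half = [y for y in L if max(0.5, y) == 1 and min(0.5, y) == 0]
print(complements_of_half)  # []: 1/2 has no complement
```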
A complemented lattice that is also distributive is a Boolean algebra. For a distributive lattice, the complement of x, when it exists, is unique.

In the case that the complement is unique, we write ¬x = y and equivalently, ¬y = x. The corresponding unary operation over L, called complementation, introduces an analogue of logical negation into lattice theory.
Heyting algebras are an example of distributive lattices where some members might lack complements. Every element x of a Heyting algebra has, on the other hand, a pseudo-complement, also denoted ¬x. The pseudo-complement is the greatest element y such that x ∧ y = 0. If the pseudo-complement of every element of a Heyting algebra is in fact a complement, then the Heyting algebra is in fact a Boolean algebra.
A chain from x₀ to xₙ is a set {x₀, x₁, …, xₙ}, where x₀ < x₁ < x₂ < … < xₙ. The length of this chain is n, or one less than its number of elements. A chain is maximal if xᵢ covers xᵢ₋₁ for all 1 ≤ i ≤ n.

If for any pair, x and y, where x < y, all maximal chains from x to y have the same length, then the lattice is said to satisfy the Jordan–Dedekind chain condition.

A lattice (L, ≤) is called graded, sometimes ranked (but see Ranked poset for an alternative meaning), if it can be equipped with a rank function r : L → ℕ (sometimes to ℤ), compatible with the ordering (so r(x) < r(y) whenever x < y), such that whenever y covers x, then r(y) = r(x) + 1. The value of the rank function for a lattice element is called its rank.

A lattice element y is said to cover another element x if y > x, but there does not exist a z such that y > z > x. Here, y > x means x ≤ y and x ≠ y.

Any set X may be used to generate the free semilattice FX. The free semilattice is defined to consist of all of the finite subsets of X, with the semilattice operation given by ordinary set union. The free semilattice has the universal property. For the free lattice over a set X, Whitman gave a construction based on polynomials over X's members.[11][12]
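A minimal sketch of the free-semilattice construction (the generating set X is our own choice): the finite subsets of X under union satisfy the defining semilattice identities and are closed under the operation.

```python
from itertools import combinations

X = ['a', 'b', 'c']
# FX: all (finite) subsets of X; the semilattice operation is set union.
FX = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

ok = all(s | s == s and          # idempotence
         s | t == t | s and      # commutativity
         (s | t) in FX           # closure under union
         for s in FX for t in FX)
print(ok)  # True
```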
We now define some order-theoretic notions of importance to lattice theory. In the following, let x be an element of some lattice L. x is called:

Let L have a bottom element 0. An element x of L is an atom if 0 < x and there exists no element y ∈ L such that 0 < y < x. Then L is called:
However, many sources and mathematical communities use the term "atomic" to mean "atomistic" as defined above.[citation needed]
The notions of ideals and the dual notion of filters refer to particular kinds of subsets of a partially ordered set, and are therefore important for lattice theory. Details can be found in the respective entries.
Note that in many applications the sets are only partial lattices: not every pair of elements has a meet or join.
https://en.wikipedia.org/wiki/Lattice_(order)
Ontology is the philosophical study of being. It is traditionally understood as the subdiscipline of metaphysics focused on the most general features of reality. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines the commonalities among all things and investigates their classification into basic types, such as the categories of particulars and universals. Particulars are unique, non-repeatable entities, such as the person Socrates, whereas universals are general, repeatable entities, like the color green. Another distinction exists between concrete objects existing in space and time, such as a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality by employing categories such as substance, property, relation, state of affairs, and event.

Ontologists disagree regarding which entities exist at the most basic level. Platonic realism asserts that universals have objective existence, while conceptualism maintains that universals exist only in the mind, and nominalism denies their existence altogether. Similar disputes pertain to mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism posits that fundamentally only matter exists, whereas dualism asserts that mind and matter are independent principles. According to some ontologists, objective answers to ontological questions do not exist, with perspectives shaped by differing linguistic practices.

Ontology employs diverse methods of inquiry, including the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Formal ontology investigates the most abstract features of objects, while applied ontology utilizes ontological theories and principles to study entities within specific domains. For example, social ontology examines basic concepts used in the social sciences. Applied ontology is particularly relevant to information and computer science, which develop conceptual frameworks of limited domains. These frameworks facilitate the structured storage of information, such as in a college database tracking academic activities. Ontology is also pertinent to the fields of logic, theology, and anthropology.

The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name.
Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being.[1] It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects.[a] In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena.[3] In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean an inventory or a conceptual scheme of a particular domain, such as the ontology of genes.[4] In this context, an inventory is a comprehensive list of elements.[5] A conceptual scheme is a framework of the key concepts and their relationships.[6]

Ontology is closely related to metaphysics, but the exact relation of these two disciplines is disputed. A traditionally influential characterization asserts that ontology is a subdiscipline of metaphysics. According to this view, metaphysics is the study of various aspects of fundamental reality, whereas ontology restricts itself to the most general features of reality.[7] This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value.[8] A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory.[9] Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being.[10] It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms.[11]

The etymology of the word ontology traces back to the ancient Greek terms ὄντως (ontos, meaning 'being') and λογία (logia, meaning 'study of'), literally 'the study of being'. The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.[12]
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing all of reality and every entity within it.[b] In its broadest sense, being only contrasts with non-being or nothingness.[14] It is controversial whether a more substantial analysis of the concept or meaning of being is possible.[15] One proposal understands being as a property possessed by every entity.[16] Critics argue that a thing without being cannot have properties. This means that properties presuppose being and cannot explain it.[17] Another suggestion is that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with causal influence truly exist.[18] A controversial proposal by philosopher George Berkeley suggests that all existence is mental. He expressed this immaterialism in his slogan "to be is to be perceived".[19]

Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent, in contrast to becoming, which implies change.[20] Another contrast is between being, as what truly exists, and phenomena, as what appears to exist.[21] In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like.[22]

Ontologists often divide being into fundamental classes or highest kinds, called categories of being.[23] Proposed categories include substance, property, relation, state of affairs, and event.[24] They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category.[23] Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way.[25] A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or not, with no intermediary states or degrees.[26]

The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects[27] and why there is something rather than nothing.[28]
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars.[29] Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain.[30]

Universals can take the form of properties or relations.[31][c] Properties describe the characteristics of things. They are features or qualities possessed by an entity.[33] Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it.[34] For instance, having three sides is an essential property of a triangle, whereas being red is an accidental property.[35][d] Relations are ways in which two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group.[37] For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi".[38] Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations.[39]
Substances[e] play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red.[41]

States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts.[42][f] Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts.[44]

Events are particular entities[g] that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet.[46] Complex events, also called processes, are composed of a sequence of events.[47]

Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes.[48][h] The existence and nature of abstract objects remain subjects of philosophical debate.[50]
Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and the pages between them. Each of these components is itself constituted of smaller parts, likemolecules,atoms, andelementary particles.[51]Mereologystudies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to another view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another.[52]The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it.[53]
Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West.[54] Some philosophers say that fictional objects are abstract objects and exist outside space and time. Others understand them as artifacts that are created as the works of fiction are written.[55] Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects. This makes it difficult to assess the ontological status of intentional objects.[56]
Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity.[57] For instance, the surface of an apple cannot exist without the apple.[58] An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level.[59] It is closely related to metaphysical grounding, which is the relation between a ground and the facts it explains.[60]
An ontological commitment of a person or a theory is an entity that exists according to them.[61] For instance, a person who believes in God has an ontological commitment to God.[62] Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers.[63]
Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity.[64] A possible world is a complete and consistent way things could have been.[65] For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds.[66] The field of modal logic provides a precise formalization of the concepts of possibility and necessity.[67]
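Possible world semantics lends itself to a mechanical reading. The following is a minimal sketch, not a standard formalization: each "world" is modeled as the set of sentences true in it, and the worlds, sentences, and function names are invented for illustration.

```python
# Toy model of possible world semantics: a "world" is represented
# as the set of sentences that are true in it.
worlds = [
    {"three plus two equals five", "Murakami was born in 1949"},  # the actual world
    {"three plus two equals five", "Murakami was born in 1950"},
    {"three plus two equals five", "Murakami was born in 1951"},
]

def possibly(sentence):
    """A sentence is possibly true if it is true in at least one possible world."""
    return any(sentence in world for world in worlds)

def necessarily(sentence):
    """A sentence is necessarily true if it is true in all possible worlds."""
    return all(sentence in world for world in worlds)

print(possibly("Murakami was born in 1950"))      # True: holds in some world
print(necessarily("three plus two equals five"))  # True: holds in every world
print(necessarily("Murakami was born in 1949"))   # False: contingent, not necessary
```

In this sketch the two operators are duals: a sentence is necessary exactly when its negation is not possible, mirroring the quantifier duality of modal logic.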
In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfectly identical twins. This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother.[68] Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year".[69] The notion of identity also has a number of philosophical implications in terms of how it interacts with the aforementioned necessity and possibility. Most famously, Saul Kripke contended that discovered identities such as "water is H2O" are necessarily true because "H2O" is what is known as a rigid designator.[70]
There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole.[71] Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science.[72] It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts.[73]
Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence.[74] According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars.[75] In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way.[76] A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases.[77]
Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner.[78][i] Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area.[80] Examples are ideal spatial beings in the area of geometry and living beings in the area of biology.[81]
Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization.[82]
Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion.[83]
Metaontology studies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists".[84] It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being.[85]
The term realism is used for various theories[j] that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true.[87] This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other.[88] According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework.[89]
In a more narrow sense, realism refers to the existence of certain types of entities.[90] Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects.[91]
Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation.[92] Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation.[93]
Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects.[94]
Scientific realists say that the scientific description of the world is an accurate representation of reality.[k] It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments.[96]
Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.[97]
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class.[98] For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist.[99] Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything.[100]
The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level.[101] Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds.[102] Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena.[103] Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties.[104] In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality.[l] Pluralism is more commonly accepted and says that several distinct entities exist.[106]
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or the relations that exist between substances.[107] The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties.[108]
Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality.[109] Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible.[110][m] According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change.[112] Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle.[113]
Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level.[114][n] Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate.[116] Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.[117][o]
In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion.[119] An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence.[120] Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality.[121] In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe.[122]
The dispute between constituent and relational ontologies[p] concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties.[124]
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that low-level entities can exist without high-level entities while high-level entities cannot exist without low-level entities.[125] One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture.[126] Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.[127][q]
The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves.[129]
Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things.[130]
Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception.[131]
Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology.[132]
Conceptual analysis is a method to understand ontological concepts and clarify their meaning.[133] It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity.[134] This information can help ontologists decide whether a certain type of entity, such as numbers, exists.[135] Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential.[136][r] The transcendental method begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist.[138]
Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles.[139] These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning.[140] A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation.[141] For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness.[142]
Naturalistic methods rely on the insights of the natural sciences to determine what exists.[143] According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing[s] the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.[145]
Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction.[146] The principle of Ockham's razor says that simple theories are preferable.[147] A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities.[148] Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations.[149] A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.[150]
In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains.[151] Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data.[152] Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY.[153] In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.[154]
Ontology overlaps with many disciplines, including logic, the study of correct reasoning.[155] Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier (∃), which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist.[156] Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being.[157] Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality.[158]
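Over a finite domain, the truth conditions of a quantified formula like ∃x Dog(x) can be checked mechanically. The sketch below illustrates this finite-domain reading; the domain elements and the interpretation of the predicate are invented for illustration.

```python
# Finite-domain reading of the first-order formula ∃x Dog(x):
# the formula is true if the predicate holds for at least one
# element of the domain of discourse.
domain = ["Fido", "Felix", "Tweety"]

def is_dog(x):
    # Toy interpretation of the predicate Dog
    return x == "Fido"

# ∃x Dog(x): true, since at least one element of the domain is a dog
exists_dog = any(is_dog(x) for x in domain)

# ∀x Dog(x): the universal counterpart, false for this domain
all_dogs = all(is_dog(x) for x in domain)

print(exists_dog, all_dogs)  # True False
```

This finite check is only an analogy: first-order semantics allows infinite domains, where quantification cannot be evaluated by enumeration.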
Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it.[160] For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name.[161] In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology.[162]
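A domain ontology of this kind can be approximated with a small schema of categories, subsumption links, and subject–predicate–object triples, in the style of RDF-based knowledge representation. The categories, relation names, and instance data below are hypothetical, chosen only to echo the person/company example above.

```python
# Minimal toy "ontology": named categories, subclass links between
# them, and triples describing individual instances.
categories = {"Person", "Company", "Address", "Name"}
subclass_of = {"Employee": "Person", "Client": "Person"}  # Employee and Client are kinds of Person

triples = [
    ("alice", "instance_of", "Employee"),
    ("acme",  "instance_of", "Company"),
    ("alice", "works_for",   "acme"),
]

def instances_of(category):
    """All subjects whose class is the given category, directly or via subclass links."""
    def is_a(cls, target):
        while cls is not None:
            if cls == target:
                return True
            cls = subclass_of.get(cls)  # walk up the subsumption chain
        return False
    return [s for s, p, o in triples if p == "instance_of" and is_a(o, category)]

print(instances_of("Person"))   # ["alice"]: found via the Employee -> Person link
print(instances_of("Company"))  # ["acme"]
```

Real systems express the same structure in dedicated formalisms such as RDF and OWL, which add features like relation typing, constraints, and automated reasoning.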
Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships.[163] Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics.[164] Environment Ontology is a knowledge representation focused on ecosystems and environmental processes.[165] Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities.[166]
The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn".[167] This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous peoples and how it differs from a Western perspective.[168] As an example of this contrast, it has been argued that various Indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism[169] and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.[170]
Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence.[171] Another overlap between the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology.[172]
The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many.[174] Samkhya, the first orthodox school of Indian philosophy,[t] formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles.[176] The later Vaisheshika school[u] proposed a comprehensive system of categories.[178] In ancient China, Laozi's (6th century BCE)[v] Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang.[180] The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being.[181]
Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things.[182] Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being.[183] Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms.[184] Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being.[185] The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself.[186]
The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind.[187] Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form.[188] He also discussed the transcendentals, which are the most general properties or modes of being.[189] John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity.[190] William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor.[191]
In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence.[193] In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion.[194] Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation.[195] 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos.[196]
René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact.[197] Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature.[198] Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another.[199] John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties.[200] Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry.[201] George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.[202]
Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding.[203] Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures.[204] Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will.[205] Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality[206] while denying that there are any external relations.[207] In Indian philosophy, Swami Vivekananda (1863–1902) expanded on Advaita Vedanta, emphasizing the unity of all existence.[208] Sri Aurobindo (1872–1950) sought to understand the world as an evolutionary manifestation of a divine consciousness.[209]
At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems.[210] This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence.[211] Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness.[212] Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.[213]
Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being.[214] Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology.[215] Alfred North Whitehead (1861–1947), by contrast, developed a process ontology.[216] Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework.[217] He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems.[218] Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world.[219] Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.[220]
|
https://en.wikipedia.org/wiki/Ontology
|
Ontology alignment, or ontology matching, is the process of determining correspondences between concepts in ontologies. A set of correspondences is also called an alignment. The phrase takes on a slightly different meaning in computer science, cognitive science, and philosophy.
For computer scientists, concepts are expressed as labels for data. Historically, the need for ontology alignment arose out of the need to integrate heterogeneous databases, ones developed independently and thus each having their own data vocabulary. In the Semantic Web context, involving many actors providing their own ontologies, ontology matching has taken a critical place in helping heterogeneous resources to interoperate. Ontology alignment tools find classes of data that are semantically equivalent, for example, "truck" and "lorry". The classes are not necessarily logically identical. According to Euzenat and Shvaiko (2007),[1] there are three major dimensions for similarity: syntactic, external, and semantic. Coincidentally, they roughly correspond to the dimensions identified by cognitive scientists below. A number of tools and frameworks have been developed for aligning ontologies, some with inspiration from cognitive science and some independently.
Ontology alignment tools have generally been developed to operate on database schemas,[2] XML schemas,[3] taxonomies,[4] formal languages, entity-relationship models,[5] dictionaries, and other label frameworks. They are usually converted to a graph representation before being matched.
Since the emergence of the Semantic Web, such graphs can be represented in the Resource Description Framework line of languages by triples of the form <subject, predicate, object>, as illustrated in the Notation 3 syntax.
In this context, aligning ontologies is sometimes referred to as "ontology matching".
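As a minimal sketch (not tied to any particular RDF toolkit), such a graph can be held as a set of ⟨subject, predicate, object⟩ triples; the `ex:` identifiers below are invented for illustration:

```python
# A hypothetical RDF-style graph stored as a set of
# <subject, predicate, object> triples, with a simple lookup helper.
graph = {
    ("ex:Truck", "rdfs:subClassOf", "ex:Vehicle"),
    ("ex:Lorry", "rdfs:subClassOf", "ex:Vehicle"),
    ("ex:Truck", "owl:equivalentClass", "ex:Lorry"),
}

def objects(graph, subject, predicate):
    """Return all objects of triples matching (subject, predicate, _)."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}
```

For example, `objects(graph, "ex:Truck", "rdfs:subClassOf")` yields `{"ex:Vehicle"}`; alignment tools operate over graphs of this shape once the input schemas have been converted.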
Recent work on ontology alignment computes a matching first and then derives a mapping (based on the matching) automatically. Systems such as DSSim, X-SOM,[6] and COMA++ currently achieve very high precision and recall.[3] The Ontology Alignment Evaluation Initiative aims to evaluate, compare, and improve the different approaches.
Given two ontologies i = ⟨C_i, R_i, I_i, T_i, V_i⟩ and j = ⟨C_j, R_j, I_j, T_j, V_j⟩, where C is the set of classes, R is the set of relations, I is the set of individuals, T is the set of data types, and V is the set of values, we can define different types of (inter-ontology) relationships.[1] Such relationships are collectively called alignments and can be categorized along several dimensions:
Subsumption, atomic, homogeneous alignments are the building blocks for obtaining richer alignments, and have a well-defined semantics in every description logic.
Let us now introduce ontology matching and mapping more formally.
An atomic homogeneous matching is an alignment that carries a similarity degree s ∈ [0, 1], describing the similarity of two terms of the input ontologies i and j.
Matching can be either computed, by means of heuristic algorithms, or inferred from other matchings.
Formally, a matching is a quadruple m = ⟨id, t_i, t_j, s⟩, where t_i and t_j are homogeneous ontology terms and s is the similarity degree of m.
A (subsumption, homogeneous, atomic) mapping is defined as a pair μ = ⟨t_i, t_j⟩, where t_i and t_j are homogeneous ontology terms.
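These definitions can be sketched in a few lines of code. The trigram-overlap similarity below is a naive stand-in for the heuristic algorithms mentioned above, and all term names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Matching:
    """An atomic homogeneous matching m = <id, t_i, t_j, s>."""
    id: str
    t_i: str   # term from ontology i
    t_j: str   # term from ontology j
    s: float   # similarity degree in [0, 1]

def trigram_similarity(a: str, b: str) -> float:
    """Naive character-trigram overlap (Dice coefficient)."""
    grams = lambda t: {t[k:k + 3] for k in range(len(t) - 2)}
    ga, gb = grams(a.lower()), grams(b.lower())
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def match_terms(terms_i, terms_j, threshold=0.5):
    """Compute matchings between two term lists, keeping pairs whose
    similarity degree reaches the threshold. A mapping <t_i, t_j> can
    then be derived from each retained matching."""
    out = []
    for a in terms_i:
        for b in terms_j:
            s = trigram_similarity(a, b)
            if s >= threshold:
                out.append(Matching(f"{a}-{b}", a, b, s))
    return out
```

For instance, `match_terms(["creditCard"], ["credit_card", "lorry"])` retains only the `creditCard`/`credit_card` pair; a real system would combine several such similarity dimensions (syntactic, external, semantic) rather than one string measure.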
For cognitive scientists interested in ontology alignment, the "concepts" are nodes in semantic networks that reside in brains as "conceptual systems." The focal question is: if everyone has unique experiences and thus different semantic networks, then how can we ever understand each other? This question has been addressed by a model called ABSURDIST (Aligning Between Systems Using Relations Derived Inside Systems for Translation). Three major dimensions have been identified for similarity as equations for "internal similarity, external similarity, and mutual inhibition."[7]
Two research subfields have emerged in ontology mapping: monolingual ontology mapping and cross-lingual ontology mapping. The former refers to the mapping of ontologies in the same natural language, whereas the latter refers to "the process of establishing relationships among ontological resources from two or more independent ontologies where each ontology is labelled in a different natural language".[8] Existing matching methods in monolingual ontology mapping are discussed in Euzenat and Shvaiko (2007).[1] Approaches to cross-lingual ontology mapping are presented in Fu et al. (2011).[9]
|
https://en.wikipedia.org/wiki/Ontology_alignment
|
An ontology chart is a type of chart used in semiotics and software engineering to illustrate an ontology.
The nodes of an ontology chart represent universal affordances and rarely represent particulars. The exception is the root, which is a particular agent often labelled 'society' and located on the extreme left of an ontology chart. The root is often dropped in practice but is implied in every ontology chart. If any other particular is present in an ontology chart, it is recognised by the '#' sign prefix and upper case letters. In our ontology chart the node labelled #IBM is a particular organisation.
The arcs represent ontological dependency relations directed from left to right: the right affordance is ontologically dependent on the left affordance, and the left affordance is the ontological antecedent of the right affordance. A special category of affordances are determiners, also recognised by the '#' sign prefix; two examples are #hourly rate and #name. All determiners have a second antecedent, the measurement standard. It is usually dropped from the ontology chart because it is implied and obvious; in the case of hourly rate and name, the standards are currency and language respectively. The names on the arcs are role names of the carrier (the left node) in the relationship node on the right. For example, 'employee' is the role name of a person while in employment. No ontology chart node has more than two ontological antecedents. Where an arc on the ontology chart runs between a role name and a node, read it as an arc from the relationship on the right-hand side of the role name; so the arc from employee to works at is an arc between employment and works at.
Mathematically, ontology charts are a graphical representation of semi-lattice structures; specifically, they are Hasse diagrams with a single root and no cycles. Ontological dependency is a relationship known mathematically as a partial order, or poset, relation. Posets are an object of study in the mathematical discipline of order theory. They belong to the class of binary relations but have three additional properties: reflexivity, anti-symmetry, and transitivity.
Ontological dependency is a special poset relation: every thing is ontologically dependent on itself for its existence (reflexivity), two things that are mutually ontologically dependent must be the same thing (anti-symmetry), and if a depends on b and b depends on c then a depends on c (transitivity). The last of these, the transitive property, was exploited by Helmut Hasse to give us the Hasse diagram, a diagram of great power and simplicity and, if drawn well, elegance. Because ontology charts have a root that all affordances (realisations/things) are ultimately dependent upon for their existence, they are graphical representations of semi-lattices.
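The three poset properties can be checked mechanically. A small sketch, with invented example dependencies (employment depends on person, person on society):

```python
def is_poset(elements, leq):
    """Check reflexivity, anti-symmetry and transitivity of a binary
    relation leq(a, b), read here as 'a is ontologically dependent on b'."""
    refl = all(leq(a, a) for a in elements)
    anti = all(not (leq(a, b) and leq(b, a)) or a == b
               for a in elements for b in elements)
    trans = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                for a in elements for b in elements for c in elements)
    return refl and anti and trans

# Invented example chain; the transitive edge is included explicitly.
elements = {"society", "person", "employment"}
deps = {("employment", "person"), ("person", "society"),
        ("employment", "society")}
leq = lambda a, b: a == b or (a, b) in deps
```

Here `is_poset(elements, leq)` returns `True`; dropping the edge `("employment", "society")` breaks transitivity, and the check fails. A Hasse diagram then draws only the non-reflexive, non-transitive edges of such a relation.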
|
https://en.wikipedia.org/wiki/Ontology_chart
|
The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management.[1] It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.
OSF is a platform-independent Web services framework for accessing and exposing structured data, semi-structured data, and unstructured data, using ontologies to reconcile semantic heterogeneities within the contributing data and schema. Internal to OSF, all data is converted to RDF to provide a common data model. The OWL 2 ontology language is used to describe the data schema overlaying all of the constituent data sources.
The architecture of OSF is built around a central layer of RESTful web services, designed to enable most constituent modules within the software stack to be substituted without major adverse impacts on the entire stack. A central organizing perspective of OSF is that of the dataset. These datasets contain the records in any given OSF instance. One or more domain ontologies are used by a given OSF instance to define the structural relationships amongst the data and their attributes and concepts.
Application areas for OSF include local government,[2] health information systems,[3] community indicator systems,[4] eLearning,[5] citizen engagement,[6] and any other domain that may be modeled by ontologies.
Documentation and training videos are provided with the open-source OSF application.
Early components of OSF were provided under the names of structWSF and conStruct starting in June 2009.[7] The first version 1.x of OSF was announced in August 2010. The first automated OSF installer was released in March 2012.[8] OSF was expanded with an ontology manager, structOntology, in August 2012.[9] The version 2.x developments of OSF occurred for enterprise sponsors from early 2012 until the end of 2013. None of these interim 2.x versions were released to the public. At the conclusion of this period, Structured Dynamics, the main developer of OSF, refactored these specific enterprise developments to leapfrog to a new version 3.0 of OSF, announced in early 2014.[10] These public releases were last updated to OSF version 3.4.0 in August 2016.[11]
The Open Semantic Framework has a basic three-layer architecture. User interactions and content management are provided by an external content management system, which is currently Drupal (but does not depend on it). This layer accesses the pivotal OSF Web Services; there are now more than 20, providing OSF's distributed computing functionality. Full CRUD access and user permissions and security are provided for all digital objects in the stack. This middleware layer then provides a means to access the third layer, the engines and indexers that drive the entire stack. Both the top CMS layer and the engines layer are provided by existing off-the-shelf software. What makes OSF a complete stack are the connecting scripts and the intermediate Web services layer.
The premise of the OSF stack is based on the RDF data model. RDF provides the means for integrating existing structured data assets in any format with semi-structured data like XML and HTML, and with unstructured documents or text. The OSF framework is made operational via ontologies that capture the domain or knowledge space, matched with internal ontologies that guide OSF operations and data display. This design approach is known as ODapps, for ontology-driven applications.[1]
OSF delegates all direct user interactions and standard content management to an external CMS. In the case of Drupal, this integration is tighter,[12] and supports connectors and modules that can replace standard Drupal storage and databases with OSF triplestores.[13]
This intermediate OSF Web Services layer may also be accessed directly via API, command line, or utilities like cURL, suitable for interfacing with standard content management systems (CMSs), or via a dedicated suite of connectors and modules that leverage the open source Drupal CMS. These connectors and modules, also part of the standard OSF stack and called OSF for Drupal, natively enable Drupal's existing thousands of modules and its ecosystem of developers and capabilities to access OSF using familiar Drupal methods.[12]
The OSF middleware framework is generally RESTful in design and is based on HTTP and Web protocols and W3C open standards. The initial OSF framework comes packaged with a baseline set of more than 20 Web services covering CRUD, browse, search, tagging, ontology management, and export and import. All Web services are exposed via APIs and SPARQL endpoints. Each request to an individual Web service returns an HTTP status and optionally a document of result sets. Each results document can be serialized in many ways, and may be expressed as RDF, pure XML, JSON, or other formats.[citation needed]
The engines layer represents the major workflow requirements and data management and indexing of the system. The premise of the Open Semantic Framework is based on the RDF data model. Using a common data model means that all Web services and actions against the data only need to be programmed via a single, canonical form. Simple converters convert external, native data formats to the RDF form at time of ingest; similar converters can translate the internal RDF form back into native forms for export (or use by external applications). This use of a canonical form leads to a simpler design at the core of the stack and a uniform basis against which tools or other work activities can be written.[original research?]
The OSF engines are all open source and work to support this premise. The OSF engines layer governs the indexing and management of all OSF content. Documents are indexed by the Solr[14] engine for full-text search, while information about their structural characteristics and metadata is stored in an RDF triplestore database provided by OpenLink's Virtuoso software.[15] The schema aspects of the information (the "ontologies") are separately managed and manipulated with their own W3C-standard application, the OWL API.[16] At ingest time, the system automatically routes and indexes the content into its appropriate stores. Another engine, GATE (General Architecture for Text Engineering),[17] provides semi-automatic assistance in tagging input information and other natural language processing (NLP) tasks.
OSF is sometimes referred to as a linked data application.[18] Alternative applications in this space include:
The Open Semantic Framework also has alternatives in the semantic publishing and semantic computing arenas.
|
https://en.wikipedia.org/wiki/Open_Semantic_Framework
|
The term "soft ontology", coined by Eli Hirsch in 1993, refers to the embracing or reconciling of apparent ontological differences by means of relevant distinctions and contextual analyses.
Hirsch used the term to broaden and expand on what William James discussed in his landmark 1907 work in epistemology, Pragmatism. James gave a now famous example of a dispute over a squirrel:
The corpus of the dispute was a squirrel--a live squirrel supposed to be clinging to one side of a tree-trunk; while over against the tree's opposite side a human being was imagined to stand. This human witness tries to get sight of the squirrel by moving rapidly round the tree, but no matter how fast he goes, the squirrel moves as fast in the opposite direction, and always keeps the tree between himself and the man, so that never a glimpse of him is caught. The resultant metaphysical problem now is this: DOES THE MAN GO ROUND THE SQUIRREL OR NOT?
James' solution was that by clarifying "pragmatically" whether "around" meant traversing north/east/south/west of something versus traversing left/right/before/behind something, the dispute was readily solvable.
Hirsch actually calls James' example a "verbal" dispute and explains, at some length, the connection between verbal and soft ontological disagreements (they are, according to Hirsch, partly but not completely overlapping sets of problems).
Soft ontological dilemmas are contrasted with hard ones, which would not admit of translation, reconciliation, or overlap, and would instead require a systematic or paradigmatic shift of one's ontology. One can choose to construct a hard or soft ontology, depending on the flexibility one intends to obtain.
Other related terms in philosophy and in cognitive science include "ontological relativity" (as in Quine) and "cognitive relativism" (as in Jack Meiland).
Soft ontology, as proposed in computer science circles by Aviles et al. (2003), is a definition of a domain in terms of a flexible set of ontological dimensions. It can be regarded as a subclass of ontologies as they are conceived of in computer science, in Gruber's terms (1993), as definitions of conceptualization. Unlike standard ontologies, the approach allows the number of its constitutive concepts to increase or decrease dynamically, any subset of the ontology to be taken into account at a time, and the mutual weight or priority of the concepts to vary in a graded manner so as to allow different ontological perspectives.
Where conventional ontologies describe or interpret the conceptualization of a domain from a prioritized perspective, the soft ontology approach transfers the task of interpretation to the observer, user, or learner, depending on the context. (See weak ontology.)
The approach is particularly applicable for expert practices that intend to present raw content or data without presenting any authoritative taxonomy or categorization. It also serves to support neutrality for domains such as ethics, politics, aesthetics or philosophy, in which there may not exist a single authorized conceptualization or truth, or it may be instrumental to present a range of perspectives to the domain.
Soft ontologies also result inherently from user-defined ontology practices, such as folksonomies or tagging practices ("tagsonomies"), characteristic of many contemporary user-driven media genres.
|
https://en.wikipedia.org/wiki/Soft_ontology
|
Terminology extraction (also known as term extraction, glossary extraction, term recognition, or terminology mining) is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus.[1]
In the semantic web era, a growing number of communities and networked enterprises started to access and interoperate through the internet. Modeling these communities and their information needs is important for several web applications, like topic-driven web crawlers,[2] web services,[3] recommender systems,[4] etc. The development of terminology extraction is also essential to the language industry.
One of the first steps in modeling a knowledge domain is to collect a vocabulary of domain-relevant terms, constituting the linguistic surface manifestation of domain concepts. Several methods to automatically extract technical terms from domain-specific document warehouses have been described in the literature.[5][6][7][8][9][10][11][12][13][14][15][16][17]
Typically, approaches to automatic term extraction make use of linguistic processors (part-of-speech tagging, phrase chunking) to extract terminological candidates, i.e. syntactically plausible terminological noun phrases. Noun phrases include compounds (e.g. "credit card"), adjective noun phrases (e.g. "local tourist information office"), and prepositional noun phrases (e.g. "board of directors"). In English, the first two (compounds and adjective noun phrases) are the most frequent.[18] Terminological entries are then filtered from the candidate list using statistical and machine learning methods. Once filtered, because of their low ambiguity and high specificity, these terms are particularly useful for conceptualizing a knowledge domain or for supporting the creation of a domain ontology or a terminology base. Furthermore, terminology extraction is a very useful starting point for semantic similarity, knowledge management, human translation, machine translation, etc.
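The candidate-extraction step can be sketched as a pattern over part-of-speech tags. In this simplified sketch the tags are supplied by hand rather than by a real tagger, and the pattern (optional adjectives/nouns followed by a noun) covers only compounds and adjective noun phrases:

```python
import re

def candidate_terms(tagged_tokens):
    """Extract noun-phrase term candidates from (word, POS) pairs
    using the simplified pattern (ADJ|NOUN)* NOUN."""
    tags = "".join("A" if t == "ADJ" else "N" if t == "NOUN" else "x"
                   for _, t in tagged_tokens)
    words = [w for w, _ in tagged_tokens]
    out = []
    for m in re.finditer(r"[AN]*N", tags):
        if m.end() - m.start() >= 2:   # keep multi-word candidates only
            out.append(" ".join(words[m.start():m.end()]))
    return out

# Hand-tagged example sentence (tags are illustrative assumptions).
sentence = [("the", "DET"), ("local", "ADJ"), ("tourist", "NOUN"),
            ("information", "NOUN"), ("office", "NOUN"), ("issued", "VERB"),
            ("a", "DET"), ("credit", "NOUN"), ("card", "NOUN")]
```

Running `candidate_terms(sentence)` yields "local tourist information office" and "credit card"; a real pipeline would then rank and filter such candidates with the statistical and machine learning methods mentioned above.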
The methods for terminology extraction can be applied to parallel corpora. Combined with, e.g., co-occurrence statistics, candidates for term translations can be obtained.[19] Bilingual terminology can also be extracted from comparable corpora[20] (corpora containing texts of the same type and domain that are not translations of each other).
|
https://en.wikipedia.org/wiki/Terminology_extraction
|
In computer science, a weak ontology is an ontology that is not sufficiently rigorous to allow software to infer new facts without intervention by humans (the end users of the software system). In other words, it does not contain sufficient literal information.[1]
By this standard, which evolved as artificial intelligence methods became more sophisticated and computers were used to model high-human-impact decisions, most databases use weak ontologies.
A weak ontology is adequate for many purposes, including education, where one teaches a set of distinctions and tries to induce the power to make those distinctions in the student. Stronger ontologies only tend to evolve as the weaker ones prove deficient. This phenomenon of ontology becoming stronger over time parallels observations in folk taxonomy about taxonomy: as a society practices more labour specialization, it tends to become intolerant of confusions and mixed metaphors, and sorts them into formal professions or practices. Ultimately, these are expected to reason about them in common, with mathematics, especially statistics and logic, as the common ground.
On the World Wide Web, folksonomy in the form of tag schemas and typed links has tended to evolve slowly in a variety of forums, and then be standardized in such schemes as microformats as more and more forums agree. These weak ontology constructs only become strong in response to growing demands for a more powerful form of search engine than is possible with keywording.
|
https://en.wikipedia.org/wiki/Weak_ontology
|
The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.
Ontologies resemble class hierarchies in object-oriented programming, but there are several critical differences. Class hierarchies are meant to represent structures used in source code that evolve fairly slowly (perhaps with monthly revisions), whereas ontologies are meant to represent information on the Internet and are expected to be evolving almost constantly. Similarly, ontologies are typically far more flexible, as they are meant to represent information on the Internet coming from all sorts of heterogeneous data sources. Class hierarchies, on the other hand, tend to be fairly static and rely on far less diverse and more structured sources of data such as corporate databases.[1]
The OWL languages are characterized by formal semantics. They are built upon the World Wide Web Consortium's (W3C) standard for objects called the Resource Description Framework (RDF).[2] OWL and RDF have attracted significant academic, medical, and commercial interest.
In October 2007,[3] a new W3C working group[4] was started to extend OWL with several new features as proposed in the OWL 1.1 member submission.[5] W3C announced the new version of OWL on 27 October 2009.[6] This new version, called OWL 2, soon found its way into semantic editors such as Protégé and semantic reasoners such as Pellet,[7] RacerPro,[8] FaCT++[9][10] and HermiT.[11]
The OWL family contains many species, serializations, syntaxes, and specifications with similar names. OWL and OWL2 are used to refer to the 2004 and 2009 specifications, respectively. Full species names will be used, including the specification version (for example, OWL2 EL). When referring more generally, "OWL Family" will be used.[12][13][14]
There is a long history of ontological development in philosophy and computer science. Since the 1990s, a number of research efforts have explored how the idea of knowledge representation (KR) from artificial intelligence (AI) could be made useful on the World Wide Web. These included languages based on HTML (called SHOE), based on XML (called XOL, later OIL), and various frame-based KR languages and knowledge acquisition approaches.
In 2000 in the United States, DARPA started development of DAML, led by James Hendler.[15][self-published source] In March 2001, the Joint EU/US Committee on Agent Markup Languages decided that DAML should be merged with OIL.[15] The EU/US ad hoc Joint Working Group on Agent Markup Languages was convened to develop DAML+OIL as a web ontology language. This group was jointly funded by DARPA (under the DAML program) and the European Union's Information Society Technologies (IST) funding project. DAML+OIL was intended to be a thin layer above RDFS,[15] with formal semantics based on a description logic (DL).[16]
DAML+OIL was a particularly major influence on OWL; OWL's design was specifically based on DAML+OIL.[17]
In the late 1990s, the World Wide Web Consortium (W3C) Metadata Activity started work on RDF Schema (RDFS), a language for RDF vocabulary sharing. RDF became a W3C Recommendation in February 1999, and RDFS a Candidate Recommendation in March 2000.[19] In February 2001, the Semantic Web Activity replaced the Metadata Activity.[19] In 2004 (as part of a wider revision of RDF) RDFS became a W3C Recommendation.[20] Though RDFS provides some support for ontology specification, the need for a more expressive ontology language had become clear.[21][self-published source]
The World Wide Web Consortium (W3C) created the Web-Ontology Working Group as part of their Semantic Web Activity. It began work on November 1, 2001, with co-chairs James Hendler and Guus Schreiber.[22] The first working drafts of the abstract syntax, reference, and synopsis were published in July 2002.[22] OWL became a formal W3C recommendation on February 10, 2004, and the working group was disbanded on May 31, 2004.[22]
In 2005, at the OWL Experiences and Directions Workshop, a consensus formed that recent advances in description logic would allow a more expressive revision to satisfy user requirements more comprehensively whilst retaining good computational properties.
In December 2006, the OWL1.1 Member Submission[23] was made to the W3C. The W3C chartered the OWL Working Group as part of the Semantic Web Activity in September 2007. In April 2008, this group decided to call this new language OWL2, indicating a substantial revision.[24]
OWL 2 became a W3C recommendation in October 2009. OWL 2 introduces profiles to improve scalability in typical applications.[6][25]
OWL was chosen as an easily pronounced acronym that would yield good logos, suggest wisdom, and honor William A. Martin's One World Language knowledge representation project from the 1970s.[27][28][29]
A 2006 survey of ontologies available on the web collected 688 OWL ontologies. Of these, 199 were OWL Lite, 149 were OWL DL and 337 OWL Full (by syntax). They found that 19 ontologies had in excess of 2,000 classes, and that 6 had more than 10,000. The same survey collected 587 RDFS vocabularies.[30]
"An ontology is an explicit specification of a conceptualization." (Gruber, 1993)
The data described by an ontology in the OWL family is interpreted as a set of "individuals" and a set of "property assertions" which relate these individuals to each other. An ontology consists of a set of axioms which place constraints on sets of individuals (called "classes") and the types of relationships permitted between them. These axioms provide semantics by allowing systems to infer additional information based on the data explicitly provided. A full introduction to the expressive power of OWL is provided in the W3C's OWL Guide.[32]
OWL ontologies can import other ontologies, adding information from the imported ontology to the current ontology.[17]
An ontology describing families might include axioms stating that a "hasMother" property is only present between two individuals when "hasParent" is also present, and that individuals of class "HasTypeOBlood" are never related via "hasParent" to members of the "HasTypeABBlood" class. If it is stated that the individual Harriet is related via "hasMother" to the individual Sue, and that Harriet is a member of the "HasTypeOBlood" class, then it can be inferred that Sue is not a member of "HasTypeABBlood". This is, however, only true if the concepts of "Parent" and "Mother" only mean biological parent or mother and not social parent or mother.
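The inference in the family example can be sketched as a couple of forward rules over explicit assertions. The rule encoding below is our own illustration of the logic, not how an OWL reasoner (which uses description-logic algorithms such as tableau procedures) is actually implemented:

```python
# Asserted facts from the family example.
has_mother = {("Harriet", "Sue")}
member_of = {("Harriet", "HasTypeOBlood")}

# Axiom 1: hasMother is only present when hasParent is also present,
# so every hasMother assertion yields a hasParent assertion.
has_parent = set(has_mother)

# Axiom 2: individuals of HasTypeOBlood are never related via hasParent
# to members of HasTypeABBlood. Derive the negative class memberships.
not_member_of = {
    (parent, "HasTypeABBlood")
    for (child, parent) in has_parent
    if (child, "HasTypeOBlood") in member_of
}

# not_member_of now contains ("Sue", "HasTypeABBlood"):
# Sue is inferred not to be a member of HasTypeABBlood.
```

The same caveat as in the text applies: the inference is only sound if the properties are read as biological parenthood.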
To obtain a subset of first-order logic that is decidable, propositional logic was used as a starting point, increasing its power by adding constructs represented by convention with acronyms:
The W3C-endorsed OWL specification includes the definition of three variants of OWL, with different levels of expressiveness. These are OWL Lite, OWL DL and OWL Full (ordered by increasing expressiveness). Each of these sublanguages is a syntactic extension of its simpler predecessor. The following set of relations hold. Their inverses do not.
OWL Lite was originally intended to support those users primarily needing a classification hierarchy and simple constraints. For example, while it supports cardinality constraints, it only permits cardinality values of 0 or 1. It was hoped that it would be simpler to provide tool support for OWL Lite than for its more expressive relatives, allowing a quick migration path for systems using thesauri and other taxonomies. In practice, however, most of the expressiveness constraints placed on OWL Lite amount to little more than syntactic inconveniences: most of the constructs available in OWL DL can be built using complex combinations of OWL Lite features, and OWL Lite is as expressive as the description logic SHIF(D){\displaystyle {\mathcal {SHIF}}(\mathbf {D} )}.[24] Development of OWL Lite tools has thus proven to be almost as difficult as development of tools for OWL DL, and OWL Lite is not widely used.[24]
OWL DL is designed to provide the maximum expressiveness possible while retaining computational completeness (either φ or ¬φ holds), decidability (there is an effective procedure to determine whether φ is derivable or not), and the availability of practical reasoning algorithms. OWL DL includes all OWL language constructs, but they can be used only under certain restrictions (for example, number restrictions may not be placed upon properties which are declared to be transitive; and while a class may be a subclass of many classes, a class cannot be an instance of another class). OWL DL is so named due to its correspondence with description logic, a field of research that has studied the logics that form the formal foundation of OWL.
OWL DL can be expressed as SHOIN(D){\displaystyle {\mathcal {SHOIN}}(\mathbf {D} )}, using the description-logic naming letters introduced above.
OWL Full is based on a different semantics from OWL Lite or OWL DL, and was designed to preserve some compatibility with RDF Schema. For example, in OWL Full a class can be treated simultaneously as a collection of individuals and as an individual in its own right; this is not permitted in OWL DL. OWL Full allows an ontology to augment the meaning of the pre-defined (RDF or OWL) vocabulary. OWL Full is undecidable, so no reasoning software is able to perform complete reasoning for it.
In OWL 2 there are three sublanguages (known as profiles):[25]
The OWL family of languages supports a variety of syntaxes. It is useful to distinguish high-level syntaxes aimed at specification from exchange syntaxes more suitable for general use.
These are close to the ontology structure of languages in the OWL family.
High level syntax is used to specify the OWL ontology structure and semantics.[36]
The OWL abstract syntax presents an ontology as a sequence of annotations, axioms and facts. Annotations carry machine- and human-oriented meta-data. Information about the classes, properties and individuals that compose the ontology is contained in axioms and facts only.
Each class, property and individual is either anonymous or identified by a URI reference. Facts state data either about an individual or about a pair of individual identifiers (that the objects identified are distinct or the same). Axioms specify the characteristics of classes and properties. This style is similar to frame languages, and quite dissimilar to well-known syntaxes for DLs and the Resource Description Framework (RDF).[36]
Sean Bechhofer et al. argue that though this syntax is hard to parse, it is quite concrete. They conclude that the name abstract syntax may be somewhat misleading.[37]
This syntax closely follows the structure of an OWL2 ontology. It is used by OWL2 to specify semantics, mappings to exchange syntaxes and profiles.[38]
Syntactic mappings into RDF are specified[36][40] for languages in the OWL family. Several RDF serialization formats have been devised. Each leads to a syntax for languages in the OWL family through this mapping. RDF/XML is normative.[36][40]
OWL2 specifies an XML serialization that closely models the structure of an OWL2 ontology.[41]
The Manchester Syntax is a compact, human-readable syntax with a style close to frame languages.
Variations are available for OWL and OWL2. Not all OWL and OWL2 ontologies can be expressed in this syntax.[42]
Consider an ontology for tea based on a Tea class. First, an ontology identifier is needed: every OWL ontology must be identified by a URI (http://www.example.org/tea.owl, say). This example provides a sense of the syntax. To save space below, preambles and prefix definitions have been skipped.
OWL classes correspond to description logic (DL) concepts, OWL properties to DL roles, while individuals are called the same way in both the OWL and the DL terminology.[44]
In the beginning, IS-A was quite simple. Today, however, there are almost as many meanings for this inheritance link as there are knowledge-representation systems.
Early attempts to build large ontologies were plagued by a lack of clear definitions. Members of the OWL family have model-theoretic formal semantics, and so have strong logical foundations.
Description logics are a family of logics that are decidable fragments of first-order logic with attractive and well-understood computational properties. OWL DL and OWL Lite semantics are based on DLs.[46] They combine a syntax for describing and exchanging ontologies, and formal semantics that gives them meaning. For example, OWL DL corresponds to the SHOIN(D){\displaystyle {\mathcal {SHOIN}}^{\mathcal {(D)}}} description logic, while OWL 2 corresponds to the SROIQ(D){\displaystyle {\mathcal {SROIQ}}^{\mathcal {(D)}}} logic.[47] Sound, complete, terminating reasoners (i.e. systems which are guaranteed to derive every consequence of the knowledge in an ontology) exist for these DLs.
OWL Full is intended to be compatible with RDF Schema (RDFS), and to be capable of augmenting the meanings of existing Resource Description Framework (RDF) vocabulary.[48] A model theory describes the formal semantics for RDF.[49] This interpretation provides the meaning of RDF and RDFS vocabulary. So, the meaning of OWL Full ontologies is defined by extension of the RDFS meaning, and OWL Full is a semantic extension of RDF.[50]
[The closed] world assumption implies that everything we don't know is false, while the open world assumption states that everything we don't know is undefined.
The languages in the OWL family use the open world assumption. Under the open world assumption, if a statement cannot be proven to be true with current knowledge, we cannot draw the conclusion that the statement is false.
A relational database consists of sets of tuples with the same attributes. SQL is a query and management language for relational databases. Prolog is a logic programming language. Both use the closed world assumption.
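The contrast can be sketched in a small, hypothetical Python illustration (the fact base, predicate names, and return conventions are invented for the example and are not part of any OWL or SQL API):

```python
# Hypothetical fact base, reusing the family example from above.
facts = {("hasParent", "Harriet", "Sue")}

def closed_world_holds(fact):
    # Closed world (SQL/Prolog style): anything not provable is false.
    return fact in facts

def open_world_holds(fact):
    # Open world (OWL style): absence of proof yields "unknown", not "false".
    return True if fact in facts else "unknown"

query = ("hasParent", "Harriet", "Bob")
print(closed_world_holds(query))  # False
print(open_world_holds(query))    # unknown
```

The same missing fact is treated as false under the closed world assumption but merely as unknown under the open world assumption.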
The following tools include public ontology browsers:
https://en.wikipedia.org/wiki/Web_Ontology_Language
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes:
In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods (particularly Markov chain Monte Carlo methods such as Gibbs sampling) for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or to sample from. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally-optimal, exact analytical solution to an approximation of the posterior.
Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum likelihood (ML) or maximum a posteriori (MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as does EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically.
For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables.
In variational inference, the posterior distribution over a set of unobserved variables Z={Z1…Zn}{\displaystyle \mathbf {Z} =\{Z_{1}\dots Z_{n}\}} given some data X{\displaystyle \mathbf {X} } is approximated by a so-called variational distribution, Q(Z):{\displaystyle Q(\mathbf {Z} ):}
The distributionQ(Z){\displaystyle Q(\mathbf {Z} )}is restricted to belong to a family of distributions of simpler form thanP(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}(e.g. a family of Gaussian distributions), selected with the intention of makingQ(Z){\displaystyle Q(\mathbf {Z} )}similar to the true posterior,P(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}.
The similarity (or dissimilarity) is measured in terms of a dissimilarity functiond(Q;P){\displaystyle d(Q;P)}and hence inference is performed by selecting the distributionQ(Z){\displaystyle Q(\mathbf {Z} )}that minimizesd(Q;P){\displaystyle d(Q;P)}.
The most common type of variational Bayes uses the Kullback–Leibler divergence (KL-divergence) of Q from P as the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as
Note that Q and P are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL-divergence in the other way produces the expectation propagation algorithm.)
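As a concrete illustration, the KL-divergence between two discrete distributions can be computed directly from its definition; the distributions below are arbitrary examples, not from the text:

```python
import math

def kl_divergence(q, p):
    """D_KL(Q || P) = sum over z of Q(z) * ln(Q(z) / P(z)), discrete case."""
    return sum(qz * math.log(qz / pz) for qz, pz in zip(q, p) if qz > 0)

q = [0.5, 0.5]
p = [0.9, 0.1]
print(kl_divergence(q, p))  # the "reversed" direction used in variational Bayes
print(kl_divergence(p, q))  # the other direction; note the asymmetry
```

The two directions give different values, which is why the choice of D_KL(Q ∥ P) versus D_KL(P ∥ Q) matters: the former yields variational Bayes, the latter expectation propagation.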
Variational techniques are typically used to form an approximation for:
The marginalization overZ{\displaystyle \mathbf {Z} }to calculateP(X){\displaystyle P(\mathbf {X} )}in the denominator is typically intractable, because, for example, the search space ofZ{\displaystyle \mathbf {Z} }is combinatorially large. Therefore, we seek an approximation, usingQ(Z)≈P(Z∣X){\displaystyle Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )}.
Given thatP(Z∣X)=P(X,Z)P(X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}}, the KL-divergence above can also be written as
BecauseP(X){\displaystyle P(\mathbf {X} )}is a constant with respect toZ{\displaystyle \mathbf {Z} }and∑ZQ(Z)=1{\displaystyle \sum _{\mathbf {Z} }Q(\mathbf {Z} )=1}becauseQ(Z){\displaystyle Q(\mathbf {Z} )}is a distribution, we have
which, according to the definition ofexpected value(for a discreterandom variable), can be written as follows
which can be rearranged to become
As the log-evidence logP(X){\displaystyle \log P(\mathbf {X} )} is fixed with respect to Q{\displaystyle Q}, maximizing the final term L(Q){\displaystyle {\mathcal {L}}(Q)} minimizes the KL divergence of Q{\displaystyle Q} from P{\displaystyle P}. By appropriate choice of Q{\displaystyle Q}, L(Q){\displaystyle {\mathcal {L}}(Q)} becomes tractable to compute and to maximize. Hence we have both an analytical approximation Q{\displaystyle Q} for the posterior P(Z∣X){\displaystyle P(\mathbf {Z} \mid \mathbf {X} )}, and a lower bound L(Q){\displaystyle {\mathcal {L}}(Q)} for the log-evidence logP(X){\displaystyle \log P(\mathbf {X} )} (since the KL-divergence is non-negative).
The lower bound L(Q){\displaystyle {\mathcal {L}}(Q)} is known as the (negative) variational free energy in analogy with thermodynamic free energy because it can also be expressed as a negative energy EQ[logP(Z,X)]{\displaystyle \operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]} plus the entropy of Q{\displaystyle Q}. The term L(Q){\displaystyle {\mathcal {L}}(Q)} is also known as the Evidence Lower Bound, abbreviated as ELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data.
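The decomposition of the log-evidence into the ELBO plus the KL-divergence can be checked numerically on a toy discrete model; the joint probabilities and the variational distribution below are arbitrary examples:

```python
import math

# Toy joint P(x, z) for a single fixed observation x, over z = 0, 1, 2.
joint = [0.2, 0.1, 0.4]
evidence = sum(joint)                       # P(x), by marginalizing over z
posterior = [pj / evidence for pj in joint] # P(z | x)

Q = [0.3, 0.3, 0.4]                         # an arbitrary variational distribution

# ELBO: L(Q) = E_Q[ln P(x, z)] + entropy of Q.
elbo = sum(q * (math.log(pj) - math.log(q)) for q, pj in zip(Q, joint))
kl = sum(q * math.log(q / p) for q, p in zip(Q, posterior))

# The identity log P(x) = L(Q) + D_KL(Q || P(z | x)) holds exactly.
print(math.log(evidence), elbo + kl)
```

Since the KL term is non-negative, the ELBO is indeed a lower bound on the log-evidence, with equality exactly when Q matches the posterior.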
By the generalized Pythagorean theorem of Bregman divergence, of which KL-divergence is a special case, it can be shown that:[1][2]
whereC{\displaystyle {\mathcal {C}}}is a convex set and the equality holds if:
In this case, the global minimizerQ∗(Z)=q∗(Z1∣Z2)q∗(Z2)=q∗(Z2∣Z1)q∗(Z1),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),}withZ={Z1,Z2},{\displaystyle \mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},}can be found as follows:[1]
in which the normalizing constant is:
The term ζ(X){\displaystyle \zeta (\mathbf {X} )} is often called the evidence lower bound (ELBO) in practice, since P(X)≥ζ(X)=exp(L(Q∗)){\displaystyle P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))},[1] as shown above.
By interchanging the roles ofZ1{\displaystyle \mathbf {Z} _{1}}andZ2,{\displaystyle \mathbf {Z} _{2},}we can iteratively compute the approximatedq∗(Z1){\displaystyle q^{*}(\mathbf {Z} _{1})}andq∗(Z2){\displaystyle q^{*}(\mathbf {Z} _{2})}of the true model's marginalsP(Z1∣X){\displaystyle P(\mathbf {Z} _{1}\mid \mathbf {X} )}andP(Z2∣X),{\displaystyle P(\mathbf {Z} _{2}\mid \mathbf {X} ),}respectively. Although this iterative scheme is guaranteed to converge monotonically,[1]the convergedQ∗{\displaystyle Q^{*}}is only a local minimizer ofDKL(Q∥P){\displaystyle D_{\mathrm {KL} }(Q\parallel P)}.
If the constrained spaceC{\displaystyle {\mathcal {C}}}is confined within independent space, i.e.q∗(Z1∣Z2)=q∗(Z1),{\displaystyle q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),}the above iterative scheme will become the so-called mean field approximationQ∗(Z)=q∗(Z1)q∗(Z2),{\displaystyle Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),}as shown below.
The variational distribution Q(Z){\displaystyle Q(\mathbf {Z} )} is usually assumed to factorize over some partition of the latent variables, i.e. for some partition of the latent variables Z{\displaystyle \mathbf {Z} } into Z1…ZM{\displaystyle \mathbf {Z} _{1}\dots \mathbf {Z} _{M}},
It can be shown using the calculus of variations (hence the name "variational Bayes") that the "best" distribution qj∗{\displaystyle q_{j}^{*}} for each of the factors qj{\displaystyle q_{j}} (in terms of the distribution minimizing the KL divergence, as described above) satisfies:[3]
whereEq−j∗[lnp(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}is theexpectationof the logarithm of thejoint probabilityof the data and latent variables, taken with respect toq∗{\displaystyle q^{*}}over all variables not in the partition: refer to Lemma 4.1 of[4]for a derivation of the distributionqj∗(Zj∣X){\displaystyle q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )}.
In practice, we usually work in terms of logarithms, i.e.:
The constant in the above expression is related to the normalizing constant (the denominator in the expression above for qj∗{\displaystyle q_{j}^{*}}) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. Gaussian, gamma, etc.).
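The coordinate update above can be sketched on a tiny two-variable discrete distribution: each factor is set proportional to the exponentiated expected log of the joint under the other factor, and the KL-divergence to the target decreases monotonically. The target distribution P and the iteration count below are arbitrary choices for the example:

```python
import math

# Target (already normalized) distribution P(z1, z2) over two binary variables.
P = [[0.30, 0.20],
     [0.10, 0.40]]

q1, q2 = [0.5, 0.5], [0.5, 0.5]  # factorized (mean-field) initialization

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def kl_factored(q1, q2, P):
    """D_KL(q1 q2 || P) for the fully enumerable 2x2 case."""
    return sum(q1[i] * q2[j] * math.log(q1[i] * q2[j] / P[i][j])
               for i in range(2) for j in range(2))

kls = [kl_factored(q1, q2, P)]
for _ in range(50):
    # ln q1*(z1) = E_{q2}[ln P(z1, z2)] + const; exponentiate and normalize.
    q1 = normalize([math.exp(sum(q2[j] * math.log(P[i][j]) for j in range(2)))
                    for i in range(2)])
    q2 = normalize([math.exp(sum(q1[i] * math.log(P[i][j]) for i in range(2)))
                    for j in range(2)])
    kls.append(kl_factored(q1, q2, P))
print(kls[0], kls[-1])  # the KL-divergence shrinks toward a local minimum
```

Each sweep is an exact minimization over one factor with the other held fixed, which is why the divergence can never increase, though the converged point is in general only a local minimizer.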
Using the properties of expectations, the expression Eq−j∗[lnp(Z,X)]{\displaystyle \operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]} can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in Zj{\displaystyle \mathbf {Z} _{j}}). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an iterative algorithm, much like EM (the expectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to converge.[5]
In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. themeans); sometimes expectations of squared variables (which can be related to thevarianceof the variables), or expectations of higher powers (i.e. highermoments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual,nonlineardependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer.
The following theorem is referred to as a duality formula for variational inference.[4]It explains some important properties of the variational distributions used in variational Bayes methods.
TheoremConsider twoprobability spaces(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}and(Θ,F,Q){\displaystyle (\Theta ,{\mathcal {F}},Q)}withQ≪P{\displaystyle Q\ll P}. Assume that there is a common dominatingprobability measureλ{\displaystyle \lambda }such thatP≪λ{\displaystyle P\ll \lambda }andQ≪λ{\displaystyle Q\ll \lambda }. Leth{\displaystyle h}denote any real-valuedrandom variableon(Θ,F,P){\displaystyle (\Theta ,{\mathcal {F}},P)}that satisfiesh∈L1(P){\displaystyle h\in L_{1}(P)}. Then the following equality holds
Further, the supremum on the right-hand side is attainedif and only ifit holds
almost surely with respect to probability measureQ{\displaystyle Q}, wherep(θ)=dP/dλ{\displaystyle p(\theta )=dP/d\lambda }andq(θ)=dQ/dλ{\displaystyle q(\theta )=dQ/d\lambda }denote the Radon–Nikodym derivatives of the probability measuresP{\displaystyle P}andQ{\displaystyle Q}with respect toλ{\displaystyle \lambda }, respectively.
Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution, with unknown mean and variance.[6] In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method.
For mathematical convenience, in the following example we work in terms of the precision, i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of the covariance matrix), rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is a one-to-one correspondence between the two.)
We place conjugate prior distributions on the unknown mean μ{\displaystyle \mu } and precision τ{\displaystyle \tau }, i.e. the mean also follows a Gaussian distribution while the precision follows a gamma distribution. In other words:
Thehyperparametersμ0,λ0,a0{\displaystyle \mu _{0},\lambda _{0},a_{0}}andb0{\displaystyle b_{0}}in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions ofμ{\displaystyle \mu }andτ{\displaystyle \tau }.
We are givenN{\displaystyle N}data pointsX={x1,…,xN}{\displaystyle \mathbf {X} =\{x_{1},\ldots ,x_{N}\}}and our goal is to infer theposterior distributionq(μ,τ)=p(μ,τ∣x1,…,xN){\displaystyle q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})}of the parametersμ{\displaystyle \mu }andτ.{\displaystyle \tau .}
Thejoint probabilityof all variables can be rewritten as
where the individual factors are
where
Assume that q(μ,τ)=q(μ)q(τ){\displaystyle q(\mu ,\tau )=q(\mu )q(\tau )}, i.e. that the posterior distribution factorizes into independent factors for μ{\displaystyle \mu } and τ{\displaystyle \tau }. This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation.
Then
In the above derivation, C{\displaystyle C}, C2{\displaystyle C_{2}} and C3{\displaystyle C_{3}} refer to values that are constant with respect to μ{\displaystyle \mu }. Note that the term Eτ[lnp(τ)]{\displaystyle \operatorname {E} _{\tau }[\ln p(\tau )]} is not a function of μ{\displaystyle \mu } and will have the same value regardless of the value of μ{\displaystyle \mu }. Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7.
The last line is simply a quadratic polynomial in μ{\displaystyle \mu }. Since this is the logarithm of qμ∗(μ){\displaystyle q_{\mu }^{*}(\mu )}, we can see that qμ∗(μ){\displaystyle q_{\mu }^{*}(\mu )} itself is a Gaussian distribution.
With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving μ{\displaystyle \mu } and μ2{\displaystyle \mu ^{2}} and completing the square over μ{\displaystyle \mu }), we can derive the parameters of the Gaussian distribution:
Note that all of the above steps can be shortened by using the formula for the sum of two quadratics.
In other words:
The derivation ofqτ∗(τ){\displaystyle q_{\tau }^{*}(\tau )}is similar to above, although we omit some of the details for the sake of brevity.
Exponentiating both sides, we can see that qτ∗(τ){\displaystyle q_{\tau }^{*}(\tau )} is a gamma distribution. Specifically:
Let us recap the conclusions from the previous sections:
and
In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions:
Applying these formulas to the above equations is trivial in most cases, but the equation forbN{\displaystyle b_{N}}takes more work:
We can then write the parameter equations as follows, without any expectations:
Note that there are circular dependencies among the formulas for λN{\displaystyle \lambda _{N}} and bN{\displaystyle b_{N}}. This naturally suggests an EM-like algorithm:
We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc.
It can be shown that this algorithm is guaranteed to converge to a local maximum.
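The EM-like iteration just described can be sketched in Python using the standard mean-field updates for this conjugate Gaussian-gamma model; the data values and hyperparameters below are illustrative, not taken from the text:

```python
# Sketch of the iteration for the Gaussian mean/precision model above.
x = [2.1, 1.9, 2.4, 2.0, 1.6, 2.2]         # observed data (illustrative)
N, xbar = len(x), sum(x) / len(x)
sum_sq = sum(xi * xi for xi in x)
mu0, lambda0, a0, b0 = 0.0, 1.0, 1.0, 1.0  # broad, uninformative priors

# mu_N and a_N do not depend on the other factor's expectations:
mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
a_N = a0 + (N + 1) / 2

E_tau = a0 / b0                            # initial guess for E[tau]
for _ in range(100):
    lambda_N = (lambda0 + N) * E_tau       # q(mu) = Normal(mu_N, 1 / lambda_N)
    E_mu, E_mu2 = mu_N, 1 / lambda_N + mu_N ** 2
    b_N = b0 + 0.5 * ((lambda0 + N) * E_mu2
                      - 2 * (lambda0 * mu0 + N * xbar) * E_mu
                      + sum_sq + lambda0 * mu0 ** 2)
    E_tau = a_N / b_N                      # q(tau) = Gamma(a_N, b_N)

print(mu_N, 1 / lambda_N, E_tau)           # posterior mean, var of mu, E[tau]
```

Only λN and bN participate in the circular dependency, so the loop alternates between them until the expectations stop changing.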
Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions.
The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived:
Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are:
Variational Bayes (VB) is often compared with expectation–maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations.
However, there are a number of differences. Most important is what is being computed.
Imagine a Bayesian Gaussian mixture model described as follows:[3]
Note:
The interpretation of the above variables is as follows:
The joint probability of all variables can be rewritten as
where the individual factors are
where
Assume thatq(Z,π,μ,Λ)=q(Z)q(π,μ,Λ){\displaystyle q(\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=q(\mathbf {Z} )q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )}.
Then[3]
where we have defined
Exponentiating both sides of the formula forlnq∗(Z){\displaystyle \ln q^{*}(\mathbf {Z} )}yields
Requiring that this be normalized ends up requiring that theρnk{\displaystyle \rho _{nk}}sum to 1 over all values ofk{\displaystyle k}, yielding
where
In other words, q∗(Z){\displaystyle q^{*}(\mathbf {Z} )} is a product of single-observation multinomial distributions, and factors over each individual zn{\displaystyle \mathbf {z} _{n}}, which is distributed as a single-observation multinomial distribution with parameters rnk{\displaystyle r_{nk}} for k=1…K{\displaystyle k=1\dots K}.
Furthermore, we note that
which is a standard result for categorical distributions.
Now, considering the factorq(π,μ,Λ){\displaystyle q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )}, note that it automatically factors intoq(π)∏k=1Kq(μk,Λk){\displaystyle q(\mathbf {\pi } )\prod _{k=1}^{K}q(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})}due to the structure of the graphical model defining our Gaussian mixture model, which is specified above.
Then,
Taking the exponential of both sides, we recognizeq∗(π){\displaystyle q^{*}(\mathbf {\pi } )}as aDirichlet distribution
where
where
Finally
Grouping and reading off terms involving μk{\displaystyle \mathbf {\mu } _{k}} and Λk{\displaystyle \mathbf {\Lambda } _{k}}, the result is a Gaussian-Wishart distribution given by
given the definitions
Finally, notice that these functions require the values ofrnk{\displaystyle r_{nk}}, which make use ofρnk{\displaystyle \rho _{nk}}, which is defined in turn based onE[lnπk]{\displaystyle \operatorname {E} [\ln \pi _{k}]},E[ln|Λk|]{\displaystyle \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]}, andEμk,Λk[(xn−μk)TΛk(xn−μk)]{\displaystyle \operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]}. Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them:
These results lead to
These can be converted from proportional to absolute values by normalizing overk{\displaystyle k}so that the corresponding values sum to 1.
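This normalization step can be sketched directly; the ρnk values below are arbitrary illustrative numbers:

```python
# Converting proportional quantities rho_nk into responsibilities r_nk
# by normalizing over k so that each row sums to 1.
rho = [[0.2, 0.6, 0.2],
       [0.1, 0.1, 0.3]]  # rows: data points n; columns: components k

r = [[v / sum(row) for v in row] for row in rho]
print(r)  # each row of responsibilities now sums to 1
```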
Note that:
This suggests an iterative procedure that alternates between two steps:
Note that these steps correspond closely with the standard EM algorithm to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities rnk{\displaystyle r_{nk}} in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. p(Z∣X){\displaystyle p(\mathbf {Z} \mid \mathbf {X} )}; the computation of the statistics Nk{\displaystyle N_{k}}, x¯k{\displaystyle {\bar {\mathbf {x} }}_{k}}, and Sk{\displaystyle \mathbf {S} _{k}} corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model.
Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family.
https://en.wikipedia.org/wiki/Variational_Bayesian_methods