Maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen according to the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with the largest entropy. While the standard random walk chooses, for every vertex, a uniform probability distribution among its outgoing edges, locally maximizing the entropy rate, MERW maximizes it globally (as average entropy production) by assuming a uniform probability distribution among all paths in a given graph.
MERW is used in various fields of science. A direct application is choosing probabilities to maximize the transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties have also made it useful, for example, in the analysis of complex networks,[1] such as link prediction,[2] community detection,[3] robust transport over networks[4] and centrality measures,[5] as well as in image analysis, for example for detecting visual saliency regions,[6] object localization,[7] tampering detection[8] or the tractography problem.[9]
Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, such as Anderson localization.[10]
Consider a graph with n vertices, defined by an adjacency matrix A ∈ {0,1}^{n×n}: A_{ij} = 1 if there is an edge from vertex i to j, and 0 otherwise. For simplicity assume it is an undirected graph, which corresponds to a symmetric A; however, MERW can also be generalized to directed and weighted graphs (obtaining, for example, the Boltzmann distribution among paths instead of the uniform one).
We would like to choose a random walk as a Markov process on this graph: for every vertex i and each of its outgoing edges to j, choose the probability S_{ij} that the walker uses this edge after visiting i. Formally, find a stochastic matrix S (containing the transition probabilities of a Markov chain) such that
Assuming this graph is connected and not periodic, ergodic theory says that evolution of this stochastic process leads to some stationary probability distribution ρ such that ρS = ρ.
Using the Shannon entropy of every vertex and averaging over the probability of visiting each vertex (so that its entropy is weighted accordingly), we get the following formula for the average entropy production (entropy rate) of the stochastic process:
This definition turns out to be equivalent to the asymptotic average entropy (per length) of the probability distribution in the space of paths for this stochastic process.
In the standard random walk, referred to here as the generic random walk (GRW), we naturally choose each outgoing edge to be equally probable:
For a symmetric A this leads to a stationary probability distribution ρ with
It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal global average entropy rate H(S).
MERW chooses the stochastic matrix which maximizes H(S), or equivalently assumes a uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue λ and corresponding eigenvector ψ of the adjacency matrix, i.e. the largest λ ∈ ℝ with corresponding ψ ∈ ℝ^n such that ψA = λψ. Then the stochastic matrix and stationary probability distribution are given by
for which every possible path of length l from the i-th to the j-th vertex has probability
Its entropy rate is log(λ) and the stationary probability distribution ρ is
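The construction above can be checked numerically. The following is a minimal sketch (assuming NumPy; not part of the original article) that builds S and ρ from the dominant eigenpair of A for a small undirected graph:

```python
import numpy as np

# Illustrative sketch: build the MERW transition matrix from an adjacency
# matrix. For an undirected connected graph, the dominant eigenpair
# (lambda, psi) of A gives S_ij = A_ij * psi_j / (lambda * psi_i)
# and the stationary distribution rho_i proportional to psi_i^2.
def merw(A):
    eigvals, eigvecs = np.linalg.eigh(A)       # A is symmetric
    lam = eigvals[-1]                          # dominant eigenvalue
    psi = eigvecs[:, -1]
    psi = psi * np.sign(psi[np.argmax(np.abs(psi))])  # Perron vector is positive
    S = A * np.outer(1.0 / psi, psi) / lam
    rho = psi**2 / np.sum(psi**2)
    return S, rho, lam

# Example: a 4-cycle. The graph is 2-regular, so lambda = 2 and MERW
# coincides with GRW here (every row of S is uniform on the two neighbors).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
S, rho, lam = merw(A)
```

On irregular graphs S and ρ differ from GRW, with ρ concentrating where ψ is large.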
In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph (they are nonlocal). Hence, they should not be imagined as being directly applied by the walker: if random-looking decisions are made based on the local situation, as a person would make them, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we have no additional knowledge about the system. For example, it would be appropriate for modelling our knowledge about an object performing some complex dynamics, not necessarily random, such as a particle.
Assume for simplicity that the considered graph is undirected, connected and aperiodic, which allows us to conclude from the Perron–Frobenius theorem that the dominant eigenvector is unique. Hence A^l can be asymptotically (l → ∞) approximated by λ^l ψψ^T (or λ^l |ψ⟩⟨ψ| in bra–ket notation).
MERW requires a uniform distribution along paths. The number m_{il} of paths of length 2l with vertex i in the center is
hence, for all i,
Analogously calculating the probability distribution for two succeeding vertices, one obtains that the probability of being at the i-th vertex and next at the j-th vertex is
Dividing by the probability of being at the i-th vertex, i.e. ρ_i, gives the conditional probability S_{ij} of the j-th vertex being next after the i-th vertex:
We have assumed that A_{ij} ∈ {0,1}, yielding a MERW corresponding to the uniform ensemble among paths. However, the above derivation works for any real nonnegative A for which the Perron–Frobenius theorem applies. Given A_{ij} = exp(−E_{ij}), the probability of a particular length-l path (γ_0, …, γ_l) is as follows:
which is the same as the Boltzmann distribution of paths with energy defined as the sum of E_{ij} over the edges of the path. For example, this can be used with the transfer matrix to calculate the probability distribution of patterns in the Ising model.
Let us first look at a simple nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but without using two successive 1s: after a 1 there has to be a 0. To maximize the amount of information transmitted in such a sequence, we should assume a uniform probability distribution over the space of all possible sequences fulfilling this constraint. In practice, after a 1 we have to use a 0, but there remains the freedom of choosing the probability of a 0 after a 0. Let us denote this probability by q; entropy coding then allows encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given q turns out to be ρ = (1/(2−q), 1 − 1/(2−q)). Hence, the entropy production is H(S) = ρ_0 (q log(1/q) + (1−q) log(1/(1−q))), which is maximized for q = (√5 − 1)/2 ≈ 0.618, known as the golden ratio. In contrast, the standard random walk would choose the suboptimal q = 0.5. While choosing a larger q reduces the amount of information produced after a 0, it also reduces the frequency of 1s, after which we cannot write any information.
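This maximization is easy to verify numerically. The sketch below (assuming NumPy; the grid scan is an illustrative check, not from the article) evaluates H(q) = ρ_0·h(q) and locates its maximum near the golden ratio:

```python
import numpy as np

# Entropy rate for the no-two-consecutive-1s constraint: after a 0 we
# emit another 0 with probability q; after a 1 we must emit 0. The
# stationary probability of the state "0" is rho0 = 1/(2-q), and entropy
# is only produced in that state, at the binary-entropy rate h(q).
def entropy_rate(q):
    rho0 = 1.0 / (2.0 - q)
    h = -q * np.log2(q) - (1.0 - q) * np.log2(1.0 - q)
    return rho0 * h

qs = np.linspace(0.01, 0.99, 9801)     # grid with step 1e-4
q_star = qs[np.argmax(entropy_rate(qs))]
golden = (np.sqrt(5.0) - 1.0) / 2.0    # about 0.618
```

The maximizer lands at the golden ratio (up to grid resolution), and H(q*) exceeds the GRW value H(0.5).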
A more complex example is a defected one-dimensional cyclic lattice: say, 1000 nodes connected in a ring, where every node except the defects has a self-loop (an edge to itself). In the standard random walk (GRW) the stationary probability of a defect is 2/3 of that of a non-defect vertex, so there is nearly no localization; the same holds for standard diffusion, which is the infinitesimal limit of GRW. For MERW we first have to find the dominant eigenvector of the adjacency matrix, i.e. maximize λ in
(λψ)_x = (Aψ)_x = ψ_{x−1} + (1 − V_x) ψ_x + ψ_{x+1}
for all positions x, where V_x = 1 for defects and 0 otherwise. Subtracting 3ψ_x from both sides and multiplying the equation by −1, we get
E ψ_x = −(ψ_{x−1} − 2ψ_x + ψ_{x+1}) + V_x ψ_x
where E = 3 − λ is now minimized, becoming the analog of energy. The formula inside the bracket is the discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation. As in quantum mechanics, MERW predicts that the probability distribution should be exactly that of the quantum ground state: ρ_x ∝ ψ_x², with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, we recover the standard continuous stationary (time-independent) Schrödinger equation (Eψ = −Cψ_xx + Vψ for C = ħ²/2m).[11]
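A smaller instance of this lattice can be diagonalized directly. The sketch below (assuming NumPy; a 100-node ring with two defects is an illustrative choice, not the article's 1000-node example) shows the MERW stationary density concentrating between the defects:

```python
import numpy as np

# A ring of n nodes; every node has edges to its two neighbors and a
# self-loop, except the "defect" nodes, which lack the self-loop.
n = 100
defects = [0, 50]
A = np.zeros((n, n))
for x in range(n):
    A[x, (x - 1) % n] = A[x, (x + 1) % n] = 1.0
    if x not in defects:
        A[x, x] = 1.0                  # self-loop away from defects

# MERW stationary density: square of the dominant (Perron) eigenvector.
eigvals, eigvecs = np.linalg.eigh(A)
psi = np.abs(eigvecs[:, -1])
rho = psi**2 / np.sum(psi**2)
# rho is strongly peaked midway between the defects (around x = 25 and 75),
# unlike the nearly uniform GRW stationary distribution deg(x)/sum(deg).
```

The density at the well centers exceeds the density at the defects by orders of magnitude, the discrete analogue of ground-state localization.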
https://en.wikipedia.org/wiki/Maximal_entropy_random_walk
In mathematics, a self-avoiding walk (SAW) is a sequence of moves on a lattice (a lattice path) that does not visit the same point more than once. This is a special case of the graph-theoretical notion of a path. A self-avoiding polygon (SAP) is a closed self-avoiding walk on a lattice. Very little is known rigorously about the self-avoiding walk from a mathematical perspective, although physicists have provided numerous conjectures that are believed to be true and are strongly supported by numerical simulations.
In computational physics, a self-avoiding walk is a chain-like path in R² or R³ with a certain number of nodes, typically a fixed step length, and the property that it does not cross itself or another walk. A system of SAWs satisfies the so-called excluded volume condition. In higher dimensions, the SAW is believed to behave much like the ordinary random walk.
SAWs and SAPs play a central role in modeling the topological and knot-theoretic behavior of thread- and loop-like molecules such as proteins. Indeed, SAWs may have first been introduced by the chemist Paul Flory[1] in order to model the real-life behavior of chain-like entities such as solvents and polymers, whose physical volume prohibits multiple occupation of the same spatial point.
SAWs are fractals. For example, in d = 2 the fractal dimension is 4/3, for d = 3 it is close to 5/3, while for d ≥ 4 the fractal dimension is 2. This dimension is called the upper critical dimension, above which excluded volume is negligible. A SAW that does not satisfy the excluded volume condition has recently been studied to model explicit surface geometry resulting from the expansion of a SAW.[2]
The properties of SAWs cannot be calculated analytically, so numerical simulations are employed. The pivot algorithm is a common method for Markov chain Monte Carlo simulation of the uniform measure on n-step self-avoiding walks. It works by taking a self-avoiding walk, randomly choosing a point on this walk, and then applying a symmetry transformation (a rotation or reflection) to the part of the walk after that point to create a new walk, which is accepted if it is again self-avoiding.
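A minimal sketch of a pivot move on the square lattice (plain Python; for brevity only rotations are used as pivot symmetries, omitting reflections, and all names are illustrative):

```python
import random

# One pivot move: pick a pivot site, rotate the tail of the walk about it
# by a random nontrivial quarter-turn, and accept the proposal only if
# the resulting walk is still self-avoiding.
def rotate(dx, dy, r):
    for _ in range(r):                 # r quarter-turns counterclockwise
        dx, dy = -dy, dx
    return dx, dy

def pivot_step(walk):
    k = random.randrange(1, len(walk) - 1)   # pivot index (endpoints excluded)
    r = random.randrange(1, 4)               # 90, 180 or 270 degrees
    px, py = walk[k]
    tail = [(px + rotate(x - px, y - py, r)[0],
             py + rotate(x - px, y - py, r)[1]) for x, y in walk[k + 1:]]
    candidate = walk[:k + 1] + tail
    # accept only if self-avoiding, otherwise keep the old walk
    return candidate if len(set(candidate)) == len(candidate) else walk

random.seed(0)
walk = [(i, 0) for i in range(20)]     # a straight rod is a valid initial SAW
for _ in range(500):
    walk = pivot_step(walk)
```

Every accepted move preserves the walk's length, unit steps, and self-avoidance, so the chain stays inside the state space of 20-step SAWs.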
Calculating the number of self-avoiding walks in any given lattice is a common computational problem. There is currently no known formula, although there are rigorous methods of approximation.[3][4]
One of the phenomena associated with self-avoiding walks and statistical physics models in general is the notion of universality, that is, independence of macroscopic observables from microscopic details, such as the choice of lattice. One important quantity that appears in conjectures for universal laws is the connective constant, defined as follows. Let c_n denote the number of n-step self-avoiding walks. Since every (n + m)-step self-avoiding walk can be decomposed into an n-step self-avoiding walk and an m-step self-avoiding walk, it follows that c_{n+m} ≤ c_n c_m. Therefore, the sequence {log c_n} is subadditive and we can apply Fekete's lemma to show that the following limit exists:
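The counts c_n and the submultiplicative inequality can be checked by brute force for small n; a plain-Python sketch (exhaustive enumeration, feasible only for short walks):

```python
# Count n-step self-avoiding walks on the square lattice Z^2 starting at
# the origin, by exhaustive depth-first enumeration.
def count_saws(n):
    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        x, y = pos
        total = 0
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, visited, steps_left - 1)
                visited.remove(nxt)
        return total
    return extend((0, 0), {(0, 0)}, n)

c = {n: count_saws(n) for n in range(1, 7)}
# Known counts on Z^2: c_1 = 4, c_2 = 12, c_3 = 36, c_4 = 100, c_5 = 284, c_6 = 780
```

The enumeration reproduces the known small counts and satisfies c_{n+m} ≤ c_n c_m, the inequality behind Fekete's lemma.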
μ is called the connective constant; since c_n depends on the particular lattice chosen for the walk, so does μ. The exact value of μ is known only for the hexagonal lattice, where Stanislav Smirnov and Hugo Duminil-Copin showed that it is equal to:[5]
For other lattices, μ has only been approximated numerically, and is believed not even to be an algebraic number. It is conjectured that[6]
as n → ∞, where μ depends on the lattice but the power-law correction n^{11/32} does not; in other words, this law is believed to be universal.
Self-avoiding walks have also been studied in the context of network theory.[7] In this context, it is customary to treat the SAW as a dynamical process: at every time step a walker randomly hops between neighboring nodes of the network. The walk ends when the walker reaches a dead-end state, such that it can no longer progress to any unvisited node. It was recently found that on Erdős–Rényi networks, the distribution of path lengths of such dynamically grown SAWs can be calculated analytically and follows the Gompertz distribution.[8] For arbitrary networks, the distribution of path lengths of the walk, the degree distribution of the non-visited network and the first-hitting-time distribution to a node can be obtained by solving a set of coupled recurrence equations.[9]
Consider the uniform measure on n-step self-avoiding walks in the full plane. It is currently unknown whether the limit of the uniform measure as n → ∞ induces a measure on infinite full-plane walks. However, Harry Kesten has shown that such a measure exists for self-avoiding walks in the half-plane. One important question involving self-avoiding walks is the existence and conformal invariance of the scaling limit, that is, the limit as the length of the walk goes to infinity and the mesh of the lattice goes to zero. The scaling limit of the self-avoiding walk is conjectured to be described by Schramm–Loewner evolution with parameter κ = 8/3.
https://en.wikipedia.org/wiki/Self-avoiding_walk
In probability theory and statistics, a unit root is a feature of some stochastic processes (such as random walks) that can cause problems in statistical inference involving time series models. A linear stochastic process has a unit root if 1 is a root of the process's characteristic equation. Such a process is non-stationary but does not always have a trend.
If the other roots of the characteristic equation lie inside the unit circle (that is, have modulus less than one), then the first difference of the process will be stationary; otherwise, the process will need to be differenced multiple times to become stationary.[1] If there are d unit roots, the process will have to be differenced d times in order to make it stationary.[2] Due to this characteristic, unit root processes are also called difference stationary.[3][4]
Unit root processes may sometimes be confused with trend-stationary processes; while they share many properties, they differ in many aspects. It is possible for a time series to be non-stationary, yet have no unit root and be trend-stationary. In both unit root and trend-stationary processes, the mean can be growing or decreasing over time; however, in the presence of a shock, trend-stationary processes are mean-reverting (the shock is transitory: the time series converges again towards the growing mean, which was not affected by the shock), while in unit root processes a shock has a permanent impact on the mean (there is no convergence over time).[5]
If a root of the process's characteristic equation is larger than 1, then it is called an explosive process, even though such processes are sometimes inaccurately called unit root processes.
The presence of a unit root can be tested using a unit root test.
Consider a discrete-time stochastic process (y_t, t = 1, 2, 3, …), and suppose that it can be written as an autoregressive process of order p:
Here, (ε_t, t = 0, 1, 2, …) is a serially uncorrelated, zero-mean stochastic process with constant variance σ². For convenience, assume y_0 = 0. If m = 1 is a root of the characteristic equation, of multiplicity 1:
then the stochastic process has a unit root or, alternatively, is integrated of order one, denoted I(1). If m = 1 is a root of multiplicity r, then the stochastic process is integrated of order r, denoted I(r).
The first-order autoregressive model, y_t = a_1 y_{t−1} + ε_t, has a unit root when a_1 = 1. In this example, the characteristic equation is m − a_1 = 0, whose root is m = 1.
If the process has a unit root, then it is a non-stationary time series. That is, the moments of the stochastic process depend on t. To illustrate the effect of a unit root, we can consider the first-order case, starting from y_0 = 0:
By repeated substitution, we can write y_t = y_0 + Σ_{j=1}^{t} ε_j. Then the variance of y_t is given by:
The variance depends on t, since Var(y_1) = σ² while Var(y_2) = 2σ². The variance of the series diverges to infinity with t.
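The linear variance growth Var(y_t) = tσ² is easy to see in simulation; a sketch (assuming NumPy; the sample sizes are arbitrary illustrative choices):

```python
import numpy as np

# Simulate many unit-root AR(1) paths y_t = y_{t-1} + eps_t and check
# that the cross-path sample variance of y_t grows linearly in t,
# the hallmark of non-stationarity.
rng = np.random.default_rng(0)
sigma, T, reps = 1.0, 50, 20000
eps = rng.normal(0.0, sigma, size=(reps, T))
y = np.cumsum(eps, axis=1)             # y_t = sum of the shocks up to t
var_t = y.var(axis=0)                  # empirical Var(y_t) across paths
# var_t[t-1] is close to t * sigma^2, e.g. about 10 at t = 10 and 50 at t = 50
```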
There are various tests to check for the existence of a unit root; some of them are given by:
In addition to autoregressive (AR) and autoregressive moving-average (ARMA) models, other important models arise in regression analysis where the model errors may themselves have a time series structure and thus may need to be modelled by an AR or ARMA process that may have a unit root, as discussed above. The finite-sample properties of regression models with first-order ARMA errors, including unit roots, have been analyzed.[6][7]
Often, ordinary least squares (OLS) is used to estimate the slope coefficients of the autoregressive model. Use of OLS relies on the stochastic process being stationary; when the stochastic process is non-stationary, the use of OLS can produce invalid estimates. Granger and Newbold called such estimates 'spurious regression' results:[8] high R² values and high t-ratios yielding results with no real (in their context, economic) meaning.
To estimate the slope coefficients, one should first conduct a unit root test, whose null hypothesis is that a unit root is present. If that hypothesis is rejected, one can use OLS. However, if the presence of a unit root is not rejected, then one should apply the difference operator to the series. If another unit root test shows the differenced time series to be stationary, OLS can then be applied to this series to estimate the slope coefficients.
For example, in the AR(1) case with a_1 = 1, Δy_t = y_t − y_{t−1} = ε_t is stationary.
In the AR(2) case, y_t = a_1 y_{t−1} + a_2 y_{t−2} + ε_t can be written as (1 − λ_1 L)(1 − λ_2 L) y_t = ε_t, where L is the lag operator that decreases the time index of a variable by one period: L y_t = y_{t−1}. If λ_2 = 1, the model has a unit root and we can define z_t = Δy_t; then
is stationary if |λ_1| < 1. OLS can be used to estimate the slope coefficient, λ_1.
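A quick numerical illustration of this recipe (assuming NumPy; the value λ_1 = 0.5 and the sample size are arbitrary choices): simulate the differenced series z_t = λ_1 z_{t−1} + ε_t, rebuild the unit-root series y by cumulative summation, and estimate λ_1 by OLS on z:

```python
import numpy as np

# (1 - 0.5 L)(1 - L) y_t = eps_t has the unit root lambda_2 = 1.
# Its first difference z_t = y_t - y_{t-1} follows the stationary AR(1)
# z_t = 0.5 z_{t-1} + eps_t, whose slope OLS estimates consistently.
rng = np.random.default_rng(1)
lam1, T = 0.5, 100_000
eps = rng.normal(size=T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = lam1 * z[t - 1] + eps[t]    # the stationary first difference
y = np.cumsum(z)                       # the non-stationary series itself
# OLS slope of z_t on z_{t-1} (no intercept, since z has mean zero)
lam1_hat = (z[1:] @ z[:-1]) / (z[:-1] @ z[:-1])
```

Running OLS on the levels y instead of the differences z would be exactly the spurious-regression situation described above.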
If the process has multiple unit roots, the difference operator can be applied multiple times.
Economists debate whether various economic statistics, especially output, have a unit root or are trend-stationary.[9] A unit root process with drift is given in the first-order case by
where c is a constant term referred to as the "drift" term, and e_t is white noise. Any non-zero value of the noise term, occurring for only one period, will permanently affect the value of y_t as shown in the graph, so deviations from the line y_t = a + ct are non-stationary; there is no reversion to any trend line. In contrast, a trend-stationary process is given by
where k is the slope of the trend and u_t is noise (white noise in the simplest case; more generally, noise following its own stationary autoregressive process). Here any transient noise will not alter the long-run tendency for y_t to be on the trend line, as also shown in the graph. This process is said to be trend-stationary because deviations from the trend line are stationary.
The issue is particularly popular in the literature on business cycles.[10][11] Research on the subject began with Nelson and Plosser, whose paper on GNP and other output aggregates failed to reject the unit root hypothesis for these series.[12] Since then, a debate, entwined with technical disputes on statistical methods, has ensued. Some economists[13] argue that GDP has a unit root or structural break, implying that economic downturns result in permanently lower GDP levels in the long run. Other economists argue that GDP is trend-stationary: that is, when GDP dips below trend during a downturn it later returns to the level implied by the trend, so that there is no permanent decrease in output. While the literature on the unit root hypothesis may consist of arcane debate on statistical methods, the hypothesis carries significant practical implications for economic forecasts and policies.
https://en.wikipedia.org/wiki/Unit_root#Unit_root_hypothesis
Cluster(s) may refer to:
https://en.wikipedia.org/wiki/Cluster_(disambiguation)
In statistics and machine learning, the hierarchical Dirichlet process (HDP) is a nonparametric Bayesian approach to clustering grouped data.[1][2] It uses a Dirichlet process for each group of data, with the Dirichlet processes for all groups sharing a base distribution which is itself drawn from a Dirichlet process. This method allows groups to share statistical strength via sharing of clusters across groups. The base distribution being drawn from a Dirichlet process is important, because draws from a Dirichlet process are atomic probability measures, and the atoms will appear in all group-level Dirichlet processes. Since each atom corresponds to a cluster, clusters are shared across all groups. It was developed by Yee Whye Teh, Michael I. Jordan, Matthew J. Beal and David Blei and published in the Journal of the American Statistical Association in 2006,[1] as a formalization and generalization of the infinite hidden Markov model published in 2002.[3]
This model description is sourced from [1]. The HDP is a model for grouped data: the data items come in multiple distinct groups. For example, in a topic model words are organized into documents, with each document formed by a bag (group) of words (data items). Indexing groups by j = 1, …, J, suppose each group consists of data items x_{j1}, …, x_{jn}.
The HDP is parameterized by a base distribution H that governs the a priori distribution over data items, and a number of concentration parameters that govern the a priori number of clusters and the amount of sharing across groups. The j-th group is associated with a random probability measure G_j which has distribution given by a Dirichlet process:
where α_j is the concentration parameter associated with the group, and G_0 is the base distribution shared across all groups. In turn, the common base distribution is Dirichlet process distributed:
with concentration parameter α_0 and base distribution H. Finally, to relate the Dirichlet processes back to the observed data, each data item x_{ji} is associated with a latent parameter θ_{ji}:
The first line states that each parameter has a prior distribution given by G_j, while the second line states that each data item has a distribution F(θ_{ji}) parameterized by its associated parameter. The resulting model is called an HDP mixture model, with the HDP referring to the hierarchically linked set of Dirichlet processes, and the mixture model referring to the way the Dirichlet processes are related to the data items.
To understand how the HDP implements a clustering model, and how clusters become shared across groups, recall that draws from a Dirichlet process are atomic probability measures with probability one. This means that the common base distribution G_0 has a form which can be written as:
where there are an infinite number of atoms, θ*_k, k = 1, 2, …, assuming that the overall base distribution H has infinite support. Each atom is associated with a mass π_{0k}. The masses have to sum to one since G_0 is a probability measure. Since G_0 is itself the base distribution for the group-specific Dirichlet processes, each G_j will have atoms given by the atoms of G_0, and can itself be written in the form:
Thus the set of atoms is shared across all groups, with each group having its own group-specific atom masses. Relating this representation back to the observed data, we see that each data item is described by a mixture model:
where the atoms θ*_k play the role of the mixture component parameters, while the masses π_{jk} play the role of the mixing proportions. In conclusion, each group of data is modeled using a mixture model, with mixture components shared across all groups but mixing proportions being group-specific. In clustering terms, we can interpret each mixture component as modeling a cluster of data items, with clusters shared across all groups, and each group, having its own mixing proportions, composed of different combinations of clusters.
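The atom-sharing mechanism can be sketched with a truncated stick-breaking construction (assuming NumPy; the truncation level K, the concentration values, and the base distribution H = N(0, 1) are illustrative choices, not from the source):

```python
import numpy as np

# Truncated stick-breaking draw from an HDP: G0 ~ DP(alpha0, H) has atoms
# theta*_k with weights pi0; a group measure Gj ~ DP(alpha_j, G0) reuses
# exactly those atoms, with its own group-specific weights pi_j.
rng = np.random.default_rng(0)
K, alpha0, alpha_j = 50, 5.0, 3.0

def stick_breaking(alpha, K):
    betas = rng.beta(1.0, alpha, size=K)
    remain = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    w = betas * remain
    return w / w.sum()                 # renormalize the truncated sticks

theta = rng.normal(size=K)             # atoms theta*_k drawn from H = N(0, 1)
pi0 = stick_breaking(alpha0, K)        # weights of G0
# Gj: draw its (truncated) atoms *from G0*, then give them fresh weights;
# every atom of Gj is therefore one of the shared atoms of G0.
idx = rng.choice(K, size=K, p=pi0)
pi_j = stick_breaking(alpha_j, K)
```

Because `idx` selects atoms from G_0 with probabilities π_0, heavy atoms of G_0 tend to recur across groups, which is exactly how clusters come to be shared.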
The HDP mixture model is a natural nonparametric generalization of latent Dirichlet allocation, where the number of topics can be unbounded and learnt from data.[1] Here each group is a document consisting of a bag of words, each cluster is a topic, and each document is a mixture of topics. The HDP is also a core component of the infinite hidden Markov model,[3] which is a nonparametric generalization of the hidden Markov model allowing the number of states to be unbounded and learnt from data.[1][4]
The HDP can be generalized in a number of directions. The Dirichlet processes can be replaced by Pitman–Yor processes and gamma processes, resulting in the hierarchical Pitman–Yor process and the hierarchical gamma process. The hierarchy can be deeper, with multiple levels of groups arranged in a hierarchy. Such an arrangement has been exploited in the sequence memoizer, a Bayesian nonparametric model for sequences which has a multi-level hierarchy of Pitman–Yor processes. In addition, the Bayesian multi-domain learning (BMDL) model derives domain-dependent latent representations of overdispersed count data based on hierarchical negative binomial factorization, for accurate cancer subtyping even if the number of samples for a specific cancer type is small.[5]
https://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.[1] Other frameworks in the spectrum of supervision include weak or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.[2]
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling, with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as ImageNet-1000) is typically constructed manually, which is much more expensive.
Some algorithms were designed specifically for unsupervised learning, such as clustering algorithms like k-means, dimensionality reduction techniques like principal component analysis (PCA), Boltzmann machine learning, and autoencoders. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by gradient descent, adapted to performing unsupervised learning by designing an appropriate training procedure.
Sometimes a trained model can be used as-is, but more often such models are modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset before fine-tuning it for other applications, such as text classification.[3][4] As another example, autoencoders are trained to produce good features, which can then be used as a module for other models, such as in a latent diffusion model.
Tasks are often categorized as discriminative (recognition) or generative (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see Venn diagram); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU, and adaptive learning rates.
A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of the data is removed, and the model must infer the removed part. This is particularly clear for denoising autoencoders and BERT.
During the learning phase, an unsupervised network tries to mimic the data it's given and uses the error in its mimicked output to correct itself (i.e. correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high energy state in the network.
In contrast to supervised methods' dominant use of backpropagation, unsupervised learning also employs other methods, including the Hopfield learning rule, the Boltzmann learning rule, Contrastive Divergence, Wake Sleep, Variational Inference, Maximum Likelihood, Maximum A Posteriori, Gibbs Sampling, and backpropagating reconstruction errors or hidden state reparameterizations. See the table below for more details.
An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas's macroscopic energy from the microscopic probabilities of particle motion, p ∝ e^{−E/kT}, where k is the Boltzmann constant and T is temperature. In the RBM network the relation is p = e^{−E}/Z,[5] where p and E vary over every possible activation pattern and Z = Σ_{all patterns} e^{−E(pattern)}. To be more precise, p(a) = e^{−E(a)}/Z, where a is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann machine. Paul Smolensky calls −E the Harmony; a network seeks low energy, which means high Harmony.
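For a network small enough to enumerate, Z and p(a) can be computed exactly; a sketch (assuming NumPy; the tiny 2-visible, 2-hidden RBM with random weights is an illustrative choice, with the standard RBM energy E = −vᵀWh − bᵀv − cᵀh):

```python
import numpy as np
from itertools import product

# Exact Boltzmann probabilities p(a) = exp(-E(a)) / Z for a tiny RBM
# with 2 visible and 2 hidden binary units.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))            # visible-hidden weights
b = rng.normal(size=2)                 # visible biases
c = rng.normal(size=2)                 # hidden biases

def energy(v, h):
    return -(v @ W @ h + b @ v + c @ h)

# Enumerate all 2^4 = 16 joint activation patterns a = (v, h).
states = [(np.array(v), np.array(h))
          for v in product([0, 1], repeat=2)
          for h in product([0, 1], repeat=2)]
unnorm = np.array([np.exp(-energy(v, h)) for v, h in states])
Z = unnorm.sum()                       # partition function over all patterns
p = unnorm / Z                         # a proper probability distribution
```

For realistic network sizes Z is intractable, which is why RBM training resorts to approximations such as Contrastive Divergence.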
This table shows connection diagrams of various unsupervised networks, the details of which will be given in the section Comparison of Networks. Circles are neurons and edges between them are connection weights. As network design changes, features are added on to enable new capabilities or removed to make learning faster. For instance, neurons change between deterministic (Hopfield) and stochastic (Boltzmann) to allow robust output, weights are removed within a layer (RBM) to hasten learning, or connections are allowed to become asymmetric (Helmholtz).
Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz came before artificial neural networks, but their work in physics and physiology inspired the analytical methods that were used.
Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.
The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[8] In Hebbian learning, the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence between the action potentials of the two neurons.[9] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
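A minimal sketch of a Hebbian weight update; the learning rate and activity values are arbitrary illustrations:

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Hebb's rule: delta w[i][j] = eta * post[i] * pre[j].
    Connections between co-active neurons are strengthened; no error signal."""
    return [[w[i][j] + eta * post[i] * pre[j] for j in range(len(pre))]
            for i in range(len(post))]

w = [[0.0, 0.0], [0.0, 0.0]]
# pre-synaptic activity [1, 0], post-synaptic activity [1, 1]
w = hebbian_update(w, [1.0, 0.0], [1.0, 1.0])
```

Only the weights from the active pre-synaptic neuron are reinforced; the silent one's weights are unchanged.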
Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same clusters by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[10]
Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.[11] Cluster analysis is a branch of machine learning that groups data that has not been labelled, classified or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into either group.
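As a concrete sketch of cluster analysis, here is a plain k-means loop that groups unlabelled points by proximity alone; the naive initialisation and toy data are assumptions for illustration:

```python
def kmeans(points, k, iters=20):
    """Plain k-means: alternate assigning points to the nearest center
    and recomputing each center as its cluster's mean."""
    centers = points[:k]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(points, k=2)
```

No labels are used anywhere: the grouping emerges purely from the commonalities (here, spatial proximity) in the data.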
A central application of unsupervised learning is in the field of density estimation in statistics,[12] though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution conditioned on the label of input data, unsupervised learning intends to infer an a priori probability distribution.
Some of the most common algorithms used in unsupervised learning include: (1) Clustering, (2) Anomaly detection, (3) Approaches for learning latent variable models. Each approach uses several methods as follows:
One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters (of interest) in the model are related to the moments of one or more random variables, and thus, these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first and second order moments. For a random vector, the first order moment is the mean vector, and the second order moment is the covariance matrix (when the mean is zero). Higher order moments are usually represented using tensors, which are the generalization of matrices to higher orders as multi-dimensional arrays.
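A sketch of estimating the first two moments empirically from samples (the tiny dataset is assumed for illustration):

```python
def first_two_moments(samples):
    """Empirical first moment (mean vector) and second central moment
    (covariance matrix) of a list of equal-length vectors."""
    n, d = len(samples), len(samples[0])
    mean = [sum(x[j] for x in samples) / n for j in range(d)]
    cov = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in samples) / n
            for j in range(d)] for i in range(d)]
    return mean, cov

mean, cov = first_two_moments([(1.0, 2.0), (3.0, 4.0)])
```

In method-of-moments estimation, model parameters are then solved for by equating such empirical moments with their model-implied expressions.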
In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed. It has been shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.[15]
The expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. In contrast, for the method of moments, the global convergence is guaranteed under some conditions.
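A minimal EM sketch for a two-component one-dimensional Gaussian mixture with equal mixing weights (an assumption for brevity); the crude initialisation reflects the sensitivity to starting points noted above:

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1D Gaussian mixture with equal weights.
    May converge only to a local optimum, depending on initialisation."""
    mu = [min(xs), max(xs)]          # crude initialisation
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) / math.sqrt(var[k])
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means and variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var

xs = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
mu, var = em_gmm_1d(xs)
```

On this well-separated toy data the latent component assignments are recovered cleanly; on harder data, multiple restarts are the usual remedy for local optima.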
|
https://en.wikipedia.org/wiki/Unsupervised_learning
|
MALLET is a Java "Machine Learning for Language Toolkit".
MALLET is an integrated collection of Java code useful for statistical natural language processing, document classification, cluster analysis, information extraction, topic modeling and other machine learning applications to text.
MALLET was developed primarily by Andrew McCallum, of the University of Massachusetts Amherst, with assistance from graduate students and faculty from both UMass and the University of Pennsylvania.
|
https://en.wikipedia.org/wiki/Mallet_(software_project)
|
Gensim is an open-source library for unsupervised topic modeling, document indexing, retrieval by similarity, and other natural language processing functionalities, using modern statistical machine learning.
Gensim is implemented in Python and Cython for performance. Gensim is designed to handle large text collections using data streaming and incremental online algorithms, which differentiates it from most other machine learning software packages that target only in-memory processing.
Gensim includes streamed parallelized implementations of fastText,[2] word2vec and doc2vec algorithms,[3] as well as latent semantic analysis (LSA, LSI, SVD), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), tf-idf and random projections.[4]
Some of the novel online algorithms in Gensim were also published in the 2011 PhD dissertation of Radim Řehůřek, the creator of Gensim, Scalability of Semantic Analysis in Natural Language Processing.[5]
The Gensim library has been used and cited in over 1,400 commercial and academic applications as of 2018,[6] in a diverse array of disciplines from medicine to insurance claim analysis to patent search.[7] The software has been covered in several news articles, podcasts and interviews.[8][9][10]
The open source code is developed and hosted on GitHub[11] and a public support forum is maintained on Google Groups[12] and Gitter.[13]
Gensim is commercially supported by the company rare-technologies.com, who also provide student mentorships and academic thesis projects for Gensim via their Student Incubator programme.[14]
|
https://en.wikipedia.org/wiki/Gensim
|
In natural language processing, a sentence embedding is a representation of a sentence as a vector of numbers which encodes meaningful semantic information.[1][2][3][4][5][6][7]
State-of-the-art embeddings are based on the learned hidden-layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated [CLS] token prepended to the beginning of each sentence input into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance[8] by fine-tuning BERT's [CLS] token embeddings through the use of a siamese neural network architecture on the SNLI dataset.
Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder–decoder structure for the task of neighboring sentence prediction; this has been shown to achieve worse performance than approaches such as InferSent or SBERT.
An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW).[9] However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE),[10] which demonstrated performance improvements in downstream text classification tasks.
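The averaging approach can be sketched directly; the tiny two-dimensional word vectors below are hypothetical stand-ins for real Word2vec vectors:

```python
def average_embedding(sentence, word_vectors):
    """CBOW-style sentence embedding: the mean of the word vectors.
    Out-of-vocabulary words are simply skipped."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

# toy 2-dimensional vectors, for illustration only
word_vectors = {"cats": [1.0, 0.0], "chase": [0.0, 1.0], "mice": [1.0, 1.0]}
emb = average_embedding("cats chase mice", word_vectors)
```

The result has the same dimensionality as the word vectors, regardless of sentence length, which is what makes the representation usable for downstream comparison.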
In recent years, sentence embedding has seen a growing level of interest due to its applications in natural-language-queryable knowledge bases through the use of vector indexing for semantic search. LangChain, for instance, utilizes sentence transformers for purposes of indexing documents. In particular, an index is built by generating embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then, given a query in natural language, the embedding for the query can be generated. A top-k similarity search is then run between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information for question answering tasks. This approach is also known formally as retrieval-augmented generation.[11]
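The retrieval step described above amounts to a cosine-similarity ranking over the stored (chunk, embedding) tuples; the embeddings here are toy placeholders for real sentence-transformer outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k(query_emb, chunks, k=1):
    """Rank (chunk, embedding) tuples by cosine similarity to the query."""
    return sorted(chunks, key=lambda c: cosine(query_emb, c[1]), reverse=True)[:k]

# hypothetical pre-computed chunk embeddings
index = [("chunk about cats", [0.9, 0.1]), ("chunk about finance", [0.1, 0.9])]
best = top_k([1.0, 0.0], index, k=1)
```

Production systems replace the linear scan with an approximate nearest-neighbour index, but the similarity function and the (chunk, embedding) layout are the same.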
Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation; for example, optimizing a large language model's generation parameters is often performed by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization[citation needed].
A way of testing sentence encodings is to apply them to the Sentences Involving Compositional Knowledge (SICK) corpus[12] for both entailment (SICK-E) and relatedness (SICK-R).
In [13], the best results are obtained using a BiLSTM network trained on the Stanford Natural Language Inference (SNLI) Corpus. The Pearson correlation coefficient for SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over previous scores is presented in [14]: SICK-R 0.888 and SICK-E 87.8, using a concatenation of bidirectional gated recurrent units.
|
https://en.wikipedia.org/wiki/Sentence_embedding
|
Data extraction is the act or process of retrieving data out of (usually unstructured or poorly structured) data sources for further data processing or data storage (data migration). The import into the intermediate extracting system is thus usually followed by data transformation and possibly the addition of metadata prior to export to another stage in the data workflow.
Usually, the term data extraction is applied when (experimental) data is first imported into a computer from primary sources, like measuring or recording devices. Today's electronic devices will usually present an electrical connector (e.g. USB) through which 'raw data' can be streamed into a personal computer.
Typical unstructured data sources include web pages, emails, documents, PDFs, social media, scanned text, mainframe reports, spool files, multimedia files, etc. Extracting data from these unstructured sources has grown into a considerable technical challenge: whereas historically data extraction has had to deal with changes in physical hardware formats, the majority of current data extraction deals with extracting data from unstructured data sources and from different software formats. This growing process of data extraction from the web is referred to as "Web data extraction" or "Web scraping".
The act of adding structure to unstructured data takes a number of forms.
|
https://en.wikipedia.org/wiki/Data_extraction
|
Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document.[1][2]
Key phrases, key terms, key segments or just keywords are the terminology used for defining the terms that represent the most relevant information contained in the document. Although the terminology is different, the function is the same: characterization of the topic discussed in a document. The task of keyword extraction is an important problem in text mining, information extraction, information retrieval and natural language processing (NLP).[3]
Keyword assignment methods can be roughly divided into:
Methods for automatic keyword extraction can be supervised, semi-supervised, or unsupervised.[4] Unsupervised methods can be further divided into simple statistics, linguistics or graph-based, or ensemble methods that combine some or most of these methods.[5]
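As a sketch of the simple-statistics family, here is a toy tf-idf keyword scorer; the `1 + df` smoothing in the idf term is one of several common conventions, and the corpus is hypothetical:

```python
import math
from collections import Counter

def tfidf_keywords(doc, corpus, top_n=2):
    """Score each term in doc by tf * idf, where idf down-weights terms
    that appear in many corpus documents, and return the top_n terms."""
    docs = [d.lower().split() for d in corpus]
    words = doc.lower().split()
    tf = Counter(words)
    def idf(w):
        df = sum(1 for d in docs if w in d)
        return math.log(len(docs) / (1 + df))  # very common words score <= 0
    scores = {w: tf[w] * idf(w) for w in set(words)}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

corpus = ["the cat sat", "the dog ran", "the mouse hid"]
keys = tfidf_keywords("the cat chased the mouse", corpus, top_n=2)
```

Terms absent from the background corpus ("chased") score highest, while function words ("the") are suppressed, which is the basic intuition behind statistical keyword extraction.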
|
https://en.wikipedia.org/wiki/Keyword_extraction
|
Ontology learning (ontology extraction, ontology augmentation generation, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.
Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical[1] or symbolic[2][3] techniques are used to extract relation signatures, often based on pattern-based[4] or definition-based[5] hypernym extraction techniques.
Ontology learning (OL) is used to (semi-)automatically extract whole ontologies from natural language text.[6][7] The process is usually split into the following eight tasks, which are not all necessarily applied in every ontology learning system.
During the domain terminology extraction step, domain-specific terms are extracted, which are used in the following step (concept discovery) to derive concepts. Relevant terms can be determined, e.g., by calculation of the TF/IDF values or by application of the C-value / NC-value method. The resulting list of terms has to be filtered by a domain expert. In the subsequent step, similarly to coreference resolution in information extraction, the OL system determines synonyms, because they share the same meaning and therefore correspond to the same concept. The most common methods therefore are clustering and the application of statistical similarity measures.
In the concept discovery step, terms are grouped into meaning-bearing units, which correspond to an abstraction of the world and therefore to concepts. The grouped terms are these domain-specific terms and their synonyms, which were identified in the domain terminology extraction step.
In the concept hierarchy derivation step, the OL system tries to arrange the extracted concepts in a taxonomic structure. This is mostly achieved with unsupervised hierarchical clustering methods. Because the result of such methods is often noisy, a supervision step, e.g., user evaluation, is added. A further method for the derivation of a concept hierarchy exists in the usage of several patterns that should indicate a sub- or supersumption relationship. Patterns like "X, that is a Y" or "X is a Y" indicate that X is a subclass of Y. Such patterns can be analyzed efficiently, but they often occur too infrequently to extract enough sub- or supersumption relationships. Instead, bootstrapping methods have been developed, which learn these patterns automatically and therefore ensure broader coverage.
In the learning of non-taxonomic relations step, relationships are extracted that do not express any sub- or supersumption. Such relationships are, e.g., works-for or located-in. There are two common approaches to solve this subtask. The first is based upon the extraction of anonymous associations, which are named appropriately in a second step. The second approach extracts verbs, which indicate a relationship between entities, represented by the surrounding words. The result of both approaches needs to be evaluated by an ontologist to ensure accuracy.
During rule discovery,[8] axioms (formal descriptions of concepts) are generated for the extracted concepts. This can be achieved, e.g., by analyzing the syntactic structure of a natural language definition and the application of transformation rules on the resulting dependency tree. The result of this process is a list of axioms, which is afterwards combined into a concept description. This output is then evaluated by an ontologist.
At this step, the ontology is augmented with instances of concepts and properties. For the augmentation with instances of concepts, methods based on the matching of lexico-syntactic patterns are used. Instances of properties are added through the application of bootstrapping methods, which collect relation tuples.
In this step, the OL system tries to extend the taxonomic structure of an existing ontology with further concepts. This can be performed in a supervised manner with a trained classifier or in an unsupervised manner via the application of similarity measures.
During frame/event detection, the OL system tries to extract complex relationships from text, e.g., who departed from where to what place and when. Approaches range from applying SVMs with kernel methods to semantic role labeling (SRL)[9] to deep semantic parsing techniques.[10]
Dog4Dag (Dresden Ontology Generator for Directed Acyclic Graphs) is an ontology generation plugin for Protégé 4.1 and OBO-Edit 2.1. It allows for term generation, sibling generation, definition generation, and relationship induction. Integrated into Protégé 4.1 and OBO-Edit 2.1, DOG4DAG allows ontology extension for all common ontology formats (e.g., OWL and OBO). It is limited largely to EBI and BioPortal lookup service extensions.[11]
|
https://en.wikipedia.org/wiki/Ontology_extraction
|
In natural language processing, open information extraction (OIE) is the task of generating a structured, machine-readable representation of the information in text, usually in the form of triples or n-ary propositions.
A proposition can be understood as a truth-bearer, a textual expression of a potential fact (e.g., "Dante wrote the Divine Comedy"), represented in an amenable structure for computers [e.g., ("Dante", "wrote", "Divine Comedy")]. An OIE extraction normally consists of a relation and a set of arguments. For instance, ("Dante", "passed away in", "Ravenna") is a proposition formed by the relation "passed away in" and the arguments "Dante" and "Ravenna". The first argument is usually referred to as the subject, while the second is considered to be the object.[1]
The extraction is said to be a textual representation of a potential fact because its elements are not linked to a knowledge base. Furthermore, the factual nature of the proposition has not yet been established. In the above example, transforming the extraction into a full-fledged fact would first require linking, if possible, the relation and the arguments to a knowledge base. Second, the truth of the extraction would need to be determined. In computer science, transforming OIE extractions into ontological facts is known as relation extraction.
In fact, OIE can be seen as the first step to a wide range of deeper text understanding tasks such as relation extraction, knowledge-base construction, question answering, and semantic role labeling. The extracted propositions can also be directly used for end-user applications such as structured search (e.g., retrieve all propositions with "Dante" as subject).
OIE was first introduced by TextRunner,[2] developed at the University of Washington Turing Center headed by Oren Etzioni. Other methods introduced later such as Reverb,[3] OLLIE,[4] ClausIE[5] or CSD[6] helped to shape the OIE task by characterizing some of its aspects. At a high level, all of these approaches make use of a set of patterns to generate the extractions. Depending on the particular approach, these patterns are either hand-crafted or learned.
Reverb[3] suggested the necessity to produce meaningful relations to more accurately capture the information in the input text. For instance, given the sentence "Faust made a pact with the devil", it would be erroneous to just produce the extraction ("Faust", "made", "a pact"), since it would not be adequately informative. A more precise extraction would be ("Faust", "made a pact with", "the devil"). Reverb also argued against the generation of overspecific relations.
OLLIE[4] stressed two important aspects for OIE. First, it pointed to the lack of factuality of the propositions. For instance, in a sentence like "If John studies hard, he will pass the exam", it would be inaccurate to consider ("John", "will pass", "the exam") as a fact. Additionally, the authors indicated that an OIE system should be able to extract non-verb-mediated relations, which account for a significant portion of the information expressed in natural language text. For instance, in the sentence "Obama, the former US president, was born in Hawaii", an OIE system should be able to recognize a proposition ("Obama", "is", "former US president").
ClausIE[5] introduced the connection between grammatical clauses, propositions, and OIE extractions. The authors stated that, as each grammatical clause expresses a proposition, each verb-mediated proposition can be identified by solely recognizing the set of clauses expressed in each sentence. This implies that to correctly recognize the set of propositions in an input sentence, it is necessary to understand its grammatical structure. The authors studied the case of the English language, which admits only seven clause types, meaning that the identification of each proposition only requires defining seven grammatical patterns.
The finding also established a separation between the recognition of the propositions and their materialization. In a first step, the propositions can be identified without any consideration of their final form, in a domain-independent and unsupervised way, mostly based on linguistic principles. In a second step, the information can be represented according to the requirements of the underlying application, without conditioning the identification phase.
Consider the sentence "Albert Einstein was born in Ulm and died in Princeton". The first step will recognize the two propositions ("Albert Einstein", "was born", "in Ulm") and ("Albert Einstein", "died", "in Princeton"). Once the information has been correctly identified, the propositions can take the particular form required by the underlying application [e.g., ("Albert Einstein", "was born in", "Ulm") and ("Albert Einstein", "died in", "Princeton")].
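The first step can be caricatured for this example by splitting coordinated verb phrases and re-attaching the shared subject. Real OIE systems identify clauses via parsing, so this is only a toy under a strong assumption about the sentence's shape:

```python
def split_coordinated(sentence, subject):
    """Toy proposition recognition: split coordinated verb phrases on
    ' and ' and re-attach the shared subject.  Assumes the sentence
    begins with the subject; real systems parse the clause structure."""
    rest = sentence[len(subject):].strip()
    return [(subject, phrase.strip()) for phrase in rest.split(" and ")]

props = split_coordinated(
    "Albert Einstein was born in Ulm and died in Princeton",
    "Albert Einstein")
```

The second, materialization step would then reshape each (subject, verb phrase) pair into whatever triple format the downstream application requires.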
CSD[6] introduced the idea of minimality in OIE. It considers that computers can make better use of the extractions if they are expressed in a compact way. This is especially important in sentences with subordinate clauses. In these cases, CSD suggests the generation of nested extractions. For example, consider the sentence "The Embassy said that 6,700 Americans were in Pakistan". CSD generates two extractions: [i] ("6,700 Americans", "were", "in Pakistan") and [ii] ("The Embassy", "said", "that [i]"). This is usually known as reification.
|
https://en.wikipedia.org/wiki/Open_information_extraction
|
Table extraction is the process of recognizing and separating a table from a large document, possibly also recognizing individual rows, columns or elements.
It may be regarded as a special form of information extraction.
Table extraction from webpages can take advantage of the special HTML elements that exist for tables, e.g., the "table" tag, and programming libraries may implement table extraction from webpages. The Python pandas software library can extract tables from HTML webpages via its read_html() function.
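The idea behind such libraries can be sketched with only the standard library: a minimal parser that collects the text of each td/th cell, row by row. (pandas' read_html() is far more robust, handling headers, nested tables, and type conversion.)

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Minimal HTML table extraction: collect cell text row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

parser = TableExtractor()
parser.feed("<table><tr><th>name</th><th>age</th></tr>"
            "<tr><td>Ada</td><td>36</td></tr></table>")
```

After feeding the markup, `parser.rows` holds the table as a list of rows, ready for downstream processing.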
More challenging is table extraction from PDFs or scanned images, where there usually is no table-specific machine-readable markup.[1] Systems that extract data from tables in scientific PDFs have been described.[2][3]
Wikipedia presents some of its information in tables, and, e.g., 3.5 million tables can be extracted from the English Wikipedia.[4] Some of the tables have a specific format, e.g., the so-called infoboxes.
Large-scale table extraction of Wikipedia infoboxes forms one of the sources for DBpedia.[5]
Commercial web services for table extraction exist, e.g., Amazon Textract, Google's Document AI, IBM Watson Discovery, and Microsoft Form Recognizer.[1] Open source tools also exist, e.g., PDFFigures 2.0, which has been used in Semantic Scholar.[6] In a comparison published in 2017, the researchers found the proprietary program ABBYY FineReader to yield the best PDF table extraction performance among six different tools evaluated.[7] In a 2023 benchmark evaluation,[8] Adobe Extract,[9] a cloud-based API that employs Adobe's Sensei AI platform,[10] performed best among five tools evaluated for table extraction.
|
https://en.wikipedia.org/wiki/Table_extraction
|
In machine learning the random subspace method,[1] also called attribute bagging[2] or feature bagging, is an ensemble learning method that attempts to reduce the correlation between estimators in an ensemble by training them on random samples of features instead of the entire feature set.
In ensemble learning one tries to combine the models produced by several learners into an ensemble that performs better than the original learners. One way of combining learners is bootstrap aggregating or bagging, which shows each learner a randomly sampled subset of the training points so that the learners will produce different models that can be sensibly averaged.[a] In bagging, one samples training points with replacement from the full training set.
The random subspace method is similar to bagging except that the features ("attributes", "predictors", "independent variables") are randomly sampled, with replacement, for each learner. Informally, this causes individual learners to not over-focus on features that appear highly predictive/descriptive in the training set, but fail to be as predictive for points outside that set. For this reason, random subspaces are an attractive choice for high-dimensional problems where the number of features is much larger than the number of training points, such as learning from fMRI data[3] or gene expression data.[4]
The random subspace method has been used for decision trees; when combined with "ordinary" bagging of decision trees, the resulting models are called random forests.[5] It has also been applied to linear classifiers,[6] support vector machines,[7] nearest neighbours[8][9] and other types of classifiers. This method is also applicable to one-class classifiers.[10][11] The random subspace method has also been applied to the portfolio selection problem,[12][13][14][15] where it has been shown to outperform the conventional resampled portfolio, which is essentially based on bagging.
To tackle high-dimensional sparse problems, a framework named Random Subspace Ensemble (RaSE)[16] was developed. RaSE combines weak learners trained in random subspaces with a two-layer structure and an iterative process.[17] RaSE has been shown to enjoy appealing theoretical properties and practical performance.[16]
An ensemble of models employing the random subspace method can be constructed using the following algorithm:
Now, to apply the ensemble model to an unseen point, combine the outputs of the L individual models by majority voting or by combining the posterior probabilities.
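The construction and the majority-vote combination can be sketched as follows; the 1-nearest-neighbour base learner and the toy data are assumptions chosen for brevity:

```python
import random
from collections import Counter

def train_subspace_ensemble(X, y, train_fn, L=5, d=2, seed=0):
    """Random subspace method: each of the L learners sees only d randomly
    chosen feature columns; train_fn is any base-learner constructor."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(L):
        feats = rng.sample(range(len(X[0])), d)      # random feature subset
        Xs = [[row[j] for j in feats] for row in X]  # project training data
        ensemble.append((feats, train_fn(Xs, y)))
    return ensemble

def predict(ensemble, x):
    """Combine the L individual outputs by majority vote."""
    votes = [model([x[j] for j in feats]) for feats, model in ensemble]
    return Counter(votes).most_common(1)[0][0]

# toy base learner: 1-nearest-neighbour on the selected features
def one_nn(Xs, y):
    def model(q):
        i = min(range(len(Xs)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(Xs[i], q)))
        return y[i]
    return model

X = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.1], [1.0, 1.0, 1.0], [0.9, 1.0, 1.1]]
y = ["a", "a", "b", "b"]
ens = train_subspace_ensemble(X, y, one_nn, L=5, d=2)
label = predict(ens, [1.0, 1.0, 1.0])
```

Because each learner is trained on a different feature subset, no single feature can dominate the ensemble's decision.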
|
https://en.wikipedia.org/wiki/Random_subspace_method
|
Resampled efficient frontier is a technique in investment portfolio construction under modern portfolio theory that uses a set of portfolios and then averages them to create an effective portfolio. This will not necessarily be the optimal portfolio, but a portfolio that is more balanced between risk and the rate of return. It is used when an investor or analyst is faced with determining which asset classes, such as domestic fixed income, domestic equity, foreign fixed income, and foreign equity, to invest in and what proportion of the total portfolio should be of each asset class.[1]
In 1959, Harry Markowitz first described a method for constructing a portfolio with optimal risk/return characteristics. His portfolio optimization method finds the minimum-risk portfolio with a given expected return.[2] Because the Markowitz or mean-variance efficient portfolio is calculated from the sample mean and covariance, which are likely different from the population mean and covariance, the resulting investment portfolio may allocate too much weight to assets with better estimated than true risk/return characteristics.
To account for the uncertainty of the sample estimates, a financial analyst can create many alternative efficient frontiers based on resampled versions of the data. Each resampled dataset will result in a different set of Markowitz efficient portfolios. These efficient frontiers of portfolios can then be averaged to create a resampled efficient frontier.[3]
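The resample-optimize-average idea can be sketched for the two-asset minimum-variance case. The bootstrap resampling and the hypothetical return series are assumptions for illustration; real implementations trace whole frontiers across many assets:

```python
import random
import statistics

def min_variance_weights(r1, r2):
    """Two-asset minimum-variance portfolio weights from sample moments."""
    v1, v2 = statistics.pvariance(r1), statistics.pvariance(r2)
    m1, m2 = statistics.fmean(r1), statistics.fmean(r2)
    cov = statistics.fmean((a - m1) * (b - m2) for a, b in zip(r1, r2))
    denom = v1 + v2 - 2 * cov
    if abs(denom) < 1e-12:   # degenerate resample: split evenly
        return 0.5, 0.5
    w1 = (v2 - cov) / denom
    return w1, 1 - w1

def resampled_weights(r1, r2, n_resamples=200, seed=0):
    """Average the optimal weights over bootstrap resamples of the returns."""
    rng = random.Random(seed)
    ws = []
    for _ in range(n_resamples):
        idx = [rng.randrange(len(r1)) for _ in range(len(r1))]
        ws.append(min_variance_weights([r1[i] for i in idx],
                                       [r2[i] for i in idx]))
    return tuple(statistics.fmean(w) for w in zip(*ws))

r1 = [0.01, 0.02, -0.01, 0.03, 0.00, 0.015]   # hypothetical asset returns
r2 = [0.05, -0.04, 0.06, -0.02, 0.03, 0.01]
w = resampled_weights(r1, r2)
```

Each resample yields its own "optimal" weights; averaging them smooths out the estimation error that a single-sample Markowitz solution would lock in.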
The appropriate compromise between the investor's risk aversion and desired return will then guide the financial analyst to choose a portfolio from the set of resampled efficient frontier portfolios. Since such a portfolio is different from the Markowitz efficient portfolio, it will have suboptimal risk/return characteristics with respect to the sample mean and covariance, but optimal characteristics when averaged over the many possible values of the unknown true mean and covariance.[4] Resampled efficiency is covered by U.S. patent #6,003,018, patent pending worldwide. New Frontier Advisors, LLC, has exclusive worldwide licensing rights.
|
https://en.wikipedia.org/wiki/Resampled_efficient_frontier
|
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness.[1]
The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example:
In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form).[2] Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms.
Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting on independent computations. Therefore, in computer algebra,normal formis a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form.
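The computer-algebra notion above can be illustrated with rational numbers: reducing a fraction to lowest terms with a positive denominator is a canonicalization, and equality of fractions becomes equality of canonical forms. A minimal sketch (the function name is invented for illustration):

```python
from math import gcd

def canonicalize(num, den):
    """Return the canonical form of the fraction num/den:
    lowest terms, with a positive denominator."""
    if den == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    if den < 0:                      # keep the sign in the numerator
        num, den = -num, -den
    g = gcd(abs(num), den) or 1
    return (num // g, den // g)

# Two fractions are equal iff their canonical forms are identical:
assert canonicalize(2, 4) == canonicalize(-3, -6) == (1, 2)
```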
Canonical form can also mean a differential form that is defined in a natural (canonical) way.
Given a set S of objects with an equivalence relation R on S, a canonical form is given by designating some objects of S to be "in canonical form", such that every object under consideration is equivalent to exactly one object in canonical form. In other words, the canonical forms in S represent the equivalence classes, once and only once. To test whether two objects are equivalent, it then suffices to test equality on their canonical forms.
A canonical form thus provides a classification theorem and more, in that it not only classifies every class, but also gives a distinguished (canonical) representative for each object in the class.
Formally, a canonicalization with respect to an equivalence relation R on a set S is a mapping c: S → S such that for all s, s1, s2 ∈ S:
1. c(s) = c(c(s)) (idempotence),
2. s1 R s2 if and only if c(s1) = c(s2) (decisiveness),
3. s R c(s) (representativeness).
Property 3 is redundant; it follows by applying 2 to 1.
In practical terms, it is often advantageous to be able to recognize the canonical forms. There is also a practical, algorithmic question to consider: how to pass from a given object s in S to its canonical form s*? Canonical forms are generally used to make operating with equivalence classes more effective. For example, in modular arithmetic, the canonical form for a residue class is usually taken as the least non-negative integer in it. Operations on classes are carried out by combining these representatives, and then reducing the result to its least non-negative residue.
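The modular-arithmetic convention just described can be sketched in a few lines (the modulus and helper names are illustrative):

```python
M = 7  # modulus; each residue class is represented by its least non-negative member

def canon(x):
    return x % M   # Python's % already yields the least non-negative residue

# Operate on classes by combining representatives, then reducing:
def add(x, y):
    return canon(canon(x) + canon(y))

def mul(x, y):
    return canon(canon(x) * canon(y))

assert canon(-3) == canon(4) == 4      # -3 and 4 lie in the same class mod 7
assert add(5, 6) == 4                  # 11 ≡ 4 (mod 7)
assert mul(3, 5) == 1                  # 15 ≡ 1 (mod 7)
```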
The uniqueness requirement is sometimes relaxed, allowing the forms to be unique up to some finer equivalence relation, such as allowing for reordering of terms (if there is no natural ordering on terms).
A canonical form may simply be a convention, or a deep theorem. For example, polynomials are conventionally written with the terms in descending powers: it is more usual to write x² + x + 30 than x + 30 + x², although the two forms define the same polynomial. By contrast, the existence of Jordan canonical form for a matrix is a deep theorem.
According to the OED and LSJ, the term canonical stems from the Ancient Greek word kanonikós (κανονικός, "regular, according to rule") from kanṓn (κᾰνών, "rod, rule"). The sense of norm, standard, or archetype has been used in many disciplines. Mathematical usage is attested in a 1738 letter from Logan.[3] The German term kanonische Form is attested in an 1846 paper by Eisenstein;[4] later the same year Richelot uses the term Normalform in a paper,[5] and in 1851 Sylvester writes:[6]
"I now proceed to [...] the mode of reducing Algebraical Functions to their simplest and most symmetrical, or as my admirable friend M. Hermite well proposes to call them, their Canonical forms."
In the same period, usage is attested by Hesse ("Normalform"),[7] Hermite ("forme canonique"),[8] Borchardt ("forme canonique"),[9] and Cayley ("canonical form").[10]
In 1865, the Dictionary of Science, Literature and Art defines canonical form as:
"In Mathematics, denotes a form, usually the simplest or most symmetrical, to which, without loss of generality, all functions of the same class can be reduced."
Note: in this section, "up to" some equivalence relation E means that the canonical form is not unique in general, but that if one object has two different canonical forms, they are E-equivalent.
Standard form is used by many mathematicians and scientists to write extremely large numbers in a more concise and understandable way, the most prominent example being scientific notation.[11]
In analytic geometry:
By contrast, there are alternative forms for writing equations. For example, the equation of a line may be written as a linear equation in point-slope form or in slope-intercept form.
Convex polyhedra can be put into canonical form such that:
Every differentiable manifold has a cotangent bundle. That bundle can always be endowed with a certain differential form, called the canonical one-form. This form gives the cotangent bundle the structure of a symplectic manifold, and allows vector fields on the manifold to be integrated by means of the Euler–Lagrange equations, or by means of Hamiltonian mechanics. Such systems of integrable differential equations are called integrable systems.
The study of dynamical systems overlaps with that of integrable systems; there one has the idea of a normal form for dynamical systems.
In the study of manifolds in three dimensions, one has the first fundamental form, the second fundamental form and the third fundamental form.
The symbolic manipulation of a formula from one form to another is called a "rewriting" of that formula. One can study the abstract properties of rewriting generic formulas, by studying the collection of rules by which formulas can be validly manipulated. These are the "rewriting rules", an integral part of an abstract rewriting system. A common question is whether it is possible to bring some generic expression to a single, common form, the normal form. If different sequences of rewrites still result in the same form, then that form can be termed a normal form, and the rewriting system is called confluent. It is not always possible to obtain a normal form.
In graph theory, a branch of mathematics, graph canonization is the problem of finding a canonical form of a given graph G. A canonical form is a labeled graph Canon(G) that is isomorphic to G, such that every graph that is isomorphic to G has the same canonical form as G. Thus, from a solution to the graph canonization problem, one could also solve the problem of graph isomorphism: to test whether two graphs G and H are isomorphic, compute their canonical forms Canon(G) and Canon(H), and test whether these two canonical forms are identical.
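A brute-force sketch of this reduction (exponential in the number of vertices, so only suitable for tiny graphs; the choice of "lexicographically smallest relabeling" as the canonical form is an illustrative convention):

```python
from itertools import permutations

def canonical_form(adj):
    """Brute-force canonical form of a graph given as an adjacency matrix:
    the lexicographically smallest relabeling over all vertex permutations."""
    n = len(adj)
    return min(
        tuple(adj[p[i]][p[j]] for i in range(n) for j in range(n))
        for p in permutations(range(n))
    )

def isomorphic(g, h):
    return len(g) == len(h) and canonical_form(g) == canonical_form(h)

# A path 0-1-2 relabeled two ways is isomorphic, so the canonical forms agree:
path_a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
path_b = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # same path, middle vertex labeled 0
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
assert isomorphic(path_a, path_b)
assert not isomorphic(path_a, triangle)
```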
In computing, the reduction of data to any kind of canonical form is commonly called data normalization.
For instance, database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency.[13]
In the field of software security, a common vulnerability is unchecked malicious input (see code injection). The mitigation for this problem is proper input validation. Before input validation is performed, the input is usually normalized by eliminating encoding (e.g., HTML encoding) and reducing the input data to a single common character set.
Other forms of data, typically associated with signal processing (including audio and imaging) or machine learning, can be normalized in order to provide a limited range of values.
In content management, the concept of a single source of truth (SSOT) is applicable, just as it is in database normalization generally and in software development. Competent content management systems provide logical ways of obtaining it, such as transclusion.
https://en.wikipedia.org/wiki/Canonical_form
Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train,[1][2] which is typically generated by the switching of a transistor.[3]
Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.
DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification[4] and can be implemented in the time, frequency, and spatio-temporal domains.
The application of digital computation to signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.[5] Digital signal processing is also fundamental to digital technology, such as digital telecommunication and wireless communications.[6] DSP is applicable to both streaming data and static (stored) data.
To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC).[7] Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set. Rounding real numbers to integers is an example.
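The two stages can be sketched as follows (an illustrative helper, assuming amplitudes lie in [−1, 1] and uniform quantization steps):

```python
import math

def sample_and_quantize(f, duration, fs, levels):
    """Discretize a continuous signal f(t) at sampling rate fs, then quantize
    each amplitude to one of `levels` uniformly spaced values in [-1, 1]."""
    n = int(duration * fs)
    samples = [f(k / fs) for k in range(n)]           # discretization
    step = 2.0 / (levels - 1)
    return [round(s / step) * step for s in samples]  # quantization

# A 1 Hz sine sampled at 8 Hz with 5 amplitude levels {-1, -0.5, 0, 0.5, 1}:
signal = lambda t: math.sin(2 * math.pi * t)
q = sample_and_quantize(signal, 1.0, 8, 5)
assert len(q) == 8
assert all(v in (-1.0, -0.5, 0.0, 0.5, 1.0) for v in q)
```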
The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this.[8] It is common to use an anti-aliasing filter to limit the signal bandwidth to comply with the sampling theorem; however, careful selection of this filter is required because the reconstructed signal will be the filtered signal plus residual aliasing from imperfect stop-band rejection, rather than the original (unfiltered) signal.
Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies (quantization error), created by the abstract process of sampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a frequency spectrum or a set of statistics. But often it is another quantized signal that is converted back to analog form by a digital-to-analog converter (DAC).
DSP engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation.
Time domain refers to the analysis of signals with respect to time. Similarly, space domain refers to the analysis of signals with respect to position, e.g., pixel location for the case of image processing.
The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. The surrounding samples may be identified with respect to time or space. The output of a linear digital filter to any given input may be calculated by convolving the input signal with an impulse response.
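Time-domain FIR filtering by convolution can be sketched as follows (the 3-tap moving-average impulse response is an illustrative choice):

```python
def convolve(x, h):
    """Output of an FIR filter with impulse response h for input x
    (full linear convolution)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# A 3-tap moving average smooths a single-sample spike into a plateau:
h = [1 / 3, 1 / 3, 1 / 3]
x = [0.0, 0.0, 3.0, 0.0, 0.0]
y = convolve(x, h)
assert len(y) == len(x) + len(h) - 1
assert [round(v, 6) for v in y[2:5]] == [1.0, 1.0, 1.0]
```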
Signals are converted from the time or space domain to the frequency domain, usually through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, the Fourier transform is often converted to the power spectrum, which is the magnitude of each frequency component squared.
The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum analysis or spectral analysis.
Filtering, particularly in non-real-time work, can also be achieved in the frequency domain, applying the filter and then converting back to the time domain. This can be an efficient implementation and can give essentially any filter response, including excellent approximations to brick-wall filters.
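The conversion to the frequency domain can be illustrated with a naive O(N²) DFT (a sketch; real implementations use the FFT):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform; fine for illustration only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A tone completing 3 cycles in 16 samples concentrates its magnitude
# in bin 3 (and, for a real signal, in the mirrored bin N - 3 = 13):
N = 16
x = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
mags = [abs(X) for X in dft(x)]
assert max(range(N), key=mags.__getitem__) in (3, 13)
assert mags[0] < 1e-6   # no DC component in a pure sine
```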
There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through the Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the harmonic structure of the original spectrum.
Digital filters come in both infinite impulse response (IIR) and finite impulse response (FIR) types. Whereas FIR filters are always stable, IIR filters have feedback loops that may become unstable and oscillate. The Z-transform provides a tool for analyzing stability issues of digital IIR filters. It is analogous to the Laplace transform, which is used to design and analyze analog IIR filters.
In autoregressive analysis, a signal is represented as a linear combination of its previous samples. The coefficients of the combination are called autoregression coefficients. This method has higher frequency resolution and can process shorter signals compared to the Fourier transform.[9] Prony's method can be used to estimate frequencies, amplitudes, initial phases and decays of the components of a signal.[10][9] Components are assumed to be complex decaying exponentials.[10][9]
A time-frequency representation of a signal can capture both the temporal evolution and the frequency structure of the analyzed signal. Temporal and frequency resolution are limited by the uncertainty principle, and the tradeoff is adjusted by the width of the analysis window. Linear techniques such as the short-time Fourier transform, wavelet transform, filter bank,[11] non-linear techniques (e.g., the Wigner–Ville transform[10]) and autoregressive methods (e.g. the segmented Prony method)[10][12][13] are used for representation of a signal on the time-frequency plane. Non-linear and segmented Prony methods can provide higher resolution, but may produce undesirable artifacts. Time-frequency analysis is usually used for analysis of non-stationary signals. For example, methods of fundamental frequency estimation, such as RAPT and PEFAC,[14] are based on windowed spectral analysis.
In numerical analysis and functional analysis, a discrete wavelet transform is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information. The accuracy of the joint time-frequency resolution is limited by the uncertainty principle of time-frequency.
Empirical mode decomposition is based on decomposing a signal into intrinsic mode functions (IMFs). IMFs are quasi-harmonic oscillations that are extracted from the signal.[15]
DSP algorithms may be run on general-purpose computers[16] and digital signal processors.[17] DSP algorithms are also implemented on purpose-built hardware such as application-specific integrated circuits (ASICs).[18] Additional technologies for digital signal processing include more powerful general-purpose microprocessors, graphics processing units, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors.[19]
For systems that do not have a real-time computing requirement and whose signal data (either input or output) exists in data files, processing may be done economically with a general-purpose computer. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the DCT and FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example of such an application is processing digital photographs with software such as Photoshop.
When the application requirement is real-time, DSP is often implemented using specialized or dedicated processors or microprocessors, sometimes using multiple processors or multiple processing cores. These may process data using fixed-point arithmetic or floating point. For more demanding applications FPGAs may be used.[20] For the most demanding applications or high-volume products, ASICs might be designed specifically for the application.
Parallel implementations of DSP algorithms, utilizing multi-core CPU and many-core GPU architectures, are developed to improve the performance, in terms of latency, of these algorithms.[21]
Native processing is done by the computer's CPU rather than by DSP or outboard processing, which is done by additional third-party DSP chips located on expansion cards or external hardware boxes or racks. Many digital audio workstations such as Logic Pro, Cubase, Digital Performer and Pro Tools LE use native processing. Others, such as Pro Tools HD, Universal Audio's UAD-1 and TC Electronic's Powercore, use DSP processing.
General application areas for DSP include
Specific examples include speech coding and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, audio crossovers and equalization, digital synthesizers, audio effects units and mobile audio surveillance platforms such as Hypatia, a real-time encrypted emergency response application.[22][23] DSP has been used in hearing aid technology since 1996, which allows for automatic directional microphones, complex digital noise reduction, and improved adjustment of the frequency response.[24]
https://en.wikipedia.org/wiki/Digital_signal_processing
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form Av=λv{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} } for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This yields an equation for the eigenvalues p(λ)=det(A−λI)=0.{\displaystyle p\left(\lambda \right)=\det \left(\mathbf {A} -\lambda \mathbf {I} \right)=0.} We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.[1][2][3]
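For a 2 × 2 matrix, the characteristic equation is the quadratic λ² − (a + d)λ + (ad − bc) = 0, which can be solved directly. A sketch assuming real roots (the function name is invented):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Roots of the characteristic polynomial of [[a, b], [c, d]]:
    p(λ) = λ² − (a + d)λ + (ad − bc).  Assumes real roots."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

assert eigenvalues_2x2(1, 0, 1, 3) == [1.0, 3.0]   # the example matrix used later
assert eigenvalues_2x2(1, 2, 2, 1) == [-1.0, 3.0]  # the normal matrix used later
```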
If the field of scalars is algebraically closed, then we can factor p as p(λ)=(λ−λ1)n1(λ−λ2)n2⋯(λ−λNλ)nNλ=0.{\displaystyle p(\lambda )=\left(\lambda -\lambda _{1}\right)^{n_{1}}\left(\lambda -\lambda _{2}\right)^{n_{2}}\cdots \left(\lambda -\lambda _{N_{\lambda }}\right)^{n_{N_{\lambda }}}=0.} The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N: ∑i=1Nλni=N.{\textstyle \sum _{i=1}^{N_{\lambda }}{n_{i}}=N.}
For each eigenvalue λi, we have a specific eigenvalue equation (A−λiI)v=0.{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} =0.} There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The linear combinations of the mi solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. It is important to keep in mind that the algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course when mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities ∑i=1Nλmi=Nv.{\displaystyle \sum _{i=1}^{N_{\lambda }}{m_{i}}=N_{\mathbf {v} }.}
The eigenvectors can be indexed by eigenvalues, using a double index, with vij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ..., Nv.
Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factored as A=QΛQ−1{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}} where Q is the square n × n matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λii = λi. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix {\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]} (which is a shear matrix) cannot be diagonalized.
The n eigenvectors qi are usually normalized, but they don't have to be. A non-normalized set of n eigenvectors, vi, can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1. If one of the eigenvalues λi has multiple linearly independent eigenvectors (that is, the geometric multiplicity of λi is greater than 1), then these eigenvectors for this eigenvalue λi can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it's always possible to diagonalize A in an orthonormal basis {qi}.
The decomposition can be derived from the fundamental property of eigenvectors: {\displaystyle {\begin{aligned}\mathbf {A} \mathbf {v} &=\lambda \mathbf {v} \\\mathbf {A} \mathbf {Q} &=\mathbf {Q} \mathbf {\Lambda } \\\mathbf {A} &=\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}.\end{aligned}}} The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cn, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
The 2 × 2 real matrix A=[1013]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0\\1&3\\\end{bmatrix}}} may be decomposed into a diagonal matrix through multiplication by a non-singular matrix Q=[abcd]∈R2×2.{\displaystyle \mathbf {Q} ={\begin{bmatrix}a&b\\c&d\end{bmatrix}}\in \mathbb {R} ^{2\times 2}.}
Then {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}x&0\\0&y\end{bmatrix}},} for some real diagonal matrix {\displaystyle \left[{\begin{smallmatrix}x&0\\0&y\end{smallmatrix}}\right]}.
Multiplying both sides of the equation on the left by Q: {\displaystyle {\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x&0\\0&y\end{bmatrix}}.} The above equation can be decomposed into two simultaneous equations: {\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}={\begin{bmatrix}ax\\cx\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}={\begin{bmatrix}by\\dy\end{bmatrix}}\end{cases}}.} Factoring out the eigenvalues x and y: {\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=x{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=y{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}} Letting {\displaystyle \mathbf {a} ={\begin{bmatrix}a\\c\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b\\d\end{bmatrix}},} this gives us two vector equations: {\displaystyle {\begin{cases}\mathbf {A} \mathbf {a} =x\mathbf {a} \\\mathbf {A} \mathbf {b} =y\mathbf {b} \end{cases}}} This can be represented by a single vector equation involving two solutions as eigenvalues: Au=λu{\displaystyle \mathbf {A} \mathbf {u} =\lambda \mathbf {u} } where λ represents the two eigenvalues x and y, and u represents the vectors a and b.
Shifting λu to the left hand side and factoring u out: (A−λI)u=0{\displaystyle \left(\mathbf {A} -\lambda \mathbf {I} \right)\mathbf {u} =\mathbf {0} } Since Q is non-singular, it is essential that u is nonzero. Therefore, det(A−λI)=0{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0} Thus (1−λ)(3−λ)=0{\displaystyle (1-\lambda )(3-\lambda )=0} giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus {\displaystyle \left[{\begin{smallmatrix}1&0\\0&3\end{smallmatrix}}\right]}.
Putting the solutions back into the above simultaneous equations: {\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=1{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=3{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Solving the equations, we have a = −2c and b = 0, with c, d ∈ R.{\displaystyle a=-2c\quad {\text{and}}\quad b=0,\qquad c,d\in \mathbb {R} .} Thus the matrix Q required for the eigendecomposition of A is {\displaystyle \mathbf {Q} ={\begin{bmatrix}-2c&0\\c&d\end{bmatrix}},\qquad c,d\in \mathbb {R} ,} that is: {\displaystyle {\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\0&3\end{bmatrix}},\qquad c,d\in \mathbb {R} }
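The result can be checked numerically, taking c = d = 1 (a sketch with hand-rolled 2 × 2 helpers):

```python
def matmul(A, B):
    """Product of two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv_2x2(M):
    """Inverse of a 2 × 2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A   = [[1, 0], [1, 3]]
Q   = [[-2, 0], [1, 1]]    # eigenvector columns, taking c = d = 1
Lam = [[1, 0], [0, 3]]     # eigenvalues on the diagonal

reconstructed = matmul(matmul(Q, Lam), inv_2x2(Q))
assert reconstructed == A   # Q Λ Q⁻¹ recovers A exactly here
```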
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by A−1=QΛ−1Q−1{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}} If A{\displaystyle \mathbf {A} } is a symmetric matrix, since Q{\displaystyle \mathbf {Q} } is formed from the eigenvectors of A{\displaystyle \mathbf {A} }, Q{\displaystyle \mathbf {Q} } is guaranteed to be an orthogonal matrix, therefore Q−1=QT{\displaystyle \mathbf {Q} ^{-1}=\mathbf {Q} ^{\mathrm {T} }}. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate: [Λ−1]ii=1/λi{\displaystyle \left[\mathbf {\Lambda } ^{-1}\right]_{ii}={\frac {1}{\lambda _{i}}}}
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4]
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:[5] min|∇2λs|{\displaystyle \min \left|\nabla ^{2}\lambda _{\mathrm {s} }\right|} where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by f(x)=a0+a1x+a2x2+⋯{\displaystyle f(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots } then we know that f(A)=Qf(Λ)Q−1{\displaystyle f\!\left(\mathbf {A} \right)=\mathbf {Q} \,f\!\left(\mathbf {\Lambda } \right)\mathbf {Q} ^{-1}} Because Λ is a diagonal matrix, functions of Λ are very easy to calculate: [f(Λ)]ii=f(λi){\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using A−1=QΛ−1Q−1{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}} from above. Once again, we find that [f(Λ)]ii=f(λi){\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
{\displaystyle {\begin{aligned}\mathbf {A} ^{2}&=\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)=\mathbf {Q} \mathbf {\Lambda } \left(\mathbf {Q} ^{-1}\mathbf {Q} \right)\mathbf {\Lambda } \mathbf {Q} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{2}\mathbf {Q} ^{-1}\\[1.2ex]\mathbf {A} ^{n}&=\mathbf {Q} \mathbf {\Lambda } ^{n}\mathbf {Q} ^{-1}\\[1.2ex]\exp \mathbf {A} &=\mathbf {Q} \exp(\mathbf {\Lambda } )\mathbf {Q} ^{-1}\end{aligned}}} which are examples for the functions f(x)=x2,f(x)=xn,f(x)=expx{\displaystyle f(x)=x^{2},\;f(x)=x^{n},\;f(x)=\exp {x}}. Furthermore, expA{\displaystyle \exp {\mathbf {A} }} is the matrix exponential.
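The same data lets one evaluate matrix functions by applying f to each eigenvalue, reusing the example matrix above with c = d = 1 (an illustrative sketch; the helper names are invented):

```python
def matmul(A, B):
    """Product of two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply_via_eigen(f, Q, eigvals, Qinv):
    """Compute f(A) = Q f(Λ) Q⁻¹ by applying f to each eigenvalue (2 × 2 case)."""
    fLam = [[f(eigvals[0]), 0], [0, f(eigvals[1])]]
    return matmul(matmul(Q, fLam), Qinv)

# A = [[1, 0], [1, 3]] has eigenvalues 1 and 3, with Q = [[-2, 0], [1, 1]]:
A = [[1, 0], [1, 3]]
Q, Qinv = [[-2, 0], [1, 1]], [[-0.5, 0.0], [0.5, 1.0]]

cube = apply_via_eigen(lambda x: x ** 3, Q, [1, 3], Qinv)
assert cube == [[1, 0], [13, 27]]          # matches A·A·A

inv = apply_via_eigen(lambda x: 1 / x, Q, [1, 3], Qinv)
ident = matmul(A, inv)                     # A · A⁻¹ should be the identity
assert [[round(v, 9) for v in row] for row in ident] == [[1.0, 0.0], [0.0, 1.0]]
```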
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.[6]
A complex-valued square matrix A{\displaystyle A} is normal (meaning A∗A=AA∗{\displaystyle \mathbf {A} ^{*}\mathbf {A} =\mathbf {A} \mathbf {A} ^{*}}, where A∗{\displaystyle \mathbf {A} ^{*}} is the conjugate transpose) if and only if it can be decomposed as A=UΛU∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}, where U{\displaystyle \mathbf {U} } is a unitary matrix (meaning U∗=U−1{\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}}) and Λ = diag(λ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}) is a diagonal matrix.[7] The columns u1,⋯,un{\displaystyle \mathbf {u} _{1},\cdots ,\mathbf {u} _{n}} of U{\displaystyle \mathbf {U} } form an orthonormal basis and are eigenvectors of A{\displaystyle \mathbf {A} } with corresponding eigenvalues λ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}.[8]
For example, consider the 2 x 2 normal matrixA=[1221]{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}}.
The eigenvalues areλ1=3{\displaystyle \lambda _{1}=3}andλ2=−1{\displaystyle \lambda _{2}=-1}.
The (normalized) eigenvectors corresponding to these eigenvalues are u1=12[11]{\displaystyle \mathbf {u} _{1}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}} and u2=12[1−1]{\displaystyle \mathbf {u} _{2}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-1\end{bmatrix}}} (eigenvectors are determined only up to sign; this choice matches the columns of the unitary matrix used below).
The diagonalization isA=UΛU∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}, whereU=[1/21/21/2−1/2]{\displaystyle \mathbf {U} ={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}},Λ={\displaystyle \mathbf {\Lambda } =}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}andU∗=U−1={\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}.
The verification isUΛU∗={\displaystyle \mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}=}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}[300−1]{\displaystyle {\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}[1/21/21/2−1/2]{\displaystyle {\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}=[1221]=A{\displaystyle ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}=\mathbf {A} }.
This example illustrates the process of diagonalizing a normal matrixA{\displaystyle \mathbf {A} }by finding its eigenvalues and eigenvectors, forming the unitary matrixU{\displaystyle \mathbf {U} }, the diagonal matrixΛ{\displaystyle \mathbf {\Lambda } }, and verifying the decomposition.
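The hand computation above can be checked numerically. A minimal NumPy sketch, using numpy.linalg.eigh, which is specialized for symmetric and Hermitian (hence normal) matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# eigh returns real eigenvalues in ascending order and orthonormal
# eigenvectors as the columns of U.
lam, U = np.linalg.eigh(A)

# Reconstruction A = U diag(lam) U*; for a real matrix U* is just U.T.
reconstructed = U @ np.diag(lam) @ U.T
```

Note that eigh orders eigenvalues ascending and fixes eigenvector signs by its own convention, so its columns may differ from hand-chosen eigenvectors by ordering or sign without affecting the decomposition.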
As a special case, for everyn×nrealsymmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real andorthonormal. Thus a real symmetric matrixAcan be decomposed asA=QΛQT{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{\mathsf {T}}}, whereQis anorthogonal matrixwhose columns are the real, orthonormal eigenvectors ofA, andΛis a diagonal matrix whose entries are the eigenvalues ofA.[9]
Diagonalizable matricescan be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed asA=PDP−1{\displaystyle \mathbf {A} =\mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}, whereP{\displaystyle \mathbf {P} }is a matrix whose columns are eigenvectors ofA{\displaystyle \mathbf {A} }andD{\displaystyle \mathbf {D} }is a diagonal matrix consisting of the corresponding eigenvalues ofA{\displaystyle \mathbf {A} }.[8]
Positive-definite matrices are symmetric (or Hermitian) matrices for which all eigenvalues are positive. They can be decomposed as A=LLT{\displaystyle \mathbf {A} =\mathbf {L} \mathbf {L} ^{\mathsf {T}}} using the Cholesky decomposition, where L{\displaystyle \mathbf {L} } is a lower triangular matrix with positive diagonal entries.[10]
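A minimal NumPy illustration of the Cholesky factorization, with a small symmetric positive-definite matrix chosen only as an example:

```python
import numpy as np

# A symmetric positive-definite matrix (both eigenvalues are positive).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Lower-triangular Cholesky factor: A = L @ L.T
L = np.linalg.cholesky(A)
```

numpy.linalg.cholesky raises LinAlgError if the matrix is not positive definite, which makes it a common practical test for positive definiteness.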
Unitary matrices satisfy UU∗=I{\displaystyle \mathbf {U} \mathbf {U} ^{*}=\mathbf {I} }, where U∗{\displaystyle \mathbf {U} ^{*}} denotes the conjugate transpose; their real counterparts are the orthogonal matrices, which satisfy UUT=I{\displaystyle \mathbf {U} \mathbf {U} ^{\mathsf {T}}=\mathbf {I} }. Being normal, they are diagonalized by unitary transformations.[8]
Hermitian matrices satisfy H=H†{\displaystyle \mathbf {H} =\mathbf {H} ^{\dagger }}, where H†{\displaystyle \mathbf {H} ^{\dagger }} denotes the conjugate transpose. They can be diagonalized by unitary matrices, and by orthogonal matrices in the real symmetric case.[8]
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using thecharacteristic polynomial. However, this is often impossible for larger matrices, in which case we must use anumerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: theAbel–Ruffini theoremimplies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply usingnth roots. Therefore, general algorithms to find eigenvectors and eigenvalues areiterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such asNewton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that smallround-off errorsin the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremelyill-conditionedfunction of the coefficients.[11]
A simple and accurate iterative method is thepower method: arandomvectorvis chosen and a sequence ofunit vectorsis computed asAv‖Av‖,A2v‖A2v‖,A3v‖A3v‖,…{\displaystyle {\frac {\mathbf {A} \mathbf {v} }{\left\|\mathbf {A} \mathbf {v} \right\|}},{\frac {\mathbf {A} ^{2}\mathbf {v} }{\left\|\mathbf {A} ^{2}\mathbf {v} \right\|}},{\frac {\mathbf {A} ^{3}\mathbf {v} }{\left\|\mathbf {A} ^{3}\mathbf {v} \right\|}},\ldots }
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component in the direction of this eigenvector (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine.[12] The power method is also the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector; this idea is the basis of Arnoldi iteration.[11] The important QR algorithm is likewise based on a subtle transformation of the power method.[11]
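The power method is short enough to sketch directly. The NumPy implementation below (matrix and iteration count are illustrative; real code would add a convergence test) also recovers the eigenvalue from the Rayleigh quotient vᵀAv:

```python
import numpy as np

def power_iteration(A, num_iters=500, seed=0):
    """Approximate the dominant eigenpair of A by the power method."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)           # random starting unit vector
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)    # renormalize at every step
    # Recover the eigenvalue from the Rayleigh quotient v^T A v.
    return v @ A @ v, v

# Illustrative symmetric matrix; its dominant eigenvalue is (5 + sqrt(5)) / 2.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
```

Convergence is geometric with ratio |λ₂/λ₁|, so a large gap between the two largest eigenvalue magnitudes means fast convergence.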
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation(A−λiI)vi,j=0{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} _{i,j}=\mathbf {0} }usingGaussian eliminationorany other methodfor solvingmatrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. Inpower iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by theRayleigh quotientof the eigenvector).[11]In the QR algorithm for aHermitian matrix(or any normal matrix), the orthonormal eigenvectors are obtained as a product of theQmatrices from the steps in the algorithm.[11](For more general matrices, the QR algorithm yields theSchur decompositionfirst, from which the eigenvectors can be obtained by abacksubstitutionprocedure.[13]) For Hermitian matrices, theDivide-and-conquer eigenvalue algorithmis more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[11]
Recall that thegeometricmultiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, thenullspaceofλI−A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associatedgeneralized eigenspace(1st sense), which is the nullspace of the matrix(λI−A)kforany sufficiently largek. That is, it is the space ofgeneralized eigenvectors(first sense), where a generalized eigenvector is any vector whicheventuallybecomes 0 ifλI−Ais applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with thegeneralized eigenvalue problemdescribed below.
Aconjugate eigenvectororconeigenvectoris a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called theconjugate eigenvalueorconeigenvalueof the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation isAv=λv∗.{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} ^{*}.}For example, in coherent electromagnetic scattering theory, the linear transformationArepresents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. Inoptics, the coordinate system is defined from the wave's viewpoint, known as theForward Scattering Alignment(FSA), and gives rise to a regular eigenvalue equation, whereas inradar, the coordinate system is defined from the radar's viewpoint, known as theBack Scattering Alignment(BSA), and gives rise to a coneigenvalue equation.
Ageneralized eigenvalue problem(second sense) is the problem of finding a (nonzero) vectorvthat obeysAv=λBv{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {B} \mathbf {v} }whereAandBare matrices. Ifvobeys this equation, with someλ, then we callvthegeneralized eigenvectorofAandB(in the second sense), andλis called thegeneralized eigenvalueofAandB(in the second sense) which corresponds to the generalized eigenvectorv. The possible values ofλmust obey the following equationdet(A−λB)=0.{\displaystyle \det(\mathbf {A} -\lambda \mathbf {B} )=0.}
Ifnlinearly independent vectors{v1, …,vn}can be found, such that for everyi∈ {1, …,n},Avi=λiBvi, then we define the matricesPandDsuch thatP=[||v1⋯vn||]≡[(v1)1⋯(vn)1⋮⋮(v1)n⋯(vn)n]{\displaystyle P={\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}\equiv {\begin{bmatrix}(\mathbf {v} _{1})_{1}&\cdots &(\mathbf {v} _{n})_{1}\\\vdots &&\vdots \\(\mathbf {v} _{1})_{n}&\cdots &(\mathbf {v} _{n})_{n}\end{bmatrix}}}(D)ij={λi,ifi=j0,otherwise{\displaystyle (D)_{ij}={\begin{cases}\lambda _{i},&{\text{if }}i=j\\0,&{\text{otherwise}}\end{cases}}}Then the following equality holdsA=BPDP−1{\displaystyle \mathbf {A} =\mathbf {B} \mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}And the proof isAP=A[||v1⋯vn||]=[||Av1⋯Avn||]=[||λ1Bv1⋯λnBvn||]=[||Bv1⋯Bvn||]D=BPD{\displaystyle \mathbf {A} \mathbf {P} =\mathbf {A} {\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\A\mathbf {v} _{1}&\cdots &A\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\\lambda _{1}B\mathbf {v} _{1}&\cdots &\lambda _{n}B\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\B\mathbf {v} _{1}&\cdots &B\mathbf {v} _{n}\\|&&|\end{bmatrix}}\mathbf {D} =\mathbf {B} \mathbf {P} \mathbf {D} }
And sincePis invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the formA−λB, whereλis a complex number, is called apencil; the termmatrix pencilcan also refer to the pair(A,B)of matrices.[14]
IfBis invertible, then the original problem can be written in the formB−1Av=λv{\displaystyle \mathbf {B} ^{-1}\mathbf {A} \mathbf {v} =\lambda \mathbf {v} }which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important ifAandBareHermitian matrices, since in this caseB−1Ais not generally Hermitian and important properties of the solution are no longer apparent.
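When B is invertible and well conditioned, this reduction is a one-liner. A NumPy sketch with small illustrative matrices:

```python
import numpy as np

# Illustrative matrices; B is invertible.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# With B invertible, A v = lambda B v becomes (B^{-1} A) v = lambda v,
# an ordinary eigenvalue problem. Note that B^{-1} A is not symmetric
# here even though A and B are, which is one reason this reduction is
# often avoided in practice.
lams, V = np.linalg.eig(np.linalg.inv(B) @ A)
```

Each computed pair (λᵢ, vᵢ) satisfies the original generalized equation A v = λ B v.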
IfAandBare both symmetric or Hermitian, andBis also apositive-definite matrix, the eigenvaluesλiare real and eigenvectorsv1andv2with distinct eigenvalues areB-orthogonal (v1*Bv2= 0).[15]In this case, eigenvectors can be chosen so that the matrixPdefined above satisfiesP∗BP=I{\displaystyle \mathbf {P} ^{*}\mathbf {B} \mathbf {P} =\mathbf {I} }orPP∗B=I,{\displaystyle \mathbf {P} \mathbf {P} ^{*}\mathbf {B} =\mathbf {I} ,}and there exists abasisof generalized eigenvectors (it is not adefectiveproblem).[14]This case is sometimes called aHermitian definite pencilordefinite pencil.[14]
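For a definite pencil, the standard trick is to reduce to an ordinary symmetric eigenproblem through the Cholesky factor of B; this also produces a matrix P with P∗BP = I, as described above. A NumPy sketch with illustrative matrices:

```python
import numpy as np

# A symmetric, B symmetric positive-definite: a definite pencil.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Reduce via the Cholesky factor B = L L^T to the standard symmetric
# problem (L^{-1} A L^{-T}) y = lambda y; the generalized eigenvectors
# are v = L^{-T} y, collected as the columns of P.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T            # symmetric, so its eigenvalues are real
lams, Y = np.linalg.eigh(C)
P = Linv.T @ Y
```

The columns of P are B-orthonormal, and A P = B P Λ holds column by column.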
|
https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix
|
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. In geophysics, the term is used interchangeably with (geographically weighted) principal component analysis.[1]
Theithbasis function is chosen to be orthogonal to the basis functions from the first throughi− 1, and to minimize the residualvariance. That is, the basis functions are chosen to be different from each other, and to account for as much variance as possible.
The method of EOF analysis is similar in spirit toharmonic analysis, but harmonic analysis typically uses predetermined orthogonal functions, for example, sine and cosine functions at fixedfrequencies. In some cases the two methods may yield essentially the same results.
The basis functions are typically found by computing the eigenvectors of the covariance matrix of the data set. A more advanced technique is to form a kernel matrix from the data using a fixed kernel function; the basis functions obtained from the eigenvectors of the kernel matrix are then non-linear in the location of the data (see Mercer's theorem and the kernel trick for more information).
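A bare-bones version of the covariance-based construction can be sketched in NumPy. The data here are synthetic, with deliberately unequal variances, purely to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data set: 200 observations of a 5-variable field.
X = rng.standard_normal((200, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
X = X - X.mean(axis=0)              # center each variable

C = (X.T @ X) / (len(X) - 1)        # sample covariance matrix
evals, eofs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
order = evals.argsort()[::-1]       # reorder by explained variance
evals, eofs = evals[order], eofs[:, order]

pcs = X @ eofs                      # expansion coefficients (PC series)
```

The eigenvalues measure the variance explained by each EOF, and the expansion coefficients are mutually uncorrelated.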
|
https://en.wikipedia.org/wiki/Empirical_orthogonal_functions
|
Inmathematics,Fourier analysis(/ˈfʊrieɪ,-iər/)[1]is the study of the way generalfunctionsmay be represented or approximated by sums of simplertrigonometric functions. Fourier analysis grew from the study ofFourier series, and is named afterJoseph Fourier, who showed that representing a function as asumof trigonometric functions greatly simplifies the study ofheat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function intooscillatorycomponents is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known asFourier synthesis. For example, determining what componentfrequenciesare present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the termFourier analysisoften refers to the study of both operations.
The decomposition process itself is called aFourier transformation. Its output, theFourier transform, is often given a more specific name, which depends on thedomainand other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known asharmonic analysis. Eachtransformused for analysis (seelist of Fourier-related transforms) has a correspondinginversetransform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably theleast-squares spectral analysis(LSSA) methods that use aleast squaresfit ofsinusoidsto data samples, similar to Fourier analysis.[2][3]Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.[4]
Fourier analysis has many scientific applications – inphysics,partial differential equations,number theory,combinatorics,signal processing,digital image processing,probability theory,statistics,forensics,option pricing,cryptography,numerical analysis,acoustics,oceanography,sonar,optics,diffraction,geometry,proteinstructure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. Because a computer carries out these Fourier calculations rapidly, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument in a matter of seconds.[9]
Fourier transformation is also useful as a compact representation of a signal. For example,JPEGcompression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lowerarithmetic precision, and weak components are eliminated, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
Insignal processing, the Fourier transform often takes atime seriesor a function ofcontinuous time, and maps it into afrequency spectrum. That is, it takes a function from the time domain into thefrequencydomain; it is adecompositionof a function intosinusoidsof different frequencies; in the case of aFourier seriesordiscrete Fourier transform, the sinusoids areharmonicsof the fundamental frequency of the function being analyzed.
When a functions(t){\displaystyle s(t)}is a function of time and represents a physicalsignal, the transform has a standard interpretation as the frequency spectrum of the signal. Themagnitudeof the resulting complex-valued functionS(f){\displaystyle S(f)}at frequencyf{\displaystyle f}represents theamplitudeof a frequency component whoseinitial phaseis given by the angle ofS(f){\displaystyle S(f)}(polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyzespatialfrequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches asimage processing,heat conduction, andautomatic control.
When processing signals, such asaudio,radio waves, light waves,seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.[10]
Some examples include:
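A representative round trip (transform, simple manipulation, inverse transform) is a crude low-pass filter. The NumPy sketch below uses an illustrative 1-second signal whose two tones fall exactly on FFT bins:

```python
import numpy as np

fs = 1000                                 # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
# A 50 Hz component plus an unwanted 300 Hz component.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# Fourier-transform the signal...
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# ...manipulate the transformed data in a simple way
# (zero every component above 100 Hz)...
X[freqs > 100] = 0.0

# ...and reverse the transformation.
y = np.fft.irfft(X, n=len(x))
```

Because both tones sit exactly on analysis bins, the brick-wall cutoff removes the 300 Hz component cleanly; with arbitrary frequencies or streaming data, practical filters use smoother transitions to limit ringing.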
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t{\displaystyle t}), and the domain of the output (final) function is ordinary frequency, the transform of function s(t){\displaystyle s(t)} at frequency f{\displaystyle f} is given by the complex number: S(f)=∫−∞∞s(t)⋅e−i2πftdt{\displaystyle S(f)=\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}\,dt}
Evaluating this quantity for all values of f{\displaystyle f} produces the frequency-domain function. Then s(t){\displaystyle s(t)} can be represented as a recombination of complex exponentials of all possible frequencies: s(t)=∫−∞∞S(f)⋅ei2πftdf{\displaystyle s(t)=\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi ft}\,df}
which is the inverse transform formula. The complex number,S(f),{\displaystyle S(f),}conveys both amplitude and phase of frequencyf.{\displaystyle f.}
SeeFourier transformfor much more information, including:
The Fourier transform of a periodic function, sP(t),{\displaystyle s_{_{P}}(t),} with period P,{\displaystyle P,} becomes a Dirac comb function, modulated by a sequence of complex coefficients: ∫−∞∞sP(t)e−i2πftdt=∑k=−∞∞S[k]δ(f−kP){\displaystyle \int _{-\infty }^{\infty }s_{_{P}}(t)\,e^{-i2\pi ft}\,dt=\sum _{k=-\infty }^{\infty }S[k]\,\delta \left(f-{\frac {k}{P}}\right)}
The inverse transform, known as Fourier series, is a representation of sP(t){\displaystyle s_{_{P}}(t)} in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients: sP(t)=∑k=−∞∞S[k]⋅ei2πkt/P{\displaystyle s_{_{P}}(t)=\sum _{k=-\infty }^{\infty }S[k]\cdot e^{i2\pi kt/P}}
Any sP(t){\displaystyle s_{_{P}}(t)} can be expressed as a periodic summation of another function, s(t){\displaystyle s(t)}: sP(t)=∑m=−∞∞s(t−mP){\displaystyle s_{_{P}}(t)=\sum _{m=-\infty }^{\infty }s(t-mP)}
and the coefficients are proportional to samples of S(f){\displaystyle S(f)} at discrete intervals of 1P{\displaystyle {\frac {1}{P}}}: S[k]=1PS(kP){\displaystyle S[k]={\frac {1}{P}}\,S\left({\frac {k}{P}}\right)}
Note that anys(t){\displaystyle s(t)}whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recoverings(t){\displaystyle s(t)}(and thereforeS(f){\displaystyle S(f)}) from just these samples (i.e. from the Fourier series) is that the non-zero portion ofs(t){\displaystyle s(t)}be confined to a known interval of durationP,{\displaystyle P,}which is the frequency domain dual of theNyquist–Shannon sampling theorem.
SeeFourier seriesfor more information, including the historical development.
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function: S1T(f)≜∑k=−∞∞S(f−kT)=∑n=−∞∞s[n]⋅e−i2πfnT{\displaystyle S_{\tfrac {1}{T}}(f)\triangleq \sum _{k=-\infty }^{\infty }S\left(f-{\frac {k}{T}}\right)=\sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi fnT}}
which is known as the DTFT. Thus theDTFTof thes[n]{\displaystyle s[n]}sequence is also theFourier transformof the modulatedDirac combfunction.[B]
The Fourier series coefficients (and inverse transform) are defined by: s[n]≜T∫1TS1T(f)⋅ei2πfnTdf=T⋅s(nT){\displaystyle s[n]\triangleq T\int _{\frac {1}{T}}S_{\tfrac {1}{T}}(f)\cdot e^{i2\pi fnT}\,df=T\cdot s(nT)}
ParameterT{\displaystyle T}corresponds to the sampling interval, and this Fourier series can now be recognized as a form of thePoisson summation formula. Thus we have the important result that when a discrete data sequence,s[n],{\displaystyle s[n],}is proportional to samples of an underlying continuous function,s(t),{\displaystyle s(t),}one can observe a periodic summation of the continuous Fourier transform,S(f).{\displaystyle S(f).}Note that anys(t){\displaystyle s(t)}with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recoverS(f){\displaystyle S(f)}ands(t){\displaystyle s(t)}exactly. A sufficient condition for perfect recovery is that the non-zero portion ofS(f){\displaystyle S(f)}be confined to a known frequency interval of width1T.{\displaystyle {\tfrac {1}{T}}.}When that interval is[−12T,12T],{\displaystyle \left[-{\tfrac {1}{2T}},{\tfrac {1}{2T}}\right],}the applicable reconstruction formula is theWhittaker–Shannon interpolation formula. This is a cornerstone in the foundation ofdigital signal processing.
Another reason to be interested inS1T(f){\displaystyle S_{\tfrac {1}{T}}(f)}is that it often provides insight into the amount ofaliasingcaused by the sampling process.
Applications of the DTFT are not limited to sampled functions. SeeDiscrete-time Fourier transformfor more information on this and other topics, including:
Similar to a Fourier series, the DTFT of a periodic sequence,sN[n],{\displaystyle s_{_{N}}[n],}with periodN{\displaystyle N}, becomes a Dirac comb function, modulated by a sequence of complex coefficients (seeDTFT § Periodic data):
TheS[k]{\displaystyle S[k]}sequence is customarily known as theDFTof one cycle ofsN.{\displaystyle s_{_{N}}.}It is alsoN{\displaystyle N}-periodic, so it is never necessary to compute more thanN{\displaystyle N}coefficients. The inverse transform, also known as adiscrete Fourier series, is given by:
WhensN[n]{\displaystyle s_{_{N}}[n]}is expressed as aperiodic summationof another function:
the coefficients are samples ofS1T(f){\displaystyle S_{\tfrac {1}{T}}(f)}at discrete intervals of1P=1NT{\displaystyle {\tfrac {1}{P}}={\tfrac {1}{NT}}}:
Conversely, when one wants to compute an arbitrary number(N){\displaystyle (N)}of discrete samples of one cycle of a continuous DTFT,S1T(f),{\displaystyle S_{\tfrac {1}{T}}(f),}it can be done by computing the relatively simple DFT ofsN[n],{\displaystyle s_{_{N}}[n],}as defined above. In most cases,N{\displaystyle N}is chosen equal to the length of the non-zero portion ofs[n].{\displaystyle s[n].}IncreasingN,{\displaystyle N,}known aszero-paddingorinterpolation, results in more closely spaced samples of one cycle ofS1T(f).{\displaystyle S_{\tfrac {1}{T}}(f).}DecreasingN,{\displaystyle N,}causes overlap (adding) in the time-domain (analogous toaliasing), which corresponds to decimation in the frequency domain. (seeDiscrete-time Fourier transform § L=N×I) In most cases of practical interest, thes[n]{\displaystyle s[n]}sequence represents a longer sequence that was truncated by the application of a finite-lengthwindow functionorFIR filterarray.
The DFT can be computed using afast Fourier transform(FFT) algorithm, which makes it a practical and important transformation on computers.
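The speedup is easy to appreciate by comparing the O(N²) definition with the FFT. A small NumPy sketch (the direct dft function below is written for clarity, not speed):

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of S[k] = sum_n x[n] * exp(-i 2 pi k n / N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n[:, None]
    return np.exp(-2j * np.pi * k * n / N) @ x

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
```

np.fft.fft evaluates the same sum in O(N log N) operations, and np.fft.ifft inverts it exactly (up to rounding).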
SeeDiscrete Fourier transformfor much more information, including:
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence viaDirac deltaandDirac combfunctions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, P or N. But these formulas do not require that condition.
S1T(kNT)⏟S[k]≜∑n=−∞∞s[n]⋅e−i2πknN≡∑NsN[n]⋅e−i2πknN⏟DFT{\displaystyle {\begin{aligned}\underbrace {S_{\tfrac {1}{T}}\left({\frac {k}{NT}}\right)} _{S[k]}\,&\triangleq \,\sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi {\frac {kn}{N}}}\\&\equiv \underbrace {\sum _{N}s_{_{N}}[n]\cdot e^{-i2\pi {\frac {kn}{N}}}} _{\text{DFT}}\,\end{aligned}}}
∑n=−∞∞s[n]⋅δ(t−nT)=∫−∞∞S1T(f)⋅ei2πftdf⏟inverse Fourier transform{\displaystyle \sum _{n=-\infty }^{\infty }s[n]\cdot \delta (t-nT)=\underbrace {\int _{-\infty }^{\infty }S_{\tfrac {1}{T}}(f)\cdot e^{i2\pi ft}\,df} _{\text{inverse Fourier transform}}\,}
sN[n]=1N∑NS[k]⋅ei2πknN⏟inverse DFT{\displaystyle s_{_{N}}[n]=\underbrace {{\frac {1}{N}}\sum _{N}S[k]\cdot e^{i2\pi {\frac {kn}{N}}}} _{\text{inverse DFT}}}
When the real and imaginary parts of a complex function are decomposed into theireven and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[11]
From this, various relationships are apparent, for example:
An early form of harmonic series dates back to ancientBabylonian mathematics, where they were used to computeephemerides(tables of astronomical positions).[12][13][14][15]
The Classical Greek concepts ofdeferent and epicyclein thePtolemaic systemof astronomy were related to Fourier series (seeDeferent and epicycle § Mathematical formalism).
In modern times, variants of the discrete Fourier transform were used byAlexis Clairautin 1754 to compute an orbit,[16]which has been described as the first formula for the DFT,[17]and in 1759 byJoseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string.[17]Technically, Clairaut's work was a cosine-only series (a form ofdiscrete cosine transform), while Lagrange's work was a sine-only series (a form ofdiscrete sine transform); a true cosine+sine DFT was used byGaussin 1805 fortrigonometric interpolationofasteroidorbits.[18]Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.[17]
An early modern development toward Fourier analysis was the 1770 paperRéflexions sur la résolution algébrique des équationsby Lagrange, which in the method ofLagrange resolventsused a complex Fourier decomposition to study the solution of a cubic:[19]Lagrange transformed the rootsx1,{\displaystyle x_{1},}x2,{\displaystyle x_{2},}x3{\displaystyle x_{3}}into the resolvents:
where ζ{\displaystyle \zeta } is a cube root of unity; this transformation is the DFT of order 3.
A number of authors, notablyJean le Rond d'Alembert, andCarl Friedrich Gaussusedtrigonometric seriesto study theheat equation,[20]but the breakthrough development was the 1807 paperMémoire sur la propagation de la chaleur dans les corps solidesbyJoseph Fourier, whose crucial insight was to modelallfunctions by trigonometric series, introducing the Fourier series. Independently of Fourier, astronomerFriedrich Wilhelm Besselalso introduced Fourier series to solveKepler's equation. His work was published in 1819, unaware of Fourier's work which remained unpublished until 1822.[21]
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory:Daniel BernoulliandLeonhard Eulerhad introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.[17]
The subsequent development of the field is known asharmonic analysis, and is also an early instance ofrepresentation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 byCarl Friedrich Gausswhen interpolating measurements of the orbit of the asteroidsJunoandPallas, although that particular FFT algorithm is more often attributed to its modern rediscoverersCooley and Tukey.[18][16]
Insignal processingterms, a function (of time) is a representation of a signal with perfecttime resolution, but no frequency information, while the Fourier transform has perfectfrequency resolution, but no time information.
As alternatives to the Fourier transform, intime–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by theuncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as theshort-time Fourier transform, theGabor transformorfractional Fourier transform(FRFT), or can use different functions to represent signals, as inwavelet transformsandchirplet transforms, with the wavelet analog of the (continuous) Fourier transform being thecontinuous wavelet transform.
The Fourier variants can also be generalized to Fourier transforms on arbitrarylocally compactAbeliantopological groups, which are studied inharmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of theconvolution theorem, which relates Fourier transforms andconvolutions. See also thePontryagin dualityfor the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets,[22] even discrete cosets.
|
https://en.wikipedia.org/wiki/Fourier_analysis
|
Inlinear algebra, thegeneralized singular value decomposition(GSVD) is the name of two different techniques based on thesingular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like thehigher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.
Thegeneralized singular value decomposition(GSVD) is amatrix decompositionon a pair of matrices which generalizes thesingular value decomposition. It was introduced by Van Loan[1]in 1976 and later developed by Paige andSaunders,[2]which is the version described here. In contrast to the SVD, the GSVD decomposes simultaneously a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD,[3][4][5]are extensively used in the study of theconditioningandregularizationof linear systems with respect to quadraticsemi-norms. In the following, letF=R{\displaystyle \mathbb {F} =\mathbb {R} }, orF=C{\displaystyle \mathbb {F} =\mathbb {C} }.
The generalized singular value decomposition of matrices {\displaystyle A_{1}\in \mathbb {F} ^{m_{1}\times n}} and {\displaystyle A_{2}\in \mathbb {F} ^{m_{2}\times n}} is {\displaystyle {\begin{aligned}A_{1}&=U_{1}\Sigma _{1}[W^{*}D,0_{D}]Q^{*},\\A_{2}&=U_{2}\Sigma _{2}[W^{*}D,0_{D}]Q^{*},\end{aligned}}} where
We denote {\displaystyle \alpha _{1}=\cdots =\alpha _{r}=1}, {\displaystyle \alpha _{r+s+1}=\cdots =\alpha _{k}=0}, {\displaystyle \beta _{1}=\cdots =\beta _{r}=0}, and {\displaystyle \beta _{r+s+1}=\cdots =\beta _{k}=1}. While {\displaystyle \Sigma _{1}} is diagonal, {\displaystyle \Sigma _{2}} is not always diagonal, because of the leading rectangular zero matrix; instead {\displaystyle \Sigma _{2}} is "bottom-right-diagonal".
There are many variations of the GSVD. These variations are related to the fact that it is always possible to multiply {\displaystyle Q^{*}} from the left by {\displaystyle EE^{*}=I}, where {\displaystyle E\in \mathbb {F} ^{n\times n}} is an arbitrary unitary matrix. We denote
Here are some variations of the GSVD:
A generalized singular value of {\displaystyle A_{1}} and {\displaystyle A_{2}} is a pair {\displaystyle (a,b)\in \mathbb {R} ^{2}} such that
limδ→0det(b2A1∗A1−a2A2∗A2+δIn)/det(δIn−k)=0,a2+b2=1,a,b≥0.{\displaystyle {\begin{aligned}\lim _{\delta \to 0}\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta I_{n})/\det(\delta I_{n-k})&=0,\\a^{2}+b^{2}&=1,\\a,b&\geq 0.\end{aligned}}}We have
By these properties we can show that the generalized singular values are exactly the pairs {\displaystyle (\alpha _{i},\beta _{i})}. We have {\displaystyle {\begin{aligned}&\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta I_{n})\\=&\det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2}+\delta QQ^{*})\\=&\det \left(Q{\begin{bmatrix}Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y+\delta I_{k}&0\\0&\delta I_{n-k}\end{bmatrix}}Q^{*}\right)\\=&\det(\delta I_{n-k})\det(Y^{*}(b^{2}\Sigma _{1}^{*}\Sigma _{1}-a^{2}\Sigma _{2}^{*}\Sigma _{2})Y+\delta I_{k}).\end{aligned}}} Therefore
This expression is zero exactly when {\displaystyle a=\alpha _{i}} and {\displaystyle b=\beta _{i}} for some {\displaystyle i}.
In [2], the generalized singular values are claimed to be those which solve {\displaystyle \det(b^{2}A_{1}^{*}A_{1}-a^{2}A_{2}^{*}A_{2})=0}. However, this claim only holds when {\displaystyle k=n}, since otherwise the determinant is zero for every pair {\displaystyle (a,b)\in \mathbb {R} ^{2}}; this can be seen by substituting {\displaystyle \delta =0} above.
Define {\displaystyle E^{+}=E^{-1}} for any invertible matrix {\displaystyle E\in \mathbb {F} ^{n\times n}}, {\displaystyle 0^{+}=0^{*}} for any zero matrix {\displaystyle 0\in \mathbb {F} ^{m\times n}}, and {\displaystyle \left\lceil E_{1},E_{2}\right\rfloor ^{+}=\left\lceil E_{1}^{+},E_{2}^{+}\right\rfloor } for any block-diagonal matrix. Then define {\displaystyle A_{i}^{+}=Q{\begin{bmatrix}Y^{-1}\\0\end{bmatrix}}\Sigma _{i}^{+}U_{i}^{*}}. It can be shown that {\displaystyle A_{i}^{+}} as defined here is a generalized inverse of {\displaystyle A_{i}}; in particular a {\displaystyle \{1,2,3\}}-inverse of {\displaystyle A_{i}}. Since it does not in general satisfy {\displaystyle (A_{i}^{+}A_{i})^{*}=A_{i}^{+}A_{i}}, this is not the Moore–Penrose inverse; otherwise we could derive {\displaystyle (AB)^{+}=B^{+}A^{+}} for any choice of matrices, which only holds for a certain class of matrices.
SupposeQ=[Q1Q2]{\displaystyle Q={\begin{bmatrix}Q_{1}&Q_{2}\end{bmatrix}}}, whereQ1∈Fn×k{\displaystyle Q_{1}\in \mathbb {F} ^{n\times k}}andQ2∈Fn×(n−k){\displaystyle Q_{2}\in \mathbb {F} ^{n\times (n-k)}}. This generalized inverse has the following properties:
A generalized singular ratio of {\displaystyle A_{1}} and {\displaystyle A_{2}} is {\displaystyle \sigma _{i}=\alpha _{i}\beta _{i}^{+}}. By the above properties, {\displaystyle A_{1}A_{2}^{+}=U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}}. Note that {\displaystyle \Sigma _{1}\Sigma _{2}^{+}=\lceil 0,S_{1}S_{2}^{-1},0\rfloor } is diagonal and, ignoring the leading zeros, contains the singular ratios in decreasing order. If {\displaystyle A_{2}} is invertible, then {\displaystyle \Sigma _{1}\Sigma _{2}^{+}} has no leading zeros, the generalized singular ratios are the singular values, and {\displaystyle U_{1}} and {\displaystyle U_{2}} are the matrices of singular vectors, of the matrix {\displaystyle A_{1}A_{2}^{+}=A_{1}A_{2}^{-1}}. In fact, computing the SVD of {\displaystyle A_{1}A_{2}^{-1}} is one of the motivations for the GSVD, as "forming {\displaystyle AB^{-1}} and finding its SVD can lead to unnecessary and large numerical errors when {\displaystyle B} is ill-conditioned for solution of equations".[2] Hence the sometimes used name "quotient SVD", although this is not the only reason for using the GSVD. If {\displaystyle A_{2}} is not invertible, then {\displaystyle U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}} is still the SVD of {\displaystyle A_{1}A_{2}^{+}} if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing-order SVD can be found by moving the leading zeros to the back: {\displaystyle U_{1}\Sigma _{1}\Sigma _{2}^{+}U_{2}^{*}=(U_{1}P_{1})P_{1}^{*}\Sigma _{1}\Sigma _{2}^{+}P_{2}(P_{2}^{*}U_{2}^{*})}, where {\displaystyle P_{1}} and {\displaystyle P_{2}} are appropriate permutation matrices. Since rank equals the number of non-zero singular values, {\displaystyle \mathrm {rank} (A_{1}A_{2}^{+})=s}.
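The quotient interpretation above can be checked numerically for a full-column-rank pair. The sketch below (assuming NumPy and SciPy; no dedicated GSVD routine is used) recovers the squared generalized singular ratios as the generalized eigenvalues of the pencil (A₁*A₁, A₂*A₂) and compares them with the singular values of A₁A₂⁺:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A1 = rng.standard_normal((5, 3))
A2 = rng.standard_normal((4, 3))      # full column rank almost surely

# Squared generalized singular ratios solve the generalized symmetric
# eigenproblem A1^T A1 v = lambda A2^T A2 v.
lam = eigh(A1.T @ A1, A2.T @ A2, eigvals_only=True)
ratios = np.sqrt(np.sort(lam)[::-1])

# They match the nonzero singular values of A1 @ pinv(A2), i.e. the
# "quotient SVD" values.
sv = np.linalg.svd(A1 @ np.linalg.pinv(A2), compute_uv=False)
```

Because A₁A₂⁺ here has rank 3, its fourth singular value vanishes and only the leading three are compared.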
Let
ThenC=P⌈D,0⌋Q∗=[P1D,0]Q∗=[U1Σ1W∗D0U2Σ2W∗D0]Q∗=[U1Σ1[W∗D,0]Q∗U2Σ2[W∗D,0]Q∗].{\displaystyle {\begin{aligned}C&=P\lceil D,0\rfloor Q^{*}\\{}&=[P_{1}D,0]Q^{*}\\{}&={\begin{bmatrix}U_{1}\Sigma _{1}W^{*}D&0\\U_{2}\Sigma _{2}W^{*}D&0\end{bmatrix}}Q^{*}\\{}&={\begin{bmatrix}U_{1}\Sigma _{1}[W^{*}D,0]Q^{*}\\U_{2}\Sigma _{2}[W^{*}D,0]Q^{*}\end{bmatrix}}.\end{aligned}}}We also have[U1∗00U2∗]P1W=[Σ1Σ2].{\displaystyle {\begin{bmatrix}U_{1}^{*}&0\\0&U_{2}^{*}\end{bmatrix}}P_{1}W={\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}.}ThereforeΣ1∗Σ1+Σ2∗Σ2=[Σ1Σ2]∗[Σ1Σ2]=W∗P1∗[U100U2][U1∗00U2∗]P1W=I.{\displaystyle \Sigma _{1}^{*}\Sigma _{1}+\Sigma _{2}^{*}\Sigma _{2}={\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}^{*}{\begin{bmatrix}\Sigma _{1}\\\Sigma _{2}\end{bmatrix}}=W^{*}P_{1}^{*}{\begin{bmatrix}U_{1}&0\\0&U_{2}\end{bmatrix}}{\begin{bmatrix}U_{1}^{*}&0\\0&U_{2}^{*}\end{bmatrix}}P_{1}W=I.}SinceP1{\displaystyle P_{1}}has orthonormal columns,||P1||2≤1{\displaystyle ||P_{1}||_{2}\leq 1}. Therefore||Σ1||2=||U1∗P1W||2=||P1||2≤1.{\displaystyle ||\Sigma _{1}||_{2}=||U_{1}^{*}P_{1}W||_{2}=||P_{1}||_{2}\leq 1.}We also have for eachx∈Rk{\displaystyle x\in \mathbb {R} ^{k}}such that||x||2=1{\displaystyle ||x||_{2}=1}that||P21x||22≤||P11x||22+||P21x||22=||P1x||22≤1.{\displaystyle ||P_{21}x||_{2}^{2}\leq ||P_{11}x||_{2}^{2}+||P_{21}x||_{2}^{2}=||P_{1}x||_{2}^{2}\leq 1.}Therefore||P21||2≤1{\displaystyle ||P_{21}||_{2}\leq 1}, and||Σ2||2=||U2∗P21W||2=||P21||2≤1.{\displaystyle ||\Sigma _{2}||_{2}=||U_{2}^{*}P_{21}W||_{2}=||P_{21}||_{2}\leq 1.}
The GSVD, formulated as a comparative spectral decomposition,[6] has been successfully applied to signal processing and data science, e.g., in genomic signal processing.[7][8][9]
These applications inspired several additional comparative spectral decompositions, i.e., the higher-order GSVD (HO GSVD)[10] and the tensor GSVD.[11][12]
It has also found applications in estimating the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space.[13]
The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition.[14][15][16] This form of the GSVD is an extension of the SVD as such. Given the SVD of an m×n real or complex matrix M
where
where I is the identity matrix and where {\displaystyle U} and {\displaystyle V} are orthonormal given their constraints ({\displaystyle W_{u}} and {\displaystyle W_{v}}). Additionally, {\displaystyle W_{u}} and {\displaystyle W_{v}} are positive definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and correspondence analysis.
The weighted form of the GSVD is called as such because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis).[17]
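Under the stated constraints, the weighted decomposition can be obtained from an ordinary SVD of a symmetrically rescaled matrix. A minimal NumPy sketch, assuming diagonal positive-definite weight matrices (the weights below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((5, 3))
wu = rng.uniform(0.5, 2.0, 5)         # row weights (positive)
wv = rng.uniform(0.5, 2.0, 3)         # column weights (positive)
Wu, Wv = np.diag(wu), np.diag(wv)

# Ordinary SVD of Wu^(1/2) M Wv^(1/2), then undo the rescaling so that
# U is Wu-orthonormal and V is Wv-orthonormal.
Ut, s, Vt = np.linalg.svd(
    np.diag(np.sqrt(wu)) @ M @ np.diag(np.sqrt(wv)), full_matrices=False)
U = np.diag(1 / np.sqrt(wu)) @ Ut
V = np.diag(1 / np.sqrt(wv)) @ Vt.T
```

The factors satisfy M = U diag(s) Vᵀ with UᵀWuU = I and VᵀWvV = I, which is exactly the constraint structure described above.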
|
https://en.wikipedia.org/wiki/Generalized_singular_value_decomposition
|
In mathematics, in particular functional analysis, the singular values of a compact operator {\displaystyle T:X\rightarrow Y} acting between Hilbert spaces {\displaystyle X} and {\displaystyle Y} are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator {\displaystyle T^{*}T} (where {\displaystyle T^{*}} denotes the adjoint of {\displaystyle T}).
The singular values are non-negative real numbers, usually listed in decreasing order (σ1(T), σ2(T), …). The largest singular value σ1(T) is equal to the operator norm of T (see Min-max theorem).
If T acts on Euclidean space {\displaystyle \mathbb {R} ^{n}}, there is a simple geometric interpretation for the singular values: consider the image by {\displaystyle T} of the unit sphere; this is an ellipsoid, and the lengths of its semi-axes are the singular values of {\displaystyle T} (the figure provides an example in {\displaystyle \mathbb {R} ^{2}}).
The singular values are the absolute values of the eigenvalues of a normal matrix A, because the spectral theorem can be applied to obtain unitary diagonalization of {\displaystyle A} as {\displaystyle A=U\Lambda U^{*}}. Therefore, {\textstyle {\sqrt {A^{*}A}}={\sqrt {U\Lambda ^{*}\Lambda U^{*}}}=U\left|\Lambda \right|U^{*}}.
Most norms on Hilbert space operators studied are defined using singular values. For example, the Ky Fan k-norm is the sum of the first k singular values, the trace norm is the sum of all singular values, and the Schatten norm is the pth root of the sum of the pth powers of the singular values. Note that each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.
In the finite-dimensional case, a matrix can always be decomposed in the form {\displaystyle \mathbf {U\Sigma V^{*}} }, where {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V^{*}} } are unitary matrices and {\displaystyle \mathbf {\Sigma } } is a rectangular diagonal matrix with the singular values lying on the diagonal. This is the singular value decomposition.
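The connection between singular values, the eigenvalues of A*A, and the operator norm is easy to verify numerically; a small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)
# Square roots of the eigenvalues of A^T A, in decreasing order.
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
op_norm = np.linalg.norm(A, 2)        # spectral (operator) norm
```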
For {\displaystyle A\in \mathbb {C} ^{m\times n}} and {\displaystyle i=1,2,\ldots ,\min\{m,n\}}.
Min-max theorem for singular values. Here {\displaystyle U:\dim(U)=i} is a subspace of {\displaystyle \mathbb {C} ^{n}} of dimension {\displaystyle i}.
Matrix transpose and conjugate do not alter singular values.
For any unitaryU∈Cm×m,V∈Cn×n.{\displaystyle U\in \mathbb {C} ^{m\times m},V\in \mathbb {C} ^{n\times n}.}
Relation to eigenvalues:
Relation totrace:
If {\displaystyle A^{*}A} is full rank, the product of singular values is {\displaystyle \det {\sqrt {A^{*}A}}}.
If {\displaystyle AA^{*}} is full rank, the product of singular values is {\displaystyle \det {\sqrt {AA^{*}}}}.
If {\displaystyle A} is square and full rank, the product of singular values is {\displaystyle |\det A|}.
If {\displaystyle A} is normal, then {\displaystyle \sigma (A)=|\lambda (A)|}; that is, its singular values are the absolute values of its eigenvalues.
For a generic rectangular matrix {\displaystyle A}, let {\textstyle {\tilde {A}}={\begin{bmatrix}0&A\\A^{*}&0\end{bmatrix}}} be its augmented matrix. It has eigenvalues {\textstyle \pm \sigma (A)} (where {\textstyle \sigma (A)} are the singular values of {\textstyle A}) and the remaining eigenvalues are zero. Let {\textstyle A=U\Sigma V^{*}} be the singular value decomposition; then the eigenvectors of {\textstyle {\tilde {A}}} are {\textstyle {\begin{bmatrix}\mathbf {u} _{i}\\\pm \mathbf {v} _{i}\end{bmatrix}}} for {\displaystyle \pm \sigma _{i}}.[1]: 52
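The augmented-matrix relation can be verified directly; a small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2))
s = np.linalg.svd(A, compute_uv=False)

# The Hermitian augmented matrix [[0, A], [A*, 0]] has eigenvalues
# +/- sigma_i, padded with |m - n| zeros.
A_tilde = np.block([[np.zeros((3, 3)), A], [A.T, np.zeros((2, 2))]])
ev = np.sort(np.linalg.eigvalsh(A_tilde))
expected = np.sort(np.concatenate([s, -s, np.zeros(1)]))
```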
The smallest singular value of a matrix A is σn(A). It has the following properties for a non-singular matrix A:
Intuitively, if σn(A) is small, then the rows of A are "almost" linearly dependent. If σn(A) = 0, then the rows of A are linearly dependent and A is not invertible.
See also.[3]
ForA∈Cm×n.{\displaystyle A\in \mathbb {C} ^{m\times n}.}
ForA,B∈Cm×n{\displaystyle A,B\in \mathbb {C} ^{m\times n}}
ForA,B∈Cn×n{\displaystyle A,B\in \mathbb {C} ^{n\times n}}
ForA,B∈Cm×n{\displaystyle A,B\in \mathbb {C} ^{m\times n}}[4]2σi(AB∗)≤σi(A∗A+B∗B),i=1,2,…,n.{\displaystyle 2\sigma _{i}(AB^{*})\leq \sigma _{i}\left(A^{*}A+B^{*}B\right),\quad i=1,2,\ldots ,n.}
ForA∈Cn×n{\displaystyle A\in \mathbb {C} ^{n\times n}}.
This concept was introduced by Erhard Schmidt in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the nth singular number:[6]
This formulation made it possible to extend the notion of singular values to operators in Banach space.
Note that there is a more general concept of s-numbers, which also includes Gelfand and Kolmogorov widths.
|
https://en.wikipedia.org/wiki/Singular_value#Inequalities_about_singular_values
|
In applied mathematics, k-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. k-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary and updating the atoms in the dictionary to better fit the data. It is structurally related to the expectation–maximization (EM) algorithm.[1][2] k-SVD is widely used in applications such as image processing, audio processing, biology, and document analysis.
k-SVD is a kind of generalization of k-means, as follows. k-means clustering can also be regarded as a method of sparse representation: finding the best possible codebook to represent the data samples {\displaystyle \{y_{i}\}_{i=1}^{M}} by nearest neighbor, by solving
which is nearly equivalent to
which is k-means that allows "weights".
The letter F denotes the Frobenius norm. The sparse representation term {\displaystyle x_{i}=e_{k}} forces the k-means algorithm to use only one atom (column) in the dictionary {\displaystyle D}. To relax this constraint, the target of the k-SVD algorithm is to represent the signal as a linear combination of atoms in {\displaystyle D}.
The k-SVD algorithm follows the construction flow of the k-means algorithm. However, in contrast to k-means, in order to achieve a linear combination of atoms in {\displaystyle D}, the sparsity term of the constraint is relaxed so that the number of nonzero entries of each column {\displaystyle x_{i}} can be more than 1, but less than a number {\displaystyle T_{0}}.
So, the objective function becomes
or in another objective form
In the k-SVD algorithm, {\displaystyle D} is first fixed and the best coefficient matrix {\displaystyle X} is found. As finding the truly optimal {\displaystyle X} is hard, we use an approximation pursuit method. Any algorithm such as orthogonal matching pursuit (OMP) can be used for the calculation of the coefficients, as long as it can supply a solution with a fixed and predetermined number of nonzero entries {\displaystyle T_{0}}.
After the sparse coding task, the next step is to search for a better dictionary {\displaystyle D}. However, finding the whole dictionary all at once is impossible, so the process updates only one column of the dictionary {\displaystyle D} at a time, while fixing {\displaystyle X}. The update of the {\displaystyle k}-th column is done by rewriting the penalty term as
where {\displaystyle x_{k}^{\text{T}}} denotes the k-th row of X.
By decomposing the multiplication {\displaystyle DX} into a sum of {\displaystyle K} rank-1 matrices, we can assume the other {\displaystyle K-1} terms are fixed while the {\displaystyle k}-th remains unknown. After this step, we can solve the minimization problem by approximating the {\displaystyle E_{k}} term with a rank-1 matrix using singular value decomposition, then updating {\displaystyle d_{k}} with it. However, the new solution for the vector {\displaystyle x_{k}^{\text{T}}} is not guaranteed to be sparse.
To address this problem, define {\displaystyle \omega _{k}} as
which points to the examples {\displaystyle \{y_{i}\}_{i=1}^{N}} that use atom {\displaystyle d_{k}} (i.e., the entries of {\displaystyle x_{i}} that are nonzero). Then, define {\displaystyle \Omega _{k}} as a matrix of size {\displaystyle N\times |\omega _{k}|}, with ones on the {\displaystyle (i,\omega _{k}(i))}-th entries and zeros otherwise. When multiplying {\displaystyle {\tilde {x}}_{k}^{\text{T}}=x_{k}^{\text{T}}\Omega _{k}}, this shrinks the row vector {\displaystyle x_{k}^{\text{T}}} by discarding the zero entries. Similarly, the multiplication {\displaystyle {\tilde {Y}}_{k}=Y\Omega _{k}} is the subset of the examples that currently use the {\displaystyle d_{k}} atom. The same effect can be seen on {\displaystyle {\tilde {E}}_{k}=E_{k}\Omega _{k}}.
So the minimization problem as mentioned before becomes
and can be done by directly using SVD. SVD decomposes {\displaystyle {\tilde {E}}_{k}} into {\displaystyle U\Delta V^{\text{T}}}. The solution for {\displaystyle d_{k}} is the first column of U, and the coefficient vector {\displaystyle {\tilde {x}}_{k}^{\text{T}}} is the first column of V multiplied by {\displaystyle \Delta (1,1)}. After updating the whole dictionary, the process then turns to iteratively solving for X, then iteratively solving for D.
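The alternation described above can be sketched compactly in NumPy. The following is a simplified single-iteration implementation; the OMP routine here is a bare-bones greedy variant for illustration, not a production solver, and all sizes are illustrative:

```python
import numpy as np

def omp(D, y, T0):
    """Greedy orthogonal matching pursuit (bare-bones): sparse-code y
    over dictionary D using at most T0 atoms."""
    x = np.zeros(D.shape[1])
    residual = y.astype(float)
    support, coef = [], np.zeros(0)
    for _ in range(T0):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd_step(Y, D, T0):
    """One k-SVD iteration: sparse coding, then a rank-1 SVD update of
    each atom restricted to the samples that use it."""
    X = np.column_stack([omp(D, y, T0) for y in Y.T])
    for k in range(D.shape[1]):
        omega = np.nonzero(X[k, :])[0]        # samples using atom k
        if omega.size == 0:
            continue
        # Residual with atom k's contribution removed, on those samples.
        E = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                     # updated (unit-norm) atom
        X[k, omega] = S[0] * Vt[0, :]         # updated coefficients
    return D, X

rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 30))              # training signals as columns
D0 = rng.standard_normal((8, 5))
D0 /= np.linalg.norm(D0, axis=0)              # unit-norm initial atoms
D, X = ksvd_step(Y, D0, T0=2)
```

Note that restricting the update to the columns in ω_k is what preserves the sparsity pattern of X, as discussed above.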
Choosing an appropriate "dictionary" for a dataset is a non-convex problem, and k-SVD operates by an iterative update which is not guaranteed to find the global optimum.[2] However, this is common to other algorithms for this purpose, and k-SVD works fairly well in practice.[2][better source needed]
|
https://en.wikipedia.org/wiki/K-SVD
|
This is a list of linear transformations of functions related to Fourier analysis. Such transformations map a function to a set of coefficients of basis functions, where the basis functions are sinusoidal and are therefore strongly localized in the frequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a single frequency component.
Applied to functions of continuous arguments, Fourier-related transforms include:
For usage on computers, in number theory and algebra, discrete arguments (e.g. functions of a series of discrete samples) are often more appropriate, and are handled by the transforms (analogous to the continuous cases above):
The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on a fast Fourier transform (FFT). The Nyquist–Shannon sampling theorem is critical for understanding the output of such discrete transforms.
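As a small illustration of a discrete transform computed via the FFT, assuming NumPy (the sample rate and frequency are illustrative):

```python
import numpy as np

fs, n = 64, 64                        # sample rate (Hz) and length: 1 second
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5 * t)         # a 5 Hz sinusoid
X = np.fft.rfft(x)                    # DFT of the real signal via the FFT
freqs = np.fft.rfftfreq(n, d=1 / fs)  # frequency of each DFT bin
peak = freqs[np.argmax(np.abs(X))]    # the spectrum peaks at 5 Hz
```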
|
https://en.wikipedia.org/wiki/List_of_Fourier-related_transforms
|
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability.[1] (The number of buckets is much smaller than the universe of possible input items.)[1] Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).[2][3]
Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.[4][5]
A finite family {\displaystyle {\mathcal {F}}} of functions {\displaystyle h\colon M\to S} is defined to be an LSH family[1][6][7] for
if it satisfies the following condition. For any two points {\displaystyle a,b\in M} and a hash function {\displaystyle h} chosen uniformly at random from {\displaystyle {\mathcal {F}}}:
Such a family {\displaystyle {\mathcal {F}}} is called {\displaystyle (r,cr,p_{1},p_{2})}-sensitive.
Alternatively[8] it is possible to define an LSH family on a universe of items U endowed with a similarity function {\displaystyle \phi \colon U\times U\to [0,1]}. In this setting, a LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function {\displaystyle h\in H} chosen according to D satisfies {\displaystyle Pr[h(a)=h(b)]=\phi (a,b)} for each {\displaystyle a,b\in U}.
Given a {\displaystyle (d_{1},d_{2},p_{1},p_{2})}-sensitive family {\displaystyle {\mathcal {F}}}, we can construct new families {\displaystyle {\mathcal {G}}} by either the AND-construction or the OR-construction of {\displaystyle {\mathcal {F}}}.[1]
To create an AND-construction, we define a new family {\displaystyle {\mathcal {G}}} of hash functions g, where each function g is constructed from k random functions {\displaystyle h_{1},\ldots ,h_{k}} from {\displaystyle {\mathcal {F}}}. We then say that for a hash function {\displaystyle g\in {\mathcal {G}}}, {\displaystyle g(x)=g(y)} if and only if all {\displaystyle h_{i}(x)=h_{i}(y)} for {\displaystyle i=1,2,\ldots ,k}. Since the members of {\displaystyle {\mathcal {F}}} are independently chosen for any {\displaystyle g\in {\mathcal {G}}}, {\displaystyle {\mathcal {G}}} is a {\displaystyle (d_{1},d_{2},p_{1}^{k},p_{2}^{k})}-sensitive family.
To create an OR-construction, we define a new family {\displaystyle {\mathcal {G}}} of hash functions g, where each function g is constructed from k random functions {\displaystyle h_{1},\ldots ,h_{k}} from {\displaystyle {\mathcal {F}}}. We then say that for a hash function {\displaystyle g\in {\mathcal {G}}}, {\displaystyle g(x)=g(y)} if and only if {\displaystyle h_{i}(x)=h_{i}(y)} for one or more values of i. Since the members of {\displaystyle {\mathcal {F}}} are independently chosen for any {\displaystyle g\in {\mathcal {G}}}, {\displaystyle {\mathcal {G}}} is a {\displaystyle (d_{1},d_{2},1-(1-p_{1})^{k},1-(1-p_{2})^{k})}-sensitive family.
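The effect of composing the two constructions (AND over k functions inside each of L OR'd groups) can be seen numerically; the base probabilities below are illustrative:

```python
def amplified(p, k, L):
    """Collision probability after an AND over k functions followed by
    an OR over L independent groups: 1 - (1 - p^k)^L."""
    return 1 - (1 - p ** k) ** L

# Illustrative base collision probabilities for near and far pairs.
p_near, p_far = 0.8, 0.4
k, L = 5, 20
near, far = amplified(p_near, k, L), amplified(p_far, k, L)
```

The composition pushes the near-pair probability toward 1 and the far-pair probability toward 0, widening the gap between them.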
LSH has been applied to several problem domains, including:
One of the easiest ways to construct an LSH family is by bit sampling.[7] This approach works for the Hamming distance over d-dimensional vectors {\displaystyle \{0,1\}^{d}}. Here, the family {\displaystyle {\mathcal {F}}} of hash functions is simply the family of all the projections of points onto one of the {\displaystyle d} coordinates, i.e., {\displaystyle {\mathcal {F}}=\{h\colon \{0,1\}^{d}\to \{0,1\}\mid h(x)=x_{i}{\text{ for some }}i\in \{1,\ldots ,d\}\}}, where {\displaystyle x_{i}} is the {\displaystyle i}-th coordinate of {\displaystyle x}. A random function {\displaystyle h} from {\displaystyle {\mathcal {F}}} simply selects a random bit from the input point. This family has the following parameters: {\displaystyle P_{1}=1-R/d}, {\displaystyle P_{2}=1-cR/d}.
That is, any two vectors {\displaystyle x,y} with Hamming distance at most {\displaystyle R} collide under a random {\displaystyle h} with probability at least {\displaystyle P_{1}}.
Any {\displaystyle x,y} with Hamming distance at least {\displaystyle cR} collide with probability at most {\displaystyle P_{2}}.
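A quick Monte Carlo check of the bit-sampling collision probability, assuming NumPy (the dimension and distance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 64
x = rng.integers(0, 2, d)
y = x.copy()
flipped = rng.choice(d, size=8, replace=False)
y[flipped] ^= 1                       # Hamming distance exactly 8

# h(v) = v[i] for a uniformly random coordinate i; the collision
# probability is 1 - dist/d = 1 - 8/64.
trials = 20000
coords = rng.integers(0, d, trials)
collision_rate = np.mean(x[coords] == y[coords])
```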
Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, for {\displaystyle A\subseteq S} let {\displaystyle h(A)=\min _{a\in A}\{\pi (a)\}}. Each possible choice of π defines a single hash function h mapping input sets to elements of S.
Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets {\displaystyle A,B\subseteq S} the event that {\displaystyle h(A)=h(B)} corresponds exactly to the event that the minimizer of π over {\displaystyle A\cup B} lies inside {\displaystyle A\cap B}. As h was chosen uniformly at random, {\displaystyle Pr[h(A)=h(B)]=J(A,B)} and {\displaystyle (H,D)} define an LSH scheme for the Jaccard index.
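The MinHash estimator above can be checked empirically with explicit random permutations (the sets and trial count are illustrative):

```python
import random

def minhash(s, perm):
    """MinHash of set s under permutation perm (a list mapping
    element -> rank)."""
    return min(perm[x] for x in s)

random.seed(0)
n = 100
A = set(range(0, 60))
B = set(range(30, 90))
jaccard = len(A & B) / len(A | B)     # = 1/3 here

trials = 5000
hits = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)              # a uniformly random permutation
    hits += minhash(A, perm) == minhash(B, perm)
estimate = hits / trials              # estimates J(A, B)
```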
Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations is at least of size {\displaystyle \operatorname {lcm} \{\,1,2,\ldots ,n\,\}\geq e^{n-o(n)}},[20] and that this bound is tight.[21]
Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence are introduced: restricted min-wise independent permutations families, and approximate min-wise independent families.
Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at mostk.[22]Approximate min-wise independence differs from the property by at most a fixedε.[23]
Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts.[24] The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.[25]
TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications.[18] The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.
An implementation of TLSH is available as open-source software.[26]
The random projection method of LSH due to Moses Charikar,[8] called SimHash (also sometimes called arccos[27]), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.[8]
The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.
Given an input vector v and a hyperplane defined by r, we let {\displaystyle h(v)=\operatorname {sgn}(v\cdot r)}. That is, {\displaystyle h(v)=\pm 1} depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function {\displaystyle h(v)}.
For two vectors u, v with angle {\displaystyle \theta (u,v)} between them, it can be shown that {\displaystyle \Pr[h(u)=h(v)]=1-{\frac {\theta (u,v)}{\pi }}.}
Since the ratio between {\displaystyle {\frac {\theta (u,v)}{\pi }}} and {\displaystyle 1-\cos(\theta (u,v))} is at least 0.439 when {\displaystyle \theta (u,v)\in [0,\pi ]},[8][28] the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the cosine distance between them.
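The collision probability 1 − θ(u, v)/π can be checked by sampling random hyperplane normals, assuming NumPy (the angle is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.pi / 3
u = np.array([1.0, 0.0])
v = np.array([np.cos(theta), np.sin(theta)])

trials = 20000
r = rng.standard_normal((trials, 2))     # random hyperplane normals
# Fraction of hyperplanes putting u and v on the same side.
same_side = np.mean(np.sign(r @ u) == np.sign(r @ v))
```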
The hash function[29] {\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }}):{\mathcal {R}}^{d}\to {\mathcal {N}}} maps a d-dimensional vector {\displaystyle {\boldsymbol {\upsilon }}} onto the set of integers. Each hash function in the family is indexed by a choice of random {\displaystyle \mathbf {a} } and {\displaystyle b}, where {\displaystyle \mathbf {a} } is a d-dimensional vector with entries chosen independently from a stable distribution and {\displaystyle b} is a real number chosen uniformly from the range [0, r]. For a fixed {\displaystyle \mathbf {a} ,b} the hash function {\displaystyle h_{\mathbf {a} ,b}} is given by {\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }})=\left\lfloor {\frac {\mathbf {a} \cdot {\boldsymbol {\upsilon }}+b}{r}}\right\rfloor }.
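A quick empirical look at this hash family for the Euclidean (2-stable, Gaussian) case, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 16, 4.0
v = rng.standard_normal(d)
w = v + 0.01 * rng.standard_normal(d)   # a nearby point

def h(u, a, b):
    """The floor hash h_{a,b}(u) = floor((a . u + b) / r)."""
    return int(np.floor((a @ u + b) / r))

trials = 2000
near_collisions = 0
for _ in range(trials):
    a = rng.standard_normal(d)          # entries from the 2-stable Gaussian
    b = rng.uniform(0, r)               # offset uniform in [0, r)
    near_collisions += h(v, a, b) == h(w, a, b)
collision_rate = near_collisions / trials
```

Since the projected gap between v and w is small relative to the bucket width r, the two points fall into the same bucket for the vast majority of hash draws.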
Other construction methods for hash functions have been proposed to better fit the data.[30] In particular, k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity.[31] The hashcodes are found via training of an artificial neural network or graphical model.[citation needed]
One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family {\displaystyle {\mathcal {F}}}. The algorithm has two main parameters: the width parameter k and the number of hash tables L.
In the first step, we define a new family {\displaystyle {\mathcal {G}}} of hash functions g, where each function g is obtained by concatenating k functions {\displaystyle h_{1},\ldots ,h_{k}} from {\displaystyle {\mathcal {F}}}, i.e., {\displaystyle g(p)=[h_{1}(p),\ldots ,h_{k}(p)]}. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from {\displaystyle {\mathcal {F}}}. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.
In the preprocessing step we hash all n d-dimensional points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per each hash table to {\displaystyle O(n)} using standard hash functions.
Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found.
Given the parameters k and L, the algorithm has the following performance guarantees:
For a fixed approximation ratio {\displaystyle c=1+\epsilon } and probabilities {\displaystyle P_{1}} and {\displaystyle P_{2}}, one can set {\displaystyle k=\left\lceil {\tfrac {\log n}{\log 1/P_{2}}}\right\rceil } and {\displaystyle L=\lceil P_{1}^{-k}\rceil =O(n^{\rho }P_{1}^{-1})}, where {\displaystyle \rho ={\tfrac {\log P_{1}}{\log P_{2}}}}. Then one obtains the following performance guarantees:
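The table construction described above can be sketched for the bit-sampling family; the class name and all parameters below are illustrative:

```python
import numpy as np
from collections import defaultdict

class HammingLSH:
    """Minimal bit-sampling LSH index: L tables, each keyed by k
    randomly sampled coordinates of the binary input vector."""
    def __init__(self, d, k, L, seed=0):
        rng = np.random.default_rng(seed)
        self.projections = [rng.choice(d, size=k, replace=False)
                            for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def insert(self, idx, v):
        for proj, table in zip(self.projections, self.tables):
            table[tuple(v[proj])].append(idx)

    def query(self, q):
        """Union of the buckets q falls into, one per table."""
        candidates = set()
        for proj, table in zip(self.projections, self.tables):
            candidates.update(table.get(tuple(q[proj]), []))
        return candidates

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(100, 64))
index = HammingLSH(d=64, k=8, L=10)
for i, v in enumerate(data):
    index.insert(i, v)

q = data[17].copy()
q[[3, 40]] ^= 1          # a near neighbor: Hamming distance 2
candidates = index.query(q)
```

With these parameters, a point at Hamming distance 2 collides with its original in any single table with probability about 0.76, so across 10 tables it is retrieved almost surely.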
When t is large, it is possible to reduce the hashing time from {\displaystyle O(n^{\rho })}.
This was shown by[32] and,[33] which gave
It is also sometimes the case that the factor {\displaystyle 1/P_{1}} can be very large.
This happens, for example, with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query.
In [34] it was shown how to reduce the query time to {\displaystyle O(n^{\rho }/P_{1}^{1-\rho })} (not including hashing costs) and similarly the space usage.
https://en.wikipedia.org/wiki/Locality-sensitive_hashing
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.
The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified.
The principal components of a collection of points in a real coordinate space are a sequence of p unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.[1]
Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science.[2]
When performing PCA, the first principal component of a set of p variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through p iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.
The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i − 1 principal components that maximizes the variance of the projected data.
For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[3][4][5][6] Robust and L1-norm-based variants of standard PCA have also been proposed.[7][8][9][6]
PCA was invented in 1901 by Karl Pearson,[10] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[11] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century[12]), eigenvalue decomposition (EVD) of XᵀX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[13] Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.
PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.
To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
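The steps just described (center, form the covariance matrix, eigendecompose, normalize, and compute variance proportions) can be sketched in a few lines of NumPy; the function and variable names are illustrative:

```python
import numpy as np

def pca_eig(X):
    """PCA via eigendecomposition of the covariance matrix (rows = observations)."""
    Xc = X - X.mean(axis=0)               # center each variable on 0
    C = np.cov(Xc, rowvar=False)          # covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns unit eigenvectors, ascending
    order = np.argsort(eigvals)[::-1]     # reorder by descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals / eigvals.sum()   # proportion of variance per axis
    return eigvals, eigvecs, explained
```

`np.linalg.eigh` already returns orthonormal eigenvectors for a symmetric matrix, so the normalization step comes for free.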
Biplots and scree plots (degree of explained variance) are used to interpret findings of the PCA.
PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[13]
Consider an n×p data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of size l of p-dimensional vectors of weights or coefficients {\displaystyle \mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}} that map each row vector {\displaystyle \mathbf {x} _{(i)}=(x_{1},\dots ,x_{p})_{(i)}} of X to a new vector of principal component scores {\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}, given by
in such a way that the individual variables {\displaystyle t_{1},\dots ,t_{l}} of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where l is usually selected to be strictly less than p to reduce dimensionality).
The above may equivalently be written in matrix form as
where {\displaystyle {\mathbf {T} }_{ik}={t_{k}}_{(i)}}, {\displaystyle {\mathbf {X} }_{ij}={x_{j}}_{(i)}}, and {\displaystyle {\mathbf {W} }_{jk}={w_{j}}_{(k)}}.
In order to maximize variance, the first weight vector w(1) thus has to satisfy
Equivalently, writing this in matrix form gives
Since w(1) has been defined to be a unit vector, it equivalently also satisfies
The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XᵀX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.
With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i) ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) ⋅ w(1)} w(1).
The k-th component can be found by subtracting the first k − 1 principal components from X:
and then finding the weight vector which extracts the maximum variance from this new data matrix
It turns out that this gives the remaining eigenvectors of XᵀX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XᵀX.
The k-th principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) ⋅ w(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x(i) ⋅ w(k)} w(k), where w(k) is the k-th eigenvector of XᵀX.
The full principal components decomposition of X can therefore be given as
where W is a p-by-p matrix of weights whose columns are the eigenvectors of XᵀX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in Factor analysis.
XᵀX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset Xᵀ.[13]: 30–31
The sample covariance Q between two of the different principal components over the dataset is given by:
where the eigenvalue property of w(k) has been used to move from line 2 to line 3. However, eigenvectors w(j) and w(k) corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.
In matrix form, the empirical covariance matrix for the original variables can be written
The empirical covariance matrix between the principal components becomes
where Λ is the diagonal matrix of eigenvalues λ(k) of XᵀX. λ(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ(k) = Σi tk(i)² = Σi (x(i) ⋅ w(k))².
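These identities are easy to check numerically: for illustrative random data, TᵀT is diagonal with entries λ(k).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X -= X.mean(axis=0)               # column-wise zero empirical mean

lam, W = np.linalg.eigh(X.T @ X)  # eigenvectors of X^T X
lam, W = lam[::-1], W[:, ::-1]    # descending eigenvalues

T = X @ W                         # principal component scores
G = T.T @ T                       # should equal the diagonal matrix Lambda
```

Off-diagonal entries of G vanish (no sample covariance between different PCs), and the diagonal reproduces the eigenvalues.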
The transformation T = XW maps a data vector x(i) from an original space of x variables to a new space of p variables which are uncorrelated over the dataset.
To non-dimensionalize the centered data, let Xc represent the characteristic values of data vectors Xi, given by:
for a dataset of size n. These norms are used to transform the original space of variables x, y to a new space of uncorrelated variables p, q (given Yc with the same meaning), such that {\displaystyle p_{i}={\frac {X_{i}}{X_{c}}},\quad q_{i}={\frac {Y_{i}}{Y_{c}}}};
and the new variables are linearly related as: {\displaystyle q=\alpha p}.
To find the optimal linear relationship, we minimize the total squared perpendicular reconstruction error: {\displaystyle E(\alpha )={\frac {1}{1+\alpha ^{2}}}\sum _{i=1}^{n}(\alpha p_{i}-q_{i})^{2}}; setting the derivative of the error function to zero ({\displaystyle E'(\alpha )=0}) yields: {\displaystyle \alpha ={\frac {1}{2}}\left(-\lambda \pm {\sqrt {\lambda ^{2}+4}}\right)} where {\displaystyle \lambda ={\frac {p\cdot p-q\cdot q}{p\cdot q}}}.[14]
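The closed-form slope can be checked against a brute-force search over α. The data and the grid below are illustrative; the error uses the standard perpendicular-distance normalization 1/(1 + α²).

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.normal(size=100)
q = 0.7 * p + 0.1 * rng.normal(size=100)    # two roughly linearly related variables

lam = (p @ p - q @ q) / (p @ q)
alpha = 0.5 * (-lam + np.sqrt(lam**2 + 4))  # the root that minimizes E here (p.q > 0)

def E(a):
    """Total squared perpendicular reconstruction error for slope a."""
    return np.sum((a * p - q) ** 2) / (1 + a ** 2)

grid = np.linspace(-5, 5, 20001)
errs = np.array([E(a) for a in grid])
best = grid[np.argmin(errs)]                # brute-force minimizer of E
```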
Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.
Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.
Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less: the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.[15]
The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X,
Here Σ is an n-by-p rectangular diagonal matrix of positive numbers σ(k), called the singular values of X; U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X; and W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X.
In terms of this factorization, the matrix XᵀX can be written
where {\displaystyle \mathbf {\hat {\Sigma }} } is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies {\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{\mathsf {T}}\mathbf {\Sigma } }. Comparison with the eigenvector factorization of XᵀX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XᵀX, while the singular values σ(k) of X are equal to the square roots of the eigenvalues λ(k) of XᵀX.
Using the singular value decomposition the score matrix T can be written
so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T.
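Both routes to the scores, T = UΣ and T = XW, and the relation σ(k)² = λ(k) can be verified directly on illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
X -= X.mean(axis=0)

U, s, Wt = np.linalg.svd(X, full_matrices=False)  # X = U Sigma W^T
T_svd = U * s                                     # columns of U scaled by sigma(k)
T_proj = X @ Wt.T                                 # the same scores via T = X W

lam = np.linalg.eigvalsh(X.T @ X)[::-1]           # eigenvalues of X^T X, descending
```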
Efficient algorithms exist to calculate the SVD of X without having to form the matrix XᵀX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,[16] unless only a handful of components are required.
As with the eigen-decomposition, a truncated n×L score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors:
The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank L to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem [1936].
Theorem (Optimal k-dimensional fit). Let P be an n×m data matrix whose columns have been mean-centered and scaled, and let {\displaystyle P=U\,\Sigma \,V^{T}} be its singular value decomposition. Then the best rank-k approximation to P in the least-squares (Frobenius-norm) sense is {\displaystyle P_{k}=U_{k}\,\Sigma _{k}\,V_{k}^{T}},
where Vk consists of the first k columns of V. Moreover, the relative residual variance is {\displaystyle R(k)={\frac {\sum _{j=k+1}^{m}\sigma _{j}^{2}}{\sum _{j=1}^{m}\sigma _{j}^{2}}}}.[14]
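A short numerical check of the truncation result and of R(k), using only NumPy's SVD on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Xk = (U[:, :k] * s[:k]) @ Vt[:k, :]      # rank-k truncation U_k Sigma_k V_k^T

err = np.linalg.norm(X - Xk, 'fro')      # Frobenius error of the truncation
Rk = np.sum(s[k:] ** 2) / np.sum(s ** 2) # relative residual variance R(k)

# any other rank-k matrix, e.g. a random one, does at least as badly (Eckart-Young)
B = rng.normal(size=(40, k)) @ rng.normal(size=(k, 6))
```

The squared truncation error equals the sum of the discarded squared singular values, which is exactly the numerator of R(k).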
The singular values (in Σ) are the square roots of the eigenvalues of the matrix XᵀX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.
PCA is sensitive to the scaling of the variables. Mathematically this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.[14]
Let {\displaystyle \mathbf {X} _{\text{c}}} be the centered data matrix (n rows, p columns) and define the covariance {\displaystyle \Sigma ={\frac {1}{n}}\,\mathbf {X} _{\text{c}}^{\mathsf {T}}\mathbf {X} _{\text{c}}.} If the j-th variable is multiplied by a factor {\displaystyle \alpha _{j}} we obtain {\displaystyle \mathbf {X} _{\text{c}}^{(\alpha )}=\mathbf {X} _{\text{c}}D,\qquad D=\operatorname {diag} (\alpha _{1},\ldots ,\alpha _{p}).} Hence the new covariance is {\displaystyle \Sigma ^{(\alpha )}=D^{\mathsf {T}}\,\Sigma \,D.}
Because {\displaystyle \Sigma ^{(\alpha )}=D^{\mathsf {T}}\Sigma D} generally has different eigenvalues and eigenvectors than {\displaystyle \Sigma }, the principal axes rotate toward any column whose variance has been inflated, exactly as the 2-D example below illustrates.
If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
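This example can be reproduced numerically: multiplying one variable by 100 makes the first principal component collapse onto it. The data generation below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + 0.3 * rng.normal(size=500)       # two correlated variables, similar variance
X = np.column_stack([x, y])

def first_pc(M):
    """Unit eigenvector of the largest eigenvalue of the sample covariance."""
    Mc = M - M.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(Mc, rowvar=False))
    return vecs[:, -1]                   # eigh sorts eigenvalues ascending

w_equal = first_pc(X)                    # near 45 degrees: comparable weights
w_scaled = first_pc(X * np.array([100.0, 1.0]))  # first variable inflated 100x
```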
Classical PCA assumes the cloud of points has already been translated so its centroid is at the origin.[14]
Write each observation as {\displaystyle \mathbf {q} _{i}={\boldsymbol {\mu }}+\mathbf {z} _{i},\qquad {\boldsymbol {\mu }}={\tfrac {1}{n}}\sum _{i=1}^{n}\mathbf {q} _{i}.}
Without subtracting {\displaystyle {\boldsymbol {\mu }}} we are in effect diagonalising
{\displaystyle \Sigma _{\text{unc}}\;=\;{\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}\;+\;{\tfrac {1}{n}}\,\mathbf {Z} ^{\mathsf {T}}\mathbf {Z} ,}
where {\displaystyle \mathbf {Z} } is the centered matrix (the cross terms vanish because the rows of {\displaystyle \mathbf {Z} } sum to zero).
The rank-one term {\displaystyle {\boldsymbol {\mu }}{\boldsymbol {\mu }}^{\mathsf {T}}} often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centred part {\displaystyle \mathbf {Z} }.
After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.
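A quick numerical illustration of this effect (the offset and data are assumed for the demonstration): diagonalising the uncentred matrix QᵀQ makes the leading eigenvector nearly parallel to the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([10.0, -5.0, 3.0])           # large offset from the origin
Q = mu + rng.normal(size=(200, 3))         # uncentred cloud of points

_, vecs = np.linalg.eigh(Q.T @ Q)          # diagonalise without mean subtraction
v_unc = vecs[:, -1]                        # leading eigenvector

cos_angle = abs(v_unc @ (mu / np.linalg.norm(mu)))  # alignment with the mean
```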
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: Pearson Product-Moment Correlation). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since covariances are correlations of normalized variables (Z- or standard-scores), a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X.
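This equivalence is easy to confirm: the eigenvalues of the correlation matrix of X coincide with those of the covariance matrix of the standardized data Z (the mixed-scale data below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)) * np.array([1.0, 10.0, 0.1, 5.0])  # mixed scales

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardized version of X

corr_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
cov_eig = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
```

Note the `ddof=1` in the standardization, matching the sample-covariance convention used by `np.cov`.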
PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability.[17] It has nevertheless been used to quantify the distance between two or more classes by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers of mass.[18] Linear discriminant analysis is an alternative which is optimized for class separability.
Some properties of PCA include:[13][page needed]
The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection.
Before we look at its usage, we first look at diagonal elements,
Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} from each PC. Although not strictly decreasing, the elements of {\displaystyle \lambda _{k}\alpha _{k}\alpha _{k}'} will tend to become smaller as k increases, as {\displaystyle \lambda _{k}} is nonincreasing for increasing k, whereas the elements of {\displaystyle \alpha _{k}} tend to stay about the same size because of the normalization constraints: {\displaystyle \alpha _{k}'\alpha _{k}=1,\quad k=1,\dots ,p}.
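The decomposition of the whole covariance matrix into per-component contributions λ_k α_k α_k′ can be verified by summing the rank-one terms (random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3)) * np.array([3.0, 1.0, 0.3])
S = np.cov(X, rowvar=False)                 # sample covariance matrix

lam, A = np.linalg.eigh(S)                  # eigenpairs, ascending order
lam, A = lam[::-1], A[:, ::-1]              # reorder to descending

# summing the rank-one contributions lam_k * a_k a_k' rebuilds S exactly
S_rebuilt = sum(lam[k] * np.outer(A[:, k], A[:, k]) for k in range(3))
```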
As noted above, the results of PCA depend on the scaling of the variables. This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unit variance.[19]
The applicability of PCA as described above is limited by certain (tacit) assumptions[20] made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA).
Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes,[21] and forward modeling has to be performed to recover the true magnitude of the signals.[22] As an alternative method, non-negative matrix factorization, focusing only on the non-negative elements in the matrices, is well-suited for astrophysical observations.[23][24][25] See more at the relation between PCA and non-negative matrix factorization.
PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms the original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.[26]
PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress.[27][page needed]Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted".[28]The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".[28]
Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models.
Under the assumption that
that is, that the data vector {\displaystyle \mathbf {x} } is the sum of the desired information-bearing signal {\displaystyle \mathbf {s} } and a noise signal {\displaystyle \mathbf {n} }, one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view.
In particular, Linsker showed that if {\displaystyle \mathbf {s} } is Gaussian and {\displaystyle \mathbf {n} } is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information {\displaystyle I(\mathbf {y} ;\mathbf {s} )} between the desired information {\displaystyle \mathbf {s} } and the dimensionality-reduced output {\displaystyle \mathbf {y} =\mathbf {W} _{L}^{T}\mathbf {x} }.[29]
If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector {\displaystyle \mathbf {n} } are iid), but the information-bearing signal {\displaystyle \mathbf {s} } is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the information loss, which is defined as[30][31]
The optimality of PCA is also preserved if the noise {\displaystyle \mathbf {n} } is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal {\displaystyle \mathbf {s} }.[32] In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise {\displaystyle \mathbf {n} } becomes dependent.
The following is a detailed description of PCA using the covariance method[33] as opposed to the correlation method.[34]
The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X:
{\displaystyle \mathbf {Y} =\mathbb {KLT} \{\mathbf {X} \}}
Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further that the data are arranged as a set of n data vectors {\displaystyle \mathbf {x} _{1}\ldots \mathbf {x} _{n}} with each {\displaystyle \mathbf {x} _{i}} representing a single grouped observation of the p variables.
Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data.[35] Hence we proceed by centering the data as follows:
In some applications, each variable (column of B) may also be scaled to have a variance equal to 1 (see Z-score).[36] This step affects the calculated principal components, but makes them independent of the units used to measure the different variables.
Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean.
We want to find {\displaystyle (\ast )} a d×d orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated).
A quick computation assuming {\displaystyle P} were unitary yields:
Hence {\displaystyle (\ast )} holds if and only if {\displaystyle \operatorname {cov} (X)} is diagonalisable by {\displaystyle P}.
This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix.
In practical implementations, especially with high dimensional data (large p), the naive covariance method is rarely used because it is not efficient due to the high computational and memory costs of explicitly determining the covariance matrix. The covariance-free approach avoids the np² operations of explicitly calculating and storing the covariance matrix XᵀX, instead utilizing one of the matrix-free methods, for example, based on the function evaluating the product Xᵀ(X r) at the cost of 2np operations.
One way to compute the first principal component efficiently[41] is shown in the following pseudo-code, for a data matrix X with zero mean, without ever computing its covariance matrix.
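A minimal Python rendering of this covariance-free power iteration (the initialization, iteration count, and names are illustrative choices):

```python
import numpy as np

def first_component(X, iterations=200):
    """First principal component of zero-mean X by covariance-free power iteration.

    Only products X^T (X r) are formed; the p-by-p covariance matrix is never built.
    """
    rng = np.random.default_rng(0)
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)                 # random unit start vector
    for _ in range(iterations):
        s = X.T @ (X @ r)                  # 2np operations per iteration
        r = s / np.linalg.norm(s)          # normalize and place the result back in r
    eigenvalue = r @ (X.T @ (X @ r))       # Rayleigh quotient r^T (X^T X) r
    return r, eigenvalue
```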
This power iteration algorithm simply calculates the vector Xᵀ(X r), normalizes, and places the result back in r. The eigenvalue is approximated by rᵀ(XᵀX) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix XᵀX. If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at the total cost 2cnp. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single vectors r and s with block vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the product Xᵀ(X R). Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique.
Non-linear iterative partial least squares (NIPALS)is a variant the classicalpower iterationwith matrix deflation by subtraction implemented for computing the first few components in a principal component orpartial least squaresanalysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example,genomics,metabolomics) it is usually only necessary to compute the first few PCs. Thenon-linear iterative partial least squares(NIPALS) algorithm updates iterative approximations to the leading scores and loadingst1andr1Tby thepower iterationmultiplying on every iteration byXon the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations toXTX, based on the function evaluating the productXT(X r)=((X r)TX)T.
The matrix deflation by subtraction is performed by subtracting the outer product t1 r1^T from X, leaving the deflated residual matrix used to calculate the subsequent leading PCs.[42] For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine-precision round-off errors accumulated in each iteration and matrix deflation by subtraction.[43] A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.[44] NIPALS' reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values; both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
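As a rough illustration of NIPALS with deflation and Gram–Schmidt re-orthogonalization (the function name, the starting-column heuristic, and the convergence test are assumptions of this sketch, not a reference implementation):

```python
import numpy as np

def nipals_pca(X, n_components, max_iter=500, tol=1e-12):
    """NIPALS sketch: extract leading principal components of a centered
    matrix X one at a time, deflating X by the outer product t r^T after
    each component; loadings are re-orthogonalized against earlier ones
    (Gram-Schmidt) to limit round-off drift."""
    X = X.copy()
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, np.argmax(np.var(X, axis=0))].copy()  # heuristic start: highest-variance column
        for _ in range(max_iter):
            r = X.T @ t / (t @ t)          # loading: regress columns of X on the score t
            for r_prev in loadings:        # Gram-Schmidt re-orthogonalization
                r -= (r @ r_prev) * r_prev
            r /= np.linalg.norm(r)
            t_new = X @ r                  # score: project X onto the loading r
            converged = np.linalg.norm(t_new - t) <= tol * np.linalg.norm(t_new)
            t = t_new
            if converged:
                break
        X -= np.outer(t, r)                # matrix deflation by subtraction
        scores.append(t)
        loadings.append(r)
    return np.array(scores).T, np.array(loadings)
```

Each inner loop is exactly the single-vector power iteration of the previous section, applied to the progressively deflated matrix.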
In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.[45]
In PCA, it is common to introduce qualitative variables as supplementary elements. For example, many quantitative variables may have been measured on plants, and for these plants some qualitative variables are also available, such as the species to which each plant belongs. These data are subjected to PCA for the quantitative variables; when analyzing the results, it is natural to connect the principal components to the qualitative variable species.
For this, the following results are produced.
These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in Husson, Lê, & Pagès (2009) and Pagès (2013).
Few software packages offer this option in an "automatic" way. Among them are SPAD, which historically, following the work of Ludovic Lebart, was the first to propose this option, and the R package FactoMineR.
The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components, such as spatial intelligence, verbal intelligence, induction, and deduction, and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.[46]
In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s.[47] Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics, which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms.
One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.[48]
About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.[49]
PCA can be used as a formal method for the development of indexes. As an alternative, confirmatory composite analysis has been proposed to develop and assess indexes.[50]
The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city.
The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies,[51] has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA.
In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events.
Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.[52]
PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measure of standard error in PCA is also an impediment to more consistent usage. In August 2022, the molecular biologist Eran Elhaik published a theoretical paper in Scientific Reports analyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking and circular reasoning.[53]
Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.[54]
PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.[55]
Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.[56]
In quantitative finance, PCA is used[57] in financial risk management, and has been applied to other problems such as portfolio optimization.
PCA is commonly used in problems involving fixed income securities and portfolios, and interest rate derivatives. Valuations here depend on the entire yield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,[58] thereby facilitating the modelling.
One common risk management application is to calculate value at risk (VaR), applying PCA to the Monte Carlo simulation.[59] Here, for each simulation-sample, the components are stressed, and rates, and in turn option values, are then reconstructed; with VaR calculated, finally, over the entire run.
PCA is also used in hedging exposure to interest rate risk, given partial durations and other sensitivities.[58] Under both, typically the first three principal components of the system are of interest (representing "shift", "twist", and "curvature").
These principal components are derived from an eigen-decomposition of the covariance matrix of yields at predefined maturities;[60] the variance of each component is its eigenvalue (and as the components are orthogonal, no correlation need be incorporated in subsequent modelling).
For equity, an optimal portfolio is one where the expected return is maximized for a given level of risk, or alternatively, where risk is minimized for a given return; see the Markowitz model for discussion.
Thus, one approach is to reduce portfolio risk, where allocation strategies are applied to the "principal portfolios" instead of the underlying stocks.
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.[61][62] PCA has also been used to understand relationships[57] between international equity markets, and within markets between groups of companies in industries or sectors.
PCA may also be applied to stress testing,[63] essentially an analysis of a bank's ability to endure a hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several] macroeconomic variables into a more manageable data set, which can then [be used] for analysis."[63] Here, the resulting factors are linked to, e.g., interest rates – based on the largest elements of the factor's eigenvector – and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks.
A variant of principal components analysis is used in neuroscience to identify the specific properties of a stimulus that increase a neuron's probability of generating an action potential.[64][65] This technique is known as spike-triggered covariance analysis. In a typical application an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the spike-triggered ensemble, the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the prior stimulus ensemble (the set of all stimuli, defined over the same length time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought-after relevant stimulus features.
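The computation described above reduces to an eigendecomposition of a difference of two covariance matrices. A minimal sketch (the helper name and the input layout, one discretized stimulus window per row, are assumptions of this illustration):

```python
import numpy as np

def stc_features(stimuli, spike_indices, n_features=2):
    """Spike-triggered covariance sketch: 'stimuli' holds one discretized
    stimulus window per row; 'spike_indices' marks the windows that
    immediately preceded a spike. Returns the eigenvalues/eigenvectors of
    the difference between the spike-triggered covariance and the prior
    stimulus covariance, ordered by eigenvalue magnitude."""
    prior = np.cov(stimuli, rowvar=False)                     # prior stimulus ensemble
    triggered = np.cov(stimuli[spike_indices], rowvar=False)  # spike-triggered ensemble
    evals, evecs = np.linalg.eigh(triggered - prior)
    order = np.argsort(np.abs(evals))[::-1][:n_features]
    return evals[order], evecs[:, order]
```

Eigenvectors with large positive eigenvalues are the directions along which spiking stimuli varied more than the prior ensemble did.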
In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons.
PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is, order parameters, during phase transitions in the brain.[66]
Correspondence analysis (CA) was developed by Jean-Paul Benzécri[67] and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables.
CA decomposes the chi-squared statistic associated to this table into orthogonal factors.[68] Because CA is a descriptive technique, it can be applied to tables whether or not the chi-squared statistic is appropriate.
Several variants of CA are available, including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.[69]
Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors.
Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance".[70] In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations.[13]: 158 Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) or causal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.[71]
It has been asserted that the relaxed solution of k-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace.[72][73] However, that PCA is a useful relaxation of k-means clustering was not a new result,[74] and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[75]
Non-negative matrix factorization (NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy,[23][24][25] in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore construct a non-orthogonal basis.
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data.[21] For NMF, its components are ranked based only on the empirical FRV curves.[25] The residual fractional eigenvalue plots, that is, 1−∑i=1kλi/∑j=1nλj{\displaystyle 1-\sum _{i=1}^{k}\lambda _{i}{\Big /}\sum _{j=1}^{n}\lambda _{j}} as a function of component number k{\displaystyle k} given a total of n{\displaystyle n} components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, then the curves drop quickly as an indication of over-fitting (random noise).[21] The FRV curves for NMF decrease continuously[25] when the NMF components are constructed sequentially,[24] indicating the continuous capturing of quasi-static noise; they then converge to higher levels than PCA,[25] indicating the less over-fitting property of NMF.
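The residual fractional eigenvalue defined above is straightforward to compute from a sorted eigenvalue spectrum; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def frv_curve(eigenvalues):
    """Residual fractional eigenvalue 1 - (sum_{i<=k} lambda_i) / (sum_j lambda_j)
    for k = 1..n, with the eigenvalues taken in decreasing order."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    return 1.0 - np.cumsum(lam) / lam.sum()
```

For example, eigenvalues (3, 1) give the curve (0.25, 0.0): the first component leaves a quarter of the total variance unexplained, and both together leave none.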
It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane.
The iconography of correlations, on the contrary, is not a projection onto a system of axes and does not have these drawbacks. We can therefore keep all the variables.
The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation).
A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable".
A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding a sparsity constraint on the input variables.
Several approaches have been proposed.
The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.[82]
Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Trevor Hastie expanded on this concept by proposing principal curves[86] as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for data approximation followed by projecting the points onto it. See also the elastic map algorithm and principal geodesic analysis.[87] Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel.
In multilinear subspace learning,[88][89][90] PCA is generalized to multilinear PCA (MPCA), which extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA.
N-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS.
While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.[91] For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand.
A recently proposed generalization of PCA[92]based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy.
Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).[7][5]
Robust principal component analysis (RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.[93][94][95]
Independent component analysis (ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations.
Given a matrix E{\displaystyle E}, it tries to decompose it into two matrices such that E=AP{\displaystyle E=AP}. A key difference from techniques such as PCA and ICA is that some of the entries of A{\displaystyle A} are constrained to be 0. Here P{\displaystyle P} is termed the regulatory layer. While in general such a decomposition can have multiple solutions, they prove that if the following conditions are satisfied:
then the decomposition is unique up to multiplication by a scalar.[96]
Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components, variation between groups and variation within groups, and DAPC maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups.[97] In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA).
A DAPC can be realized on R using the package Adegenet. (more info: adegenet on the web)
Directional component analysis (DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.[98] Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets.
Also like PCA, it is based on a covariance matrix derived from the input dataset.
The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact.
Whereas PCA maximises explained variance, DCA maximises probability density given impact.
The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact).
DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles,[99] and the most likely and most impactful changes in rainfall due to climate change.[100]
In mathematics, the polar decomposition of a square real or complex matrix A{\displaystyle A} is a factorization of the form A=UP{\displaystyle A=UP}, where U{\displaystyle U} is a unitary matrix, and P{\displaystyle P} is a positive semi-definite Hermitian matrix (U{\displaystyle U} is an orthogonal matrix, and P{\displaystyle P} is a positive semi-definite symmetric matrix in the real case), both square and of the same size.[1]
If a real n×n{\displaystyle n\times n} matrix A{\displaystyle A} is interpreted as a linear transformation of n{\displaystyle n}-dimensional space Rn{\displaystyle \mathbb {R} ^{n}}, the polar decomposition separates it into a rotation or reflection U{\displaystyle U} of Rn{\displaystyle \mathbb {R} ^{n}} and a scaling of the space along a set of n{\displaystyle n} orthogonal axes.
The polar decomposition of a square matrix A{\displaystyle A} always exists. If A{\displaystyle A} is invertible, the decomposition is unique, and the factor P{\displaystyle P} will be positive-definite. In that case, A{\displaystyle A} can be written uniquely in the form A=UeX{\displaystyle A=Ue^{X}}, where U{\displaystyle U} is unitary, and X{\displaystyle X} is the unique self-adjoint logarithm of the matrix P{\displaystyle P}.[2] This decomposition is useful in computing the fundamental group of (matrix) Lie groups.[3]
The polar decomposition can also be defined as A=P′U{\displaystyle A=P'U}, where P′=UPU−1{\displaystyle P'=UPU^{-1}} is a symmetric positive-definite matrix with the same eigenvalues as P{\displaystyle P} but different eigenvectors.
The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number z{\displaystyle z} as z=ur{\displaystyle z=ur}, where r{\displaystyle r} is its absolute value (a non-negative real number), and u{\displaystyle u} is a complex number with unit norm (an element of the circle group).
The definition A=UP{\displaystyle A=UP} may be extended to rectangular matrices A∈Cm×n{\displaystyle A\in \mathbb {C} ^{m\times n}} by requiring U∈Cm×n{\displaystyle U\in \mathbb {C} ^{m\times n}} to be a semi-unitary matrix, and P∈Cn×n{\displaystyle P\in \mathbb {C} ^{n\times n}} to be a positive-semidefinite Hermitian matrix. The decomposition always exists, and P{\displaystyle P} is always unique. The matrix U{\displaystyle U} is unique if and only if A{\displaystyle A} has full rank.[4]
A real square m×m{\displaystyle m\times m} matrix A{\displaystyle A} can be interpreted as the linear transformation of Rm{\displaystyle \mathbb {R} ^{m}} that takes a column vector x{\displaystyle x} to Ax{\displaystyle Ax}. Then, in the polar decomposition A=RP{\displaystyle A=RP}, the factor R{\displaystyle R} is an m×m{\displaystyle m\times m} real orthogonal matrix. The polar decomposition then can be seen as expressing the linear transformation defined by A{\displaystyle A} as a scaling of the space Rm{\displaystyle \mathbb {R} ^{m}} along each eigenvector ei{\displaystyle e_{i}} of P{\displaystyle P} by a scale factor σi{\displaystyle \sigma _{i}} (the action of P{\displaystyle P}), followed by a rotation of Rm{\displaystyle \mathbb {R} ^{m}} (the action of R{\displaystyle R}).
Alternatively, the decomposition A=PR{\displaystyle A=PR} expresses the transformation defined by A{\displaystyle A} as a rotation (R{\displaystyle R}) followed by a scaling (P{\displaystyle P}) along certain orthogonal directions. The scale factors are the same, but the directions are different.
The polar decomposition of the complex conjugate of A{\displaystyle A} is given by A¯=U¯P¯.{\displaystyle {\overline {A}}={\overline {U}}{\overline {P}}.} Note that detA=detUdetP=eiθr{\displaystyle \det A=\det U\det P=e^{i\theta }r} gives the corresponding polar decomposition of the determinant of A, since detU=eiθ,{\displaystyle \det U=e^{i\theta },} and detP=r=|detA|.{\displaystyle \det P=r=|\det A|.} In particular, if A{\displaystyle A} has determinant 1, then both U{\displaystyle U} and P{\displaystyle P} have determinant 1.
The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as P=(A∗A)1/2,{\displaystyle P=(A^{*}A)^{1/2},} where A∗{\displaystyle A^{*}} denotes the conjugate transpose of A{\displaystyle A}. The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that A∗A{\displaystyle A^{*}A} is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root.[5] If A is invertible, then P is positive-definite, thus also invertible, and the matrix U is uniquely determined by U=AP−1.{\displaystyle U=AP^{-1}.}
In terms of the singular value decomposition (SVD) of A{\displaystyle A}, A=WΣV∗{\displaystyle A=W\Sigma V^{*}}, one has P=VΣV∗,U=WV∗,{\displaystyle {\begin{aligned}P&=V\Sigma V^{*},\\U&=WV^{*},\end{aligned}}} where U{\displaystyle U}, V{\displaystyle V}, and W{\displaystyle W} are unitary matrices (orthogonal if the field is the reals R{\displaystyle \mathbb {R} }). This confirms that P{\displaystyle P} is positive-definite, and U{\displaystyle U} is unitary. Thus, the existence of the SVD is equivalent to the existence of polar decomposition.
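The construction from the SVD can be sketched directly in NumPy (a minimal illustration; the function name is a choice of this sketch):

```python
import numpy as np

def polar_decomposition(A):
    """Right polar decomposition A = U P from the SVD A = W Sigma V*:
    U = W V* is unitary and P = V Sigma V* is positive semi-definite
    Hermitian. (The left factor P' = W Sigma W* would give A = P' U.)"""
    W, sigma, Vh = np.linalg.svd(A)
    U = W @ Vh
    P = Vh.conj().T @ np.diag(sigma) @ Vh
    return U, P
```

Multiplying back, U P = W V* V Σ V* = W Σ V* = A, and P inherits non-negative eigenvalues (the singular values of A) from Σ.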
One can also decompose A{\displaystyle A} in the form A=P′U.{\displaystyle A=P'U.} Here U{\displaystyle U} is the same as before, and P′{\displaystyle P'} is given by P′=UPU−1=(AA∗)1/2=WΣW∗.{\displaystyle P'=UPU^{-1}=(AA^{*})^{1/2}=W\Sigma W^{*}.} This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. Left polar decomposition is also known as reverse polar decomposition.
The polar decomposition of a square invertible real matrix A{\displaystyle A} is of the form A=[A]R,{\displaystyle A=[A]R,} where [A]≡(AAT)1/2{\displaystyle [A]\equiv \left(AA^{\mathsf {T}}\right)^{1/2}} is a positive-definite matrix, and R=[A]−1A{\displaystyle R=[A]^{-1}A} is an orthogonal matrix.
The matrix A{\displaystyle A} with polar decomposition A=UP{\displaystyle A=UP} is normal if and only if U{\displaystyle U} and P{\displaystyle P} commute (UP=PU{\displaystyle UP=PU}), or equivalently, they are simultaneously diagonalizable.
The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.
If A{\displaystyle A} is normal, then it is unitarily equivalent to a diagonal matrix: A=VΛV∗{\displaystyle A=V\Lambda V^{*}} for some unitary matrix V{\displaystyle V} and some diagonal matrix Λ.{\displaystyle \Lambda ~.} This makes the derivation of its polar decomposition particularly straightforward, as we can then write A=VΦΛ|Λ|V∗=(VΦΛV∗)⏟≡U(V|Λ|V∗)⏟≡P,{\displaystyle A=V\Phi _{\Lambda }|\Lambda |V^{*}=\underbrace {\left(V\Phi _{\Lambda }V^{*}\right)} _{\equiv U}\underbrace {\left(V|\Lambda |V^{*}\right)} _{\equiv P},}
where |Λ|{\displaystyle |\Lambda |} is the matrix of absolute diagonal values, and ΦΛ{\displaystyle \Phi _{\Lambda }} is a diagonal matrix containing the phases of the elements of Λ,{\displaystyle \Lambda ,} that is, (ΦΛ)ii≡Λii/|Λii|{\displaystyle (\Phi _{\Lambda })_{ii}\equiv \Lambda _{ii}/|\Lambda _{ii}|} when Λii≠0,{\displaystyle \Lambda _{ii}\neq 0,} and (ΦΛ)ii=0{\displaystyle (\Phi _{\Lambda })_{ii}=0} when Λii=0.{\displaystyle \Lambda _{ii}=0~.}
The polar decomposition is thus A=UP,{\displaystyle A=UP,} with U{\displaystyle U} and P{\displaystyle P} diagonal in the eigenbasis of A{\displaystyle A} and having eigenvalues equal to the phases and absolute values of those of A,{\displaystyle A,} respectively.
From the singular-value decomposition, it can be shown that a matrix A{\displaystyle A} is invertible if and only if A∗A{\displaystyle A^{*}A} (equivalently, AA∗{\displaystyle AA^{*}}) is. Moreover, this is true if and only if the eigenvalues of A∗A{\displaystyle A^{*}A} are all non-zero.[6]
In this case, the polar decomposition is directly obtained by writingA=A(A∗A)−1/2(A∗A)1/2,{\displaystyle A=A\left(A^{*}A\right)^{-1/2}\left(A^{*}A\right)^{1/2},}and observing thatA(A∗A)−1/2{\displaystyle A\left(A^{*}A\right)^{-1/2}}is unitary. To see this, we can exploit the spectral decomposition ofA∗A{\displaystyle A^{*}A}to writeA(A∗A)−1/2=AVD−1/2V∗{\displaystyle A\left(A^{*}A\right)^{-1/2}=AVD^{-1/2}V^{*}}.
In this expression,V∗{\displaystyle V^{*}}is unitary becauseV{\displaystyle V}is. To show that alsoAVD−1/2{\displaystyle AVD^{-1/2}}is unitary, we can use theSVDto writeA=WD1/2V∗{\displaystyle A=WD^{1/2}V^{*}}, so thatAVD−1/2=WD1/2V∗VD−1/2=W,{\displaystyle AVD^{-1/2}=WD^{1/2}V^{*}VD^{-1/2}=W,}where againW{\displaystyle W}is unitary by construction.
Yet another way to directly show the unitarity ofA(A∗A)−1/2{\displaystyle A\left(A^{*}A\right)^{-1/2}}is to note that, writing theSVDofA{\displaystyle A}in terms of rank-1 matrices asA=∑kskvkwk∗{\textstyle A=\sum _{k}s_{k}v_{k}w_{k}^{*}}, wheresk{\displaystyle s_{k}}are the singular values ofA{\displaystyle A}, we haveA(A∗A)−1/2=(∑jsjvjwj∗)(∑ksk−1wkwk∗)=∑kvkwk∗,{\displaystyle A\left(A^{*}A\right)^{-1/2}=\left(\sum _{j}s_{j}v_{j}w_{j}^{*}\right)\left(\sum _{k}s_{k}^{-1}w_{k}w_{k}^{*}\right)=\sum _{k}v_{k}w_{k}^{*},}which directly implies the unitarity ofA(A∗A)−1/2{\displaystyle A\left(A^{*}A\right)^{-1/2}}because a matrix is unitary if and only if its singular values are all equal to 1.
Note how, from the above construction, it follows thatthe unitary matrix in the polar decomposition of an invertible matrix is uniquely defined.
The SVD of a square matrixA{\displaystyle A}readsA=WD1/2V∗{\displaystyle A=WD^{1/2}V^{*}}, withW,V{\displaystyle W,V}unitary matrices, andD{\displaystyle D}a diagonal, positive semi-definite matrix. By simply inserting an additional pair ofW{\displaystyle W}s orV{\displaystyle V}s, we obtain the two forms of the polar decomposition ofA{\displaystyle A}:A=WD1/2V∗=(WD1/2W∗)⏟P(WV∗)⏟U=(WV∗)⏟U(VD1/2V∗)⏟P′.{\displaystyle A=WD^{1/2}V^{*}=\underbrace {\left(WD^{1/2}W^{*}\right)} _{P}\underbrace {\left(WV^{*}\right)} _{U}=\underbrace {\left(WV^{*}\right)} _{U}\underbrace {\left(VD^{1/2}V^{*}\right)} _{P'}.}More generally, ifA{\displaystyle A}is some rectangularn×m{\displaystyle n\times m}matrix, its SVD can be written asA=WD1/2V∗{\displaystyle A=WD^{1/2}V^{*}}where nowW{\displaystyle W}andV{\displaystyle V}are isometries with dimensionsn×r{\displaystyle n\times r}andm×r{\displaystyle m\times r}, respectively, wherer≡rank(A){\displaystyle r\equiv \operatorname {rank} (A)}, andD{\displaystyle D}is again a diagonal positive semi-definite square matrix with dimensionsr×r{\displaystyle r\times r}. We can now apply the same reasoning used in the above equation to writeA=PU=UP′{\displaystyle A=PU=UP'}, but nowU≡WV∗{\displaystyle U\equiv WV^{*}}is not in general unitary. Nonetheless,U{\displaystyle U}has the same support and range asA{\displaystyle A}, and it satisfiesU∗U=VV∗{\displaystyle U^{*}U=VV^{*}}andUU∗=WW∗{\displaystyle UU^{*}=WW^{*}}. This makesU{\displaystyle U}into an isometry when its action is restricted onto the support ofA{\displaystyle A}, that is, it means thatU{\displaystyle U}is apartial isometry.
As an explicit example of this more general case, consider the SVD of the following matrix:A≡(112−200)=(100100)⏟≡W(2008)⏟D(121212−12)⏟V†.{\displaystyle A\equiv {\begin{pmatrix}1&1\\2&-2\\0&0\end{pmatrix}}=\underbrace {\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}} _{\equiv W}\underbrace {\begin{pmatrix}{\sqrt {2}}&0\\0&{\sqrt {8}}\end{pmatrix}} _{\sqrt {D}}\underbrace {\begin{pmatrix}{\frac {1}{\sqrt {2}}}&{\frac {1}{\sqrt {2}}}\\{\frac {1}{\sqrt {2}}}&-{\frac {1}{\sqrt {2}}}\end{pmatrix}} _{V^{\dagger }}.}We then haveWV†=12(111−100){\displaystyle WV^{\dagger }={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\\0&0\end{pmatrix}}}which is an isometry, but not unitary. On the other hand, if we consider the decomposition ofA≡(100020)=(1001)(1002)(100010),{\displaystyle A\equiv {\begin{pmatrix}1&0&0\\0&2&0\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}{\begin{pmatrix}1&0\\0&2\end{pmatrix}}{\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}},}we findWV†=(100010),{\displaystyle WV^{\dagger }={\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}},}which is a partial isometry (but not an isometry).
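As a numerical check of this worked example (a NumPy sketch; variable names ours), the thin SVD reproduces the partial isometry computed by hand — here the singular values √8 and √2 are distinct and A has full column rank, so the factor WV† is unique:

```python
import numpy as np

a = np.array([[1., 1.], [2., -2.], [0., 0.]])
w, s, vt = np.linalg.svd(a, full_matrices=False)  # thin SVD: w is 3x2, vt is 2x2
u = w @ vt                                        # the factor W V†
p = vt.T @ np.diag(s) @ vt                        # the positive factor
assert np.allclose(a, u @ p)
assert np.allclose(u, np.array([[1, 1], [1, -1], [0, 0]]) / np.sqrt(2))
assert np.allclose(u @ u.T @ u, u)                # partial isometry: U U* U = U
assert not np.allclose(u @ u.T, np.eye(3))        # but U U* is not the identity
```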
Thepolar decompositionof anybounded linear operatorAbetween complexHilbert spacesis a canonical factorization as the product of apartial isometryand a non-negative operator.
The polar decomposition for matrices generalizes as follows: ifAis a bounded linear operator then there is a unique factorization ofAas a productA=UPwhereUis a partial isometry,Pis a non-negative self-adjoint operator and the initial space ofUis the closure of the range ofP.
The operatorUmust be weakened to a partial isometry, rather than unitary, because of the following issue. IfAis theone-sided shiftonl2(N), then |A| = (A∗A)1/2=I. So ifA=U|A|, thenUmust equalA, which is not unitary.
The existence of a polar decomposition is a consequence ofDouglas' lemma:
Lemma—IfA,Bare bounded operators on a Hilbert spaceH, andA*A≤B*B, then there exists a contractionCsuch thatA = CB. Furthermore,Cis unique if ker(B*) ⊂ ker(C).
The operatorCcan be defined byC(Bh) :=Ahfor allhinH, extended by continuity to the closure ofRan(B), and by zero on the orthogonal complement to all ofH. The lemma then follows sinceA*A≤B*Bimplies ker(B) ⊂ ker(A).
In particular, ifA∗A=B∗B, thenCis a partial isometry, which is unique if ker(B*) ⊂ ker(C).
In general, for any bounded operatorA,A∗A=(A∗A)1/2(A∗A)1/2,{\displaystyle A^{*}A=\left(A^{*}A\right)^{1/2}\left(A^{*}A\right)^{1/2},}where (A*A)1/2is the unique positive square root ofA*Agiven by the usualfunctional calculus. So by the lemma, we haveA=U(A∗A)1/2{\displaystyle A=U\left(A^{*}A\right)^{1/2}}for some partial isometryU, which is unique if ker(A*) ⊂ ker(U). TakePto be (A*A)1/2and one obtains the polar decompositionA=UP. Notice that an analogous argument can be used to showA = P'U', whereP'is positive andU'a partial isometry.
WhenHis finite-dimensional,Ucan be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version ofsingular value decomposition.
By property of thecontinuous functional calculus, |A| is in theC*-algebragenerated byA. A similar but weaker statement holds for the partial isometry:Uis in thevon Neumann algebragenerated byA. IfAis invertible, the polar partUwill be in theC*-algebraas well.
IfAis a closed, densely definedunbounded operatorbetween complex Hilbert spaces then it still has a (unique)polar decompositionA=U|A|,{\displaystyle A=U|A|,}where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain asA, andUis a partial isometry vanishing on the orthogonal complement of the range ran(|A|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B), andA*Ah=B*Bhfor allh∈ dom(A*A), then there exists a partial isometryUsuch thatA=UB.Uis unique if ran(B)⊥⊂ ker(U). The operatorAbeing closed and densely defined ensures that the operatorA*Ais self-adjoint (with dense domain) and therefore allows one to define (A*A)1/2. Applying the lemma gives polar decomposition.
If an unbounded operatorAisaffiliatedto a von Neumann algebraM, andA=UPis its polar decomposition, thenUis inMand so is the spectral projection ofP, 1B(P), for any Borel setBin[0, ∞).
The polar decomposition ofquaternionsH{\displaystyle \mathbb {H} }withorthonormal basisquaternions1,ı^,ȷ^,k^{\displaystyle 1,{\hat {\imath }},{\hat {\jmath }},{\hat {k}}}depends on the unit 2-dimensional spherer^∈{xı^+yȷ^+zk^∈H∖R:x2+y2+z2=1}{\displaystyle {\hat {r}}\in \{x{\hat {\imath }}+y{\hat {\jmath }}+z{\hat {k}}\in \mathbb {H} \setminus \mathbb {R} :x^{2}+y^{2}+z^{2}=1\}}ofsquare roots of minus one, known asright versors. Given anyr^{\displaystyle {\hat {r}}}on this sphere and an angle−π<a≤π,theversorear^=cosa+r^sina{\displaystyle e^{a{\hat {r}}}=\cos a+{\hat {r}}\sin a}is on the unit3-sphereofH.{\displaystyle \mathbb {H} .}Fora= 0anda=π,the versor is 1 or −1, regardless of whichr^{\displaystyle {\hat {r}}}is selected. Thenormtof a quaternionqis theEuclidean distancefrom the origin toq. When a quaternion is not just a real number, then there is auniquepolar decomposition:q=texp(ar^).{\displaystyle q=t\exp(a{\hat {r}}).}Herer^,a,tare all uniquely determined such thatr^is a right versor(r^2= −1),asatisfies0 <a<π,andt> 0.
In theCartesian plane, alternative planarringdecompositions arise for thedual numbers, thesplit-complex numbers, and the ordinarycomplex numbers. Polar decomposition of an element of thealgebraM(2, R) of 2 × 2 real matrices uses these alternative planar decompositions, since any planarsubalgebrais isomorphic to one of these three rings.
To compute an approximation of the polar decompositionA=UP, usually the unitary factorUis approximated.[8][9]The iteration is based onHeron's methodfor the square root of1and computes, starting fromU0=A{\displaystyle U_{0}=A}, the sequenceUk+1=12(Uk+(Uk∗)−1),k=0,1,2,…{\displaystyle U_{k+1}={\frac {1}{2}}\left(U_{k}+\left(U_{k}^{*}\right)^{-1}\right),\qquad k=0,1,2,\ldots }
The combination of inversion and Hermite conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
This basic iteration may be refined to speed up the process:
γk=‖Uk−1‖1‖Uk−1‖∞‖Uk‖1‖Uk‖∞4{\displaystyle \gamma _{k}={\sqrt[{4}]{\frac {\left\|U_{k}^{-1}\right\|_{1}\left\|U_{k}^{-1}\right\|_{\infty }}{\left\|U_{k}\right\|_{1}\left\|U_{k}\right\|_{\infty }}}}}using the row-sum and column-summatrix normsorγk=‖Uk−1‖F‖Uk‖F{\displaystyle \gamma _{k}={\sqrt {\frac {\left\|U_{k}^{-1}\right\|_{F}}{\left\|U_{k}\right\|_{F}}}}}using theFrobenius norm. Including the scale factor, the iteration is nowUk+1=12(γkUk+γk−1(Uk∗)−1),k=0,1,2,…{\displaystyle U_{k+1}={\frac {1}{2}}\left(\gamma _{k}U_{k}+\gamma _{k}^{-1}\left(U_{k}^{*}\right)^{-1}\right),\qquad k=0,1,2,\ldots }
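The basic (unscaled) iteration is easy to try out. The NumPy sketch below (our own helper, assuming a real nonsingular input) runs Heron's iteration and compares the limit with the SVD-based orthogonal factor:

```python
import numpy as np

def polar_unitary_newton(a, tol=1e-12, max_iter=100):
    """Newton iteration u_{k+1} = (u_k + u_k^{-*})/2 for the unitary polar factor."""
    u = a.astype(float).copy()
    for _ in range(max_iter):
        u_next = 0.5 * (u + np.linalg.inv(u).T)  # real case: u^{-*} = (u^{-1})^T
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5))
u = polar_unitary_newton(a)
w, _, vt = np.linalg.svd(a)
assert np.allclose(u, w @ vt)           # matches the SVD-based unitary factor
assert np.allclose(u @ u.T, np.eye(5))  # the limit is orthogonal
```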
|
https://en.wikipedia.org/wiki/Polar_decomposition
|
Inlinear algebra, theSchmidt decomposition(named after its originatorErhard Schmidt) refers to a particular way of expressing avectorin thetensor productof twoinner product spaces. It has numerous applications inquantum information theory, for example inentanglementcharacterization and instate purification, andplasticity.
LetH1{\displaystyle H_{1}}andH2{\displaystyle H_{2}}beHilbert spacesofdimensionsnandmrespectively. Assumen≥m{\displaystyle n\geq m}. For any vectorw{\displaystyle w}in the tensor productH1⊗H2{\displaystyle H_{1}\otimes H_{2}}, there exist orthonormal sets{u1,…,um}⊂H1{\displaystyle \{u_{1},\ldots ,u_{m}\}\subset H_{1}}and{v1,…,vm}⊂H2{\displaystyle \{v_{1},\ldots ,v_{m}\}\subset H_{2}}such thatw=∑i=1mαiui⊗vi{\textstyle w=\sum _{i=1}^{m}\alpha _{i}u_{i}\otimes v_{i}}, where the scalarsαi{\displaystyle \alpha _{i}}are real, non-negative, and unique up to re-ordering.
The Schmidt decomposition is essentially a restatement of thesingular value decompositionin a different context. Fix orthonormal bases{e1,…,en}⊂H1{\displaystyle \{e_{1},\ldots ,e_{n}\}\subset H_{1}}and{f1,…,fm}⊂H2{\displaystyle \{f_{1},\ldots ,f_{m}\}\subset H_{2}}. We can identify an elementary tensorei⊗fj{\displaystyle e_{i}\otimes f_{j}}with the matrixeifjT{\displaystyle e_{i}f_{j}^{\mathsf {T}}}, wherefjT{\displaystyle f_{j}^{\mathsf {T}}}is thetransposeoffj{\displaystyle f_{j}}. A general element of the tensor product
can then be viewed as then×mmatrix
By thesingular value decomposition, there exist ann×nunitaryU,m×munitaryV, and apositive semidefinitediagonalm×mmatrix Σ such that
WriteU=[U1U2]{\displaystyle U={\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}}}whereU1{\displaystyle U_{1}}isn×mand we have
Let{u1,…,um}{\displaystyle \{u_{1},\ldots ,u_{m}\}}be themcolumn vectors ofU1{\displaystyle U_{1}},{v1,…,vm}{\displaystyle \{v_{1},\ldots ,v_{m}\}}the column vectors ofV¯{\displaystyle {\overline {V}}}, andα1,…,αm{\displaystyle \alpha _{1},\ldots ,\alpha _{m}}the diagonal elements of Σ. The previous expression is then
Then
which proves the claim.
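The proof above is constructive: reshaping the coefficient vector into an n × m matrix and taking its SVD yields the decomposition. A minimal NumPy sketch (function name ours), stated for the computational bases:

```python
import numpy as np

def schmidt_decomposition(w, n, m):
    """Schmidt coefficients and vectors of w in C^n tensor C^m (computational bases)."""
    mat = w.reshape(n, m)                 # identify e_i (x) f_j with the (i, j) entry
    u, alphas, vt = np.linalg.svd(mat, full_matrices=False)
    return alphas, u.T, vt                # rows of u.T and vt are the Schmidt vectors

rng = np.random.default_rng(2)
w = rng.standard_normal(9) + 1j * rng.standard_normal(9)
w /= np.linalg.norm(w)                    # a generic two-qutrit state (n = m = 3)
alphas, us, vs = schmidt_decomposition(w, 3, 3)
w_rec = sum(a * np.kron(u_k, v_k) for a, u_k, v_k in zip(alphas, us, vs))
assert np.allclose(w_rec, w)              # w = sum_k alpha_k u_k (x) v_k
assert np.all(alphas >= 0)                # coefficients are real and non-negative
```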
Some properties of the Schmidt decomposition are of physical interest.
Consider a vectorw{\displaystyle w}of the tensor product
in the form of Schmidt decomposition
Form the rank 1 matrixρ=ww∗{\displaystyle \rho =ww^{*}}. Then thepartial traceofρ{\displaystyle \rho }, with respect to either systemAorB, is a diagonal matrix whose non-zero diagonal elements are|αi|2{\displaystyle |\alpha _{i}|^{2}}. In other words, the Schmidt decomposition shows that the reduced states ofρ{\displaystyle \rho }on either subsystem have the same spectrum.
The strictly positive valuesαi{\displaystyle \alpha _{i}}in the Schmidt decomposition ofw{\displaystyle w}are itsSchmidt coefficients, orSchmidt numbers. The total number of Schmidt coefficients ofw{\displaystyle w}, counted with multiplicity, is called itsSchmidt rank.
Ifw{\displaystyle w}can be expressed as a product
thenw{\displaystyle w}is called aseparable state. Otherwise,w{\displaystyle w}is said to be anentangled state. From the Schmidt decomposition, we can see thatw{\displaystyle w}is entangled if and only ifw{\displaystyle w}has Schmidt rank strictly greater than 1. Therefore, two subsystems that partition a pure state are entangled if and only if their reduced states are mixed states.
A consequence of the above comments is that, for pure states, thevon Neumann entropyof the reduced states is a well-defined measure ofentanglement. For the von Neumann entropy of both reduced states ofρ{\displaystyle \rho }is−∑i|αi|2log(|αi|2){\textstyle -\sum _{i}|\alpha _{i}|^{2}\log \left(|\alpha _{i}|^{2}\right)}, and this is zero if and only ifρ{\displaystyle \rho }is a product state (not entangled).
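This entropy can be computed directly from the Schmidt coefficients (a small NumPy sketch with our own helper name); the Bell state (|00⟩ + |11⟩)/√2 gives log 2, while a product state gives 0:

```python
import numpy as np

def entanglement_entropy(w, n, m):
    """von Neumann entropy of either reduced state of the pure state w in C^n tensor C^m."""
    alphas = np.linalg.svd(w.reshape(n, m), compute_uv=False)  # Schmidt coefficients
    p = alphas[alphas > 1e-12] ** 2                            # reduced-state spectrum
    return float(-np.sum(p * np.log(p)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
product = np.kron([1, 0], [0, 1]).astype(float)  # |0> tensor |1>
assert np.isclose(entanglement_entropy(bell, 2, 2), np.log(2))
assert np.isclose(entanglement_entropy(product, 2, 2), 0.0)
```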
The Schmidt rank is defined for bipartite systems, namely quantum states
|ψ⟩∈HA⊗HB{\displaystyle |\psi \rangle \in H_{A}\otimes H_{B}}
The concept of Schmidt rank can be extended to quantum systems made up of more than two subsystems.[1]
Consider the tripartite quantum system:
|ψ⟩∈HA⊗HB⊗HC{\displaystyle |\psi \rangle \in H_{A}\otimes H_{B}\otimes H_{C}}
There are three ways to reduce this to a bipartite system by performing thepartial tracewith respect toHA,HB{\displaystyle H_{A},H_{B}}orHC{\displaystyle H_{C}}
{ρ^A=TrA(|ψ⟩⟨ψ|)ρ^B=TrB(|ψ⟩⟨ψ|)ρ^C=TrC(|ψ⟩⟨ψ|){\displaystyle {\begin{cases}{\hat {\rho }}_{A}=Tr_{A}(|\psi \rangle \langle \psi |)\\{\hat {\rho }}_{B}=Tr_{B}(|\psi \rangle \langle \psi |)\\{\hat {\rho }}_{C}=Tr_{C}(|\psi \rangle \langle \psi |)\end{cases}}}
Each of the systems obtained is a bipartite system and therefore can be characterized by one number (its Schmidt rank), respectivelyrA,rB{\displaystyle r_{A},r_{B}}andrC{\displaystyle r_{C}}. These numbers capture the "amount of entanglement" in the bipartite system when respectively A, B or C are discarded. For these reasons the tripartite system can be described by a vector, namely the Schmidt-rank vector
r→=(rA,rB,rC){\displaystyle {\vec {r}}=(r_{A},r_{B},r_{C})}
The concept of Schmidt-rank vector can be likewise extended to systems made up of more than three subsystems through the use oftensors.
Take the tripartite quantum state|ψ4,2,2⟩=12(|0,0,0⟩+|1,0,1⟩+|2,1,0⟩+|3,1,1⟩){\displaystyle |\psi _{4,2,2}\rangle ={\frac {1}{2}}{\big (}|0,0,0\rangle +|1,0,1\rangle +|2,1,0\rangle +|3,1,1\rangle {\big )}}
This kind of system is made possible by encoding the value of aquditinto theorbital angular momentum (OAM)of a photon rather than itsspin, since the latter can only take two values.
The Schmidt-rank vector for this quantum state is(4,2,2){\displaystyle (4,2,2)}.
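The Schmidt-rank vector can be computed by flattening the state tensor along each subsystem in turn and taking matrix ranks (a NumPy sketch; the helper name is ours):

```python
import numpy as np

# |psi_{4,2,2}> = (|0,0,0> + |1,0,1> + |2,1,0> + |3,1,1>)/2 as a 4x2x2 tensor
psi = np.zeros((4, 2, 2))
for a, b, c in [(0, 0, 0), (1, 0, 1), (2, 1, 0), (3, 1, 1)]:
    psi[a, b, c] = 0.5

def schmidt_rank_vector(t):
    """Rank of each single-subsystem-versus-rest bipartition of a tripartite tensor."""
    ranks = []
    for axis in range(3):
        mat = np.moveaxis(t, axis, 0).reshape(t.shape[axis], -1)
        ranks.append(int(np.linalg.matrix_rank(mat)))
    return tuple(ranks)

assert schmidt_rank_vector(psi) == (4, 2, 2)
```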
|
https://en.wikipedia.org/wiki/Schmidt_decomposition
|
Inmathematics, theSmith normal form(sometimes abbreviatedSNF[1]) is anormal formthat can be defined for anymatrix(not necessarilysquare) with entries in aprincipal ideal domain(PID). The Smith normal form of a matrix isdiagonal, and can be obtained from the original matrix by multiplying on the left and right byinvertiblesquare matrices. In particular, theintegersare a PID, so one can always calculate the Smith normal form of aninteger matrix. The Smith normal form is very useful for working withfinitely generatedmodulesover a PID, and in particular for deducing the structure of aquotientof afree module. It is named after the Irish mathematicianHenry John Stephen Smith.[2][3]
LetA{\displaystyle A}be a nonzerom×n{\displaystyle m\times n}matrix over aprincipal ideal domainR{\displaystyle R}. There exist invertiblem×m{\displaystyle m\times m}andn×n{\displaystyle n\times n}-matricesS,T{\displaystyle S,T}(with entries inR{\displaystyle R}) such that the productSAT{\displaystyle SAT}is
(α100⋯0⋯00α2000⋱⋮⋮⋮αr0⋯0⋯0⋮⋮⋮0⋯0⋯0).{\displaystyle {\begin{pmatrix}\alpha _{1}&0&0&\cdots &0&\cdots &0\\0&\alpha _{2}&0&&&&\\0&0&\ddots &&\vdots &&\vdots \\\vdots &&&\alpha _{r}&&&\\0&&\cdots &&0&\cdots &0\\\vdots &&&&\vdots &&\vdots \\0&&\cdots &&0&\cdots &0\end{pmatrix}}.}
and the diagonal elementsαi{\displaystyle \alpha _{i}}satisfyαi∣αi+1{\displaystyle \alpha _{i}\mid \alpha _{i+1}}for all1≤i<r{\displaystyle 1\leq i<r}. This is the Smith normal form of the matrixA{\displaystyle A}. The elementsαi{\displaystyle \alpha _{i}}are uniqueup tomultiplication by aunitand are called theelementary divisors,invariants, orinvariant factors. They can be computed (up to multiplication by a unit) as
wheredi(A){\displaystyle d_{i}(A)}(calledi-thdeterminant divisor) equals thegreatest common divisorof the determinants of alli×i{\displaystyle i\times i}minorsof the matrixA{\displaystyle A}andd0(A):=1{\displaystyle d_{0}(A):=1}.
Example :For a2×2{\displaystyle 2\times 2}matrix,SNF(abcd)=diag(d1,d2/d1){\displaystyle {\rm {SNF}}{a~~b \choose c~~d}={\rm {diag}}(d_{1},d_{2}/d_{1})}withd1=gcd(a,b,c,d){\displaystyle d_{1}=\gcd(a,b,c,d)}andd2=|ad−bc|{\displaystyle d_{2}=|ad-bc|}.
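The 2 × 2 formula is easy to check in code. The sketch below (our own helper, assuming a nonzero determinant) computes diag(d₁, d₂/d₁) from the determinant divisors:

```python
from math import gcd

def snf_2x2(a, b, c, d):
    """Smith normal form diag(d1, d2/d1) of [[a, b], [c, d]] over the integers,
    via the determinant-divisor formula (assumes a nonzero determinant)."""
    d1 = gcd(gcd(a, b), gcd(c, d))   # gcd of all 1x1 minors
    d2 = abs(a * d - b * c)          # |det|, the single 2x2 minor
    return d1, d2 // d1

assert snf_2x2(2, 4, 6, 8) == (2, 4)   # d1 = 2, |det| = |16 - 24| = 8
assert snf_2x2(2, 1, 0, 2) == (1, 4)   # d1 = 1, |det| = 4
assert snf_2x2(2, 4, 6, 8)[1] % snf_2x2(2, 4, 6, 8)[0] == 0  # divisibility chain
```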
The first goal is to find invertible square matricesS{\displaystyle S}andT{\displaystyle T}such that the productSAT{\displaystyle SAT}is diagonal. This is the hardest part of the algorithm. Once diagonality is achieved, it becomes relatively easy to put the matrix into Smith normal form. Phrased more abstractly, the goal is to show that, thinking ofA{\displaystyle A}as a map fromRn{\displaystyle R^{n}}(the freeR{\displaystyle R}-module of rankn{\displaystyle n}) toRm{\displaystyle R^{m}}(the freeR{\displaystyle R}-module of rankm{\displaystyle m}), there areisomorphismsS:Rm→Rm{\displaystyle S:R^{m}\to R^{m}}andT:Rn→Rn{\displaystyle T:R^{n}\to R^{n}}such thatS⋅A⋅T{\displaystyle S\cdot A\cdot T}has the simple form of a diagonal matrix. The matricesS{\displaystyle S}andT{\displaystyle T}can be found by starting out with identity matrices of the appropriate size, and modifyingS{\displaystyle S}each time a row operation is performed onA{\displaystyle A}in the algorithm by the corresponding column operation (for example, if rowi{\displaystyle i}is added to rowj{\displaystyle j}ofA{\displaystyle A}, then columnj{\displaystyle j}should be subtracted from columni{\displaystyle i}ofS{\displaystyle S}to retain the product invariant), and similarly modifyingT{\displaystyle T}for each column operation performed. Since row operations are left-multiplications and column operations are right-multiplications, this preserves the invariantA′=S′⋅A⋅T′{\displaystyle A'=S'\cdot A\cdot T'}whereA′,S′,T′{\displaystyle A',S',T'}denote current values andA{\displaystyle A}denotes the original matrix; eventually the matrices in this invariant become diagonal. Only invertible row and column operations are performed, which ensures thatS{\displaystyle S}andT{\displaystyle T}remain invertible matrices.
Fora∈R∖{0}{\displaystyle a\in R\setminus \{0\}}, writeδ(a){\displaystyle \delta (a)}for the number of prime factors ofa{\displaystyle a}(these exist and are unique since any PID is also aunique factorization domain). In particular,R{\displaystyle R}is also aBézout domain, so it is agcd domainand the gcd of any two elements satisfies aBézout's identity.
To put a matrix into Smith normal form, one can repeatedly apply the following, wheret{\displaystyle t}loops from 1 tom{\displaystyle m}.
Choosejt{\displaystyle j_{t}}to be the smallest column index ofA{\displaystyle A}with a non-zero entry, starting the search at column indexjt−1+1{\displaystyle j_{t-1}+1}ift>1{\displaystyle t>1}.
We wish to haveat,jt≠0{\displaystyle a_{t,j_{t}}\neq 0}; if this is the case this step is complete, otherwise there is by assumption somek{\displaystyle k}withak,jt≠0{\displaystyle a_{k,j_{t}}\neq 0}, and we can exchange rowst{\displaystyle t}andk{\displaystyle k}, thereby obtainingat,jt≠0{\displaystyle a_{t,j_{t}}\neq 0}.
Our chosen pivot is now at position(t,jt){\displaystyle (t,j_{t})}.
If there is an entry at position (k,jt) such thatat,jt∤ak,jt{\displaystyle a_{t,j_{t}}\nmid a_{k,j_{t}}}, then, lettingβ=gcd(at,jt,ak,jt){\displaystyle \beta =\gcd \left(a_{t,j_{t}},a_{k,j_{t}}\right)}, we know by the Bézout property that there exist σ, τ inRsuch that
By left-multiplication with an appropriate invertible matrixL, it can be achieved that rowtof the matrix product is the sum of σ times the original rowtand τ times the original rowk, that rowkof the product is anotherlinear combinationof those original rows, and that all other rows are unchanged. Explicitly, if σ and τ satisfy the above equation, then forα=at,jt/β{\displaystyle \alpha =a_{t,j_{t}}/\beta }andγ=ak,jt/β{\displaystyle \gamma =a_{k,j_{t}}/\beta }(which divisions are possible by the definition of β) one has
so that the matrix
is invertible, with inverse
NowLcan be obtained by fittingL0{\displaystyle L_{0}}into rows and columnstandkof theidentity matrix. By construction the matrix obtained after left-multiplying byLhas entry β at position (t,jt) (and due to our choice of α and γ it also has an entry 0 at position (k,jt), which is useful though not essential for the algorithm). This new entry β divides the entryat,jt{\displaystyle a_{t,j_{t}}}that was there before, and so in particularδ(β)<δ(at,jt){\displaystyle \delta (\beta )<\delta (a_{t,j_{t}})}; therefore repeating these steps must eventually terminate. One ends up with a matrix having an entry at position (t,jt) that divides all entries in columnjt.
Finally, adding appropriate multiples of rowt, it can be achieved that all entries in columnjtexcept for that at position (t,jt) are zero. This can be achieved by left-multiplication with an appropriate matrix. However, to make the matrix fully diagonal we need to eliminate nonzero entries on the row of position (t,jt) as well. This can be achieved by repeating the steps in Step II for columns instead of rows, and using multiplication on the right by thetransposeof the obtained matrixL. In general this will result in the zero entries from the prior application of Step III becoming nonzero again.
However, notice that each application of Step II for either rows or columns must continue to reduce the value ofδ(at,jt){\displaystyle \delta (a_{t,j_{t}})}, and so the process must eventually stop after some number of iterations, leading to a matrix where the entry at position (t,jt) is the only non-zero entry in both its row and column.
At this point, only the block ofAto the lower right of (t,jt) needs to be diagonalized, and conceptually the algorithm can be applied recursively, treating this block as a separate matrix. In other words, we can incrementtby one and go back to Step I.
Applying the steps described above to the remaining non-zero columns of the resulting matrix (if any), we get anm×n{\displaystyle m\times n}-matrix with column indicesj1<…<jr{\displaystyle j_{1}<\ldots <j_{r}}wherer≤min(m,n){\displaystyle r\leq \min(m,n)}. The matrix entries(l,jl){\displaystyle (l,j_{l})}are non-zero, and every other entry is zero.
Now we can move the null columns of this matrix to the right, so that the nonzero entries are on positions(i,i){\displaystyle (i,i)}for1≤i≤r{\displaystyle 1\leq i\leq r}. For short, setαi{\displaystyle \alpha _{i}}for the element at position(i,i){\displaystyle (i,i)}.
The condition of divisibility of diagonal entries might not be satisfied. For any indexi<r{\displaystyle i<r}for whichαi∤αi+1{\displaystyle \alpha _{i}\nmid \alpha _{i+1}}, one can repair this shortcoming by operations on rows and columnsi{\displaystyle i}andi+1{\displaystyle i+1}only: first add columni+1{\displaystyle i+1}to columni{\displaystyle i}to get an entryαi+1{\displaystyle \alpha _{i+1}}in columniwithout disturbing the entryαi{\displaystyle \alpha _{i}}at position(i,i){\displaystyle (i,i)}, and then apply a row operation to make the entry at position(i,i){\displaystyle (i,i)}equal toβ=gcd(αi,αi+1){\displaystyle \beta =\gcd(\alpha _{i},\alpha _{i+1})}as in Step II; finally proceed as in Step III to make the matrix diagonal again. Since the new entry at position(i+1,i+1){\displaystyle (i+1,i+1)}is a linear combination of the originalαi,αi+1{\displaystyle \alpha _{i},\alpha _{i+1}}, it is divisible by β.
The valueδ(α1)+⋯+δ(αr){\displaystyle \delta (\alpha _{1})+\cdots +\delta (\alpha _{r})}does not change by the above operation (it is δ of the determinant of the upperr×r{\displaystyle r\times r}submatrix), whence that operation does diminish (by moving prime factors to the right) the value of
So after finitely many applications of this operation no further application is possible, which means that we have obtainedα1∣α2∣⋯∣αr{\displaystyle \alpha _{1}\mid \alpha _{2}\mid \cdots \mid \alpha _{r}}as desired.
Since all row and column manipulations involved in the process are invertible, this shows that there exist invertiblem×m{\displaystyle m\times m}andn×n{\displaystyle n\times n}-matricesS, Tso that the productS A Tsatisfies the definition of a Smith normal form. In particular, this shows that the Smith normal form exists, which was assumed without proof in the definition.
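The steps above can be sketched as a short (unoptimized) program for integer matrices. This is our own illustrative implementation; it returns only the diagonal entries and does not track the transformation matrices S and T:

```python
def smith_normal_form(mat):
    """Diagonal of the Smith normal form of an integer matrix (Steps I-IV)."""
    a = [list(map(int, row)) for row in mat]
    m, n = len(a), len(a[0])

    def swap_cols(i, j):
        for r in range(m):
            a[r][i], a[r][j] = a[r][j], a[r][i]

    for t in range(min(m, n)):
        # Step I: move a nonzero entry of the remaining block to position (t, t)
        piv = next(((i, j) for i in range(t, m) for j in range(t, n) if a[i][j]), None)
        if piv is None:
            break
        a[t], a[piv[0]] = a[piv[0]], a[t]
        swap_cols(t, piv[1])
        while True:
            # Steps II-III: Euclidean row/column operations until (t, t) is the only
            # nonzero entry in its row and column (each swap shrinks |pivot|, so
            # this terminates)
            changed = True
            while changed:
                changed = False
                for i in range(t + 1, m):
                    if a[i][t]:
                        q = a[i][t] // a[t][t]
                        for k in range(t, n):
                            a[i][k] -= q * a[t][k]
                        if a[i][t]:            # nonzero remainder: swap it up
                            a[t], a[i] = a[i], a[t]
                        changed = True
                for j in range(t + 1, n):
                    if a[t][j]:
                        q = a[t][j] // a[t][t]
                        for r in range(t, m):
                            a[r][j] -= q * a[r][t]
                        if a[t][j]:
                            swap_cols(t, j)
                        changed = True
            # Step IV: make the pivot divide every entry of the remaining block
            bad = next(((i, j) for i in range(t + 1, m) for j in range(t + 1, n)
                        if a[i][j] % a[t][t]), None)
            if bad is None:
                break
            for k in range(t, n):              # add the offending row to row t
                a[t][k] += a[bad[0]][k]
    return [abs(a[i][i]) for i in range(min(m, n))]

assert smith_normal_form([[2, 1], [0, 2]]) == [1, 4]
assert smith_normal_form([[6, 0], [0, 4]]) == [2, 12]
assert smith_normal_form([[2, 4], [6, 8]]) == [2, 4]
```

The test cases can be cross-checked with the determinant-divisor formula: for [[6, 0], [0, 4]], d₁ = gcd(6, 4) = 2 and d₂ = |det| = 24, giving diag(2, 12).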
The Smith normal form is useful for computing thehomologyof achain complexwhen the chain modules of the chain complex arefinitely generated. For instance, intopology, it can be used to compute the homology of a finitesimplicial complexorCW complexover the integers, because the boundary maps in such a complex are just integer matrices. It can also be used to determine theinvariant factorsthat occur in thestructure theorem for finitely generated modules over a principal ideal domain, which includes thefundamental theorem of finitely generated abelian groups.
The Smith normal form is also used incontrol theoryto computetransmission and blocking zerosof atransfer function matrix.[4]
As an example, we will find the Smith normal form of the following matrix over the integers.
The following matrices are the intermediate steps as the algorithm is applied to the above matrix.
So the Smith normal form is
and the invariant factors are 2, 2 and 156.
The Smith Normal Form of anN-by-NmatrixAcan be computed in timeO(‖A‖log‖A‖N4logN){\displaystyle O(\|A\|\log \|A\|N^{4}\log N)}.[5]If the matrix issparse, the computation is typically much faster.
The Smith normal form can be used to determine whether or not matrices with entries over a commonfieldK{\displaystyle K}aresimilar. Specifically two matricesAandBare similarif and only ifthecharacteristic matricesxI−A{\displaystyle xI-A}andxI−B{\displaystyle xI-B}have the same Smith normal form (working in the PIDK[x]{\displaystyle K[x]}).
For example, with
AandBare similar because the Smith normal form of their characteristic matrices match, but are not similar toCbecause the Smith normal form of the characteristic matrices do not match.
|
https://en.wikipedia.org/wiki/Smith_normal_form
|
Inmathematics, in particularfunctional analysis, thesingular valuesof acompact operatorT:X→Y{\displaystyle T:X\rightarrow Y}acting betweenHilbert spacesX{\displaystyle X}andY{\displaystyle Y}, are the square roots of the (necessarily non-negative)eigenvaluesof the self-adjoint operatorT∗T{\displaystyle T^{*}T}(whereT∗{\displaystyle T^{*}}denotes theadjointofT{\displaystyle T}).
The singular values are non-negativereal numbers, usually listed in decreasing order (σ1(T),σ2(T), …). The largest singular valueσ1(T) is equal to theoperator normofT(seeMin-max theorem).
IfTacts on Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}, there is a simple geometric interpretation for the singular values: Consider the image byT{\displaystyle T}of theunit sphere; this is anellipsoid, and the lengths of its semi-axes are the singular values ofT{\displaystyle T}(the figure provides an example inR2{\displaystyle \mathbb {R} ^{2}}).
The singular values are the absolute values of theeigenvaluesof anormal matrixA, because thespectral theoremcan be applied to obtain unitary diagonalization ofA{\displaystyle A}asA=UΛU∗{\displaystyle A=U\Lambda U^{*}}. Therefore,A∗A=UΛ∗ΛU∗=U|Λ|U∗{\textstyle {\sqrt {A^{*}A}}={\sqrt {U\Lambda ^{*}\Lambda U^{*}}}=U\left|\Lambda \right|U^{*}}.
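A quick numerical illustration with a normal (here skew-symmetric) matrix, whose eigenvalues ±2i have absolute value 2:

```python
import numpy as np

a = np.array([[0., -2.], [2., 0.]])      # skew-symmetric, hence normal
assert np.allclose(a @ a.T, a.T @ a)     # normality: A A* = A* A
singular = np.linalg.svd(a, compute_uv=False)
eig_abs = np.abs(np.linalg.eigvals(a))   # eigenvalues are +/- 2i
assert np.allclose(np.sort(singular), np.sort(eig_abs))  # both spectra are (2, 2)
```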
Mostnormson Hilbert space operators studied are defined using singular values. For example, theKy Fan-k-norm is the sum of firstksingular values, the trace norm is the sum of all singular values, and theSchatten normis thepth root of the sum of thepth powers of the singular values. Note that each norm is defined only on a special class of operators, hence singular values can be useful in classifying different operators.
In the finite-dimensional case, amatrixcan always be decomposed in the formUΣV∗{\displaystyle \mathbf {U\Sigma V^{*}} }, whereU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V^{*}} }areunitary matricesandΣ{\displaystyle \mathbf {\Sigma } }is arectangular diagonal matrixwith the singular values lying on the diagonal. This is thesingular value decomposition.
ForA∈Cm×n{\displaystyle A\in \mathbb {C} ^{m\times n}}, andi=1,2,…,min{m,n}{\displaystyle i=1,2,\ldots ,\min\{m,n\}}.
Min-max theorem for singular values:σi(A)=maxU:dim(U)=iminx∈U,‖x‖2=1‖Ax‖2{\displaystyle \sigma _{i}(A)=\max _{U:\dim(U)=i}\ \min _{x\in U,\|x\|_{2}=1}\|Ax\|_{2}}. HereU:dim(U)=i{\displaystyle U:\dim(U)=i}is a subspace ofCn{\displaystyle \mathbb {C} ^{n}}of dimensioni{\displaystyle i}.
Matrix transpose and conjugate do not alter singular values.
For any unitaryU∈Cm×m,V∈Cn×n.{\displaystyle U\in \mathbb {C} ^{m\times m},V\in \mathbb {C} ^{n\times n}.}
Relation to eigenvalues:
Relation totrace:
IfA∗A{\displaystyle A^{*}A}is full rank, the product of singular values isdetA∗A{\displaystyle \det {\sqrt {A^{*}A}}}.
IfAA∗{\displaystyle AA^{*}}is full rank, the product of singular values isdetAA∗{\displaystyle \det {\sqrt {AA^{*}}}}.
IfA{\displaystyle A}is square and full rank, the product of singular values is|detA|{\displaystyle |\det A|}.
IfA{\displaystyle A}isnormal, thenσ(A)=|λ(A)|{\displaystyle \sigma (A)=|\lambda (A)|}, that is, its singular values are the absolute values of its eigenvalues.
For a generic rectangular matrixA{\displaystyle A}, letA~=[0AA∗0]{\textstyle {\tilde {A}}={\begin{bmatrix}0&A\\A^{*}&0\end{bmatrix}}}be its augmented matrix. It has eigenvalues±σ(A){\textstyle \pm \sigma (A)}(whereσ(A){\textstyle \sigma (A)}are the singular values ofA{\textstyle A}) and the remaining eigenvalues are zero. LetA=UΣV∗{\textstyle A=U\Sigma V^{*}}be the singular value decomposition, then the eigenvectors ofA~{\textstyle {\tilde {A}}}are[ui±vi]{\textstyle {\begin{bmatrix}\mathbf {u} _{i}\\\pm \mathbf {v} _{i}\end{bmatrix}}}for±σi{\displaystyle \pm \sigma _{i}}[1]: 52
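This relationship between the augmented matrix and the singular values is easy to verify numerically (a NumPy sketch; variable names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((2, 3))
aug = np.block([[np.zeros((2, 2)), a],
                [a.T, np.zeros((3, 3))]])   # 5x5 symmetric augmented matrix
sigma = np.linalg.svd(a, compute_uv=False)
eigs = np.linalg.eigvalsh(aug)
expected = np.sort(np.concatenate([sigma, -sigma, [0.0]]))  # +/- sigma plus one zero
assert np.allclose(np.sort(eigs), expected)
```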
The smallest singular value of a matrixAisσn(A). It has the following properties for a non-singular matrix A:
Intuitively, ifσn(A) is small, then the rows ofAare "almost" linearly dependent. Ifσn(A) = 0, then the rows ofAare linearly dependent andAis not invertible.
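Two standard consequences for an invertible matrix A: the spectral norm of A⁻¹ equals 1/σn(A), and subtracting the smallest singular component yields a singular matrix. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal((4, 4))
u, s, vt = np.linalg.svd(a)
sigma_min = s[-1]
# the spectral norm of the inverse is the reciprocal of the smallest singular value
assert np.isclose(np.linalg.norm(np.linalg.inv(a), 2), 1.0 / sigma_min)
# removing the smallest singular component produces a singular (rank-deficient) matrix
a_sing = a - sigma_min * np.outer(u[:, -1], vt[-1])
assert np.isclose(np.linalg.det(a_sing), 0.0, atol=1e-10)
```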
See also.[3]
ForA∈Cm×n.{\displaystyle A\in \mathbb {C} ^{m\times n}.}
ForA,B∈Cm×n{\displaystyle A,B\in \mathbb {C} ^{m\times n}}
ForA,B∈Cn×n{\displaystyle A,B\in \mathbb {C} ^{n\times n}}
ForA,B∈Cm×n{\displaystyle A,B\in \mathbb {C} ^{m\times n}}[4]2σi(AB∗)≤σi(A∗A+B∗B),i=1,2,…,n.{\displaystyle 2\sigma _{i}(AB^{*})\leq \sigma _{i}\left(A^{*}A+B^{*}B\right),\quad i=1,2,\ldots ,n.}
ForA∈Cn×n{\displaystyle A\in \mathbb {C} ^{n\times n}}.
This concept was introduced byErhard Schmidtin 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of thenth singular number:[6]
This formulation made it possible to extend the notion of singular values to operators inBanach space.
Note that there is a more general concept ofs-numbers, which also includes Gelfand and Kolmogorov width.
|
https://en.wikipedia.org/wiki/Singular_value
|
Inlinear algebra,two-dimensional singular-value decomposition(2DSVD) computes thelow-rank approximationof a set ofmatricessuch as2Dimages or weather maps in a manner almost identical to SVD (singular-value decomposition) which computes the low-rank approximation of a single matrix (or a set of 1D vectors).
Let matrixX=[x1,…,xn]{\displaystyle X=[\mathbf {x} _{1},\ldots ,\mathbf {x} _{n}]}contain the set of 1D vectors which have been centered. In PCA/SVD, we construct the covariance matrixF{\displaystyle F}and Gram matrixG{\displaystyle G}
and compute their eigenvectorsU=[u1,…,un]{\displaystyle U=[\mathbf {u} _{1},\ldots ,\mathbf {u} _{n}]}andV=[v1,…,vn]{\displaystyle V=[\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}]}. SinceVVT=I{\displaystyle VV^{\mathsf {T}}=I}andUUT=I{\displaystyle UU^{\mathsf {T}}=I}we have
If we retain onlyK{\displaystyle K}principal eigenvectors inU,V{\displaystyle U,V}, this gives low-rank approximation ofX{\displaystyle X}.
Here we deal with a set of 2D matrices(X1,…,Xn){\displaystyle (X_{1},\ldots ,X_{n})}. Suppose they are centered∑iXi=0{\textstyle \sum _{i}X_{i}=0}. We construct row–row and column–column covariance matrices
in exactly the same manner as in SVD, and compute their eigenvectorsU{\displaystyle U}andV{\displaystyle V}. We approximateXi{\displaystyle X_{i}}as
in identical fashion as in SVD. This gives a near optimal low-rank approximation of(X1,…,Xn){\displaystyle (X_{1},\ldots ,X_{n})}with the objective function
Error bounds similar to those of theEckart–Young theoremalso exist.
2DSVD is mostly used inimage compressionand representation.
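The construction described above can be sketched in a few lines of NumPy. All names and dimensions here are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 6, 8))      # n = 10 matrices of size 6x8
X = X - X.mean(axis=0)                   # center: sum_i X_i = 0

F = sum(Xi @ Xi.T for Xi in X)           # row-row covariance, 6x6
G = sum(Xi.T @ Xi for Xi in X)           # column-column covariance, 8x8

K = 3                                    # number of principal eigenvectors kept
U = np.linalg.eigh(F)[1][:, ::-1][:, :K] # K leading eigenvectors of F
V = np.linalg.eigh(G)[1][:, ::-1][:, :K] # K leading eigenvectors of G

# Core matrices M_i = U^T X_i V, then low-rank approximation U M_i V^T.
M = np.einsum('ri,nrc,cj->nij', U, X, V)
X_hat = np.einsum('ri,nij,cj->nrc', U, M, V)

err = np.sum((X - X_hat) ** 2)           # the 2DSVD objective function
assert err < np.sum(X ** 2)              # projection captures some energy
```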
|
https://en.wikipedia.org/wiki/Two-dimensional_singular-value_decomposition
|
Inmathematics, there are many kinds ofinequalitiesinvolvingmatricesandlinear operatorsonHilbert spaces. This article covers some important operator inequalities connected withtracesof matrices.[1][2][3][4]
LetHn{\displaystyle \mathbf {H} _{n}}denote the space ofHermitiann×n{\displaystyle n\times n}matrices,Hn+{\displaystyle \mathbf {H} _{n}^{+}}denote the set consisting ofpositive semi-definiten×n{\displaystyle n\times n}Hermitian matrices andHn++{\displaystyle \mathbf {H} _{n}^{++}}denote the set ofpositive definiteHermitian matrices. For operators on an infinite dimensional Hilbert space we require that they betrace classandself-adjoint, in which case similar definitions apply, but we discuss only matrices, for simplicity.
For any real-valued functionf{\displaystyle f}on an intervalI⊆R,{\displaystyle I\subseteq \mathbb {R} ,}one may define amatrix functionf(A){\displaystyle f(A)}for any operatorA∈Hn{\displaystyle A\in \mathbf {H} _{n}}witheigenvaluesλ{\displaystyle \lambda }inI{\displaystyle I}by defining it on the eigenvalues and correspondingprojectorsP{\displaystyle P}asf(A)≡∑jf(λj)Pj,{\displaystyle f(A)\equiv \sum _{j}f(\lambda _{j})P_{j}~,}given thespectral decompositionA=∑jλjPj.{\displaystyle A=\sum _{j}\lambda _{j}P_{j}.}
A functionf:I→R{\displaystyle f:I\to \mathbb {R} }defined on an intervalI⊆R{\displaystyle I\subseteq \mathbb {R} }is said to beoperator monotoneif for alln,{\displaystyle n,}and allA,B∈Hn{\displaystyle A,B\in \mathbf {H} _{n}}with eigenvalues inI,{\displaystyle I,}the following holds,A≥B⟹f(A)≥f(B),{\displaystyle A\geq B\implies f(A)\geq f(B),}where the inequalityA≥B{\displaystyle A\geq B}means that the operatorA−B≥0{\displaystyle A-B\geq 0}is positive semi-definite. One may check thatf(A)=A2{\displaystyle f(A)=A^{2}}is, in fact,notoperator monotone!
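A concrete numerical counterexample (matrices chosen by me) confirms that f(A) = A² is not operator monotone:

```python
import numpy as np

# A >= B >= 0, yet A^2 - B^2 is not positive semi-definite.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])

# A - B = [[1, 1], [1, 1]] has eigenvalues 0 and 2, so A >= B.
assert np.all(np.linalg.eigvalsh(A - B) >= -1e-12)

# A^2 - B^2 = [[4, 3], [3, 2]] has determinant -1, so it has a
# negative eigenvalue: squaring does not preserve the operator order.
diff_eigs = np.linalg.eigvalsh(A @ A - B @ B)
assert diff_eigs.min() < 0
```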
A functionf:I→R{\displaystyle f:I\to \mathbb {R} }is said to beoperator convexif for alln{\displaystyle n}and allA,B∈Hn{\displaystyle A,B\in \mathbf {H} _{n}}with eigenvalues inI,{\displaystyle I,}and0<λ<1{\displaystyle 0<\lambda <1}, the following holdsf(λA+(1−λ)B)≤λf(A)+(1−λ)f(B).{\displaystyle f(\lambda A+(1-\lambda )B)\leq \lambda f(A)+(1-\lambda )f(B).}Note that the operatorλA+(1−λ)B{\displaystyle \lambda A+(1-\lambda )B}has eigenvalues inI,{\displaystyle I,}sinceA{\displaystyle A}andB{\displaystyle B}have eigenvalues inI.{\displaystyle I.}
A functionf{\displaystyle f}isoperator concaveif−f{\displaystyle -f}is operator convex; that is, the inequality above forf{\displaystyle f}is reversed.
A functiong:I×J→R,{\displaystyle g:I\times J\to \mathbb {R} ,}defined on intervalsI,J⊆R{\displaystyle I,J\subseteq \mathbb {R} }is said to bejointly convexif for alln{\displaystyle n}and allA1,A2∈Hn{\displaystyle A_{1},A_{2}\in \mathbf {H} _{n}}with eigenvalues inI{\displaystyle I}and allB1,B2∈Hn{\displaystyle B_{1},B_{2}\in \mathbf {H} _{n}}with eigenvalues inJ,{\displaystyle J,}and any0≤λ≤1{\displaystyle 0\leq \lambda \leq 1}the following holdsg(λA1+(1−λ)A2,λB1+(1−λ)B2)≤λg(A1,B1)+(1−λ)g(A2,B2).{\displaystyle g(\lambda A_{1}+(1-\lambda )A_{2},\lambda B_{1}+(1-\lambda )B_{2})~\leq ~\lambda g(A_{1},B_{1})+(1-\lambda )g(A_{2},B_{2}).}
A functiong{\displaystyle g}isjointly concaveif−g{\displaystyle -g}is jointly convex, i.e. the inequality above forg{\displaystyle g}is reversed.
Given a functionf:R→R,{\displaystyle f:\mathbb {R} \to \mathbb {R} ,}the associatedtrace functiononHn{\displaystyle \mathbf {H} _{n}}is given byA↦Trf(A)=∑jf(λj),{\displaystyle A\mapsto \operatorname {Tr} f(A)=\sum _{j}f(\lambda _{j}),}whereA{\displaystyle A}has eigenvaluesλ{\displaystyle \lambda }andTr{\displaystyle \operatorname {Tr} }stands for atraceof the operator.
Letf:R→R{\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} }be continuous, and letnbe anyinteger. Then, ift↦f(t){\displaystyle t\mapsto f(t)}is monotone increasing, so isA↦Trf(A){\displaystyle A\mapsto \operatorname {Tr} f(A)}onHn.
Likewise, ift↦f(t){\displaystyle t\mapsto f(t)}isconvex, so isA↦Trf(A){\displaystyle A\mapsto \operatorname {Tr} f(A)}onHn, and it is strictly convex if f is strictly convex.
See the proof and discussion in [1], for example.
For−1≤p≤0{\displaystyle -1\leq p\leq 0}, the functionf(t)=−tp{\displaystyle f(t)=-t^{p}}is operator monotone and operator concave.
For0≤p≤1{\displaystyle 0\leq p\leq 1}, the functionf(t)=tp{\displaystyle f(t)=t^{p}}is operator monotone and operator concave.
For1≤p≤2{\displaystyle 1\leq p\leq 2}, the functionf(t)=tp{\displaystyle f(t)=t^{p}}is operator convex. Furthermore,
The original proof of this theorem is due toK. Löwner, who gave a necessary and sufficient condition forfto be operator monotone.[5]Anelementary proofof the theorem is discussed in [1] and a more general version of it in [6].
For all Hermitiann×nmatricesAandBand all differentiableconvex functionsf:R→R{\displaystyle \mathbb {R} \rightarrow \mathbb {R} }withderivativef′, or for all positive-definite Hermitiann×nmatricesAandBand all differentiable convex functionsf:(0,∞) →R{\displaystyle \mathbb {R} }, the following inequality holds,
Tr[f(A)−f(B)−(A−B)f′(B)]≥0.{\displaystyle \operatorname {Tr} [f(A)-f(B)-(A-B)f'(B)]\geq 0~.}
In either case, iffis strictly convex, equality holds if and only ifA=B.
A popular choice in applications isf(t) =tlogt, see below.
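For f(t) = t log t, Klein's inequality reduces to Tr[A log A − A log B − A + B] ≥ 0. A numeric spot check (my own setup, using an eigendecomposition-based matrix logarithm, valid for positive definite Hermitian matrices):

```python
import numpy as np

def logm_h(M):
    """Matrix log of a positive definite Hermitian matrix via eigh."""
    w, Q = np.linalg.eigh(M)
    return (Q * np.log(w)) @ Q.conj().T

rng = np.random.default_rng(2)

def rand_pd(n):
    """A well-conditioned random positive definite matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

# Klein's inequality with f(t) = t log t:
# Tr[A log A - A log B - A + B] >= 0, equality iff A = B.
A, B = rand_pd(4), rand_pd(4)
val = np.trace(A @ logm_h(A) - A @ logm_h(B) - A + B)
assert val >= -1e-10
```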
LetC=A−B{\displaystyle C=A-B}so that, fort∈(0,1){\displaystyle t\in (0,1)},
varies fromB{\displaystyle B}toA{\displaystyle A}.
Define
By convexity and monotonicity of trace functions,F(t){\displaystyle F(t)}is convex, and so for allt∈(0,1){\displaystyle t\in (0,1)},
which is,
and, in fact, the right hand side is monotone decreasing int{\displaystyle t}.
Taking the limitt→0{\displaystyle t\to 0}yields,
which with rearrangement and substitution is Klein's inequality:
Note that iff(t){\displaystyle f(t)}is strictly convex andC≠0{\displaystyle C\neq 0}, thenF(t){\displaystyle F(t)}is strictly convex. The final assertion follows from this and the fact thatF(t)−F(0)t{\displaystyle {\tfrac {F(t)-F(0)}{t}}}is monotone decreasing int{\displaystyle t}.
In 1965, S. Golden[7]and C.J. Thompson[8]independently discovered that
For any matricesA,B∈Hn{\displaystyle A,B\in \mathbf {H} _{n}},Tr⁡eA+B≤Tr⁡(eAeB).{\displaystyle \operatorname {Tr} e^{A+B}\leq \operatorname {Tr} \left(e^{A}e^{B}\right).}
This inequality can be generalized for three operators:[9]for non-negative operatorsA,B,C∈Hn+{\displaystyle A,B,C\in \mathbf {H} _{n}^{+}},
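The two-operator Golden–Thompson inequality, Tr e^{A+B} ≤ Tr(e^A e^B), can be spot-checked numerically; the test matrices below are my own:

```python
import numpy as np

def expm_h(M):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return (Q * np.exp(w)) @ Q.conj().T

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = (X + X.T) / 2, (Y + Y.T) / 2      # random Hermitian matrices

lhs = np.trace(expm_h(A + B))
rhs = np.trace(expm_h(A) @ expm_h(B))
assert lhs <= rhs + 1e-10                # Golden-Thompson holds
```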
LetR,F∈Hn{\displaystyle R,F\in \mathbf {H} _{n}}be such that Tr eR= 1.
Definingg= TrFeR, we have
The proof of this inequality follows from the above combined withKlein's inequality. Takef(x) = exp(x),A=R+F, andB=R+gI.[10]
LetH{\displaystyle H}be a self-adjoint operator such thate−H{\displaystyle e^{-H}}istrace class. Then for anyγ≥0{\displaystyle \gamma \geq 0}withTrγ=1,{\displaystyle \operatorname {Tr} \gamma =1,}
with equality if and only ifγ=exp(−H)/Trexp(−H).{\displaystyle \gamma =\exp(-H)/\operatorname {Tr} \exp(-H).}
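In its standard form, the statement above reads Tr(γH) + Tr(γ log γ) ≥ −log Tr e^{−H}, the Gibbs variational principle. A numeric sketch (my own setup) checking both the bound and the equality case at the Gibbs state:

```python
import numpy as np

def expm_h(M):
    w, Q = np.linalg.eigh(M)
    return (Q * np.exp(w)) @ Q.conj().T

def logm_h(M):
    w, Q = np.linalg.eigh(M)
    return (Q * np.log(w)) @ Q.conj().T

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
H = (X + X.T) / 2                        # random Hermitian "Hamiltonian"

# A random density matrix gamma: positive definite, unit trace.
Y = rng.standard_normal((4, 4))
gamma = Y @ Y.T + 0.1 * np.eye(4)
gamma /= np.trace(gamma)

free = np.trace(gamma @ H) + np.trace(gamma @ logm_h(gamma))
bound = -np.log(np.trace(expm_h(-H)))
assert free >= bound - 1e-10             # Gibbs variational principle

# Equality is attained at the Gibbs state exp(-H) / Tr exp(-H).
gibbs = expm_h(-H) / np.trace(expm_h(-H))
free_g = np.trace(gibbs @ H) + np.trace(gibbs @ logm_h(gibbs))
assert abs(free_g - bound) < 1e-8
```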
The following theorem was proved byE. H. Liebin [9]. It proves and generalizes a conjecture ofE. P. Wigner,M. M. Yanase, andFreeman Dyson.[11]Six years later other proofs were given by T. Ando[12]and B. Simon,[3]and several more have been given since then.
For allm×n{\displaystyle m\times n}matricesK{\displaystyle K}, and allq{\displaystyle q}andr{\displaystyle r}such that0≤q≤1{\displaystyle 0\leq q\leq 1}and0≤r≤1{\displaystyle 0\leq r\leq 1}, withq+r≤1{\displaystyle q+r\leq 1}, the real valued map onHm+×Hn+{\displaystyle \mathbf {H} _{m}^{+}\times \mathbf {H} _{n}^{+}}given byF(A,B)=Tr⁡(K∗AqKBr){\displaystyle F(A,B)=\operatorname {Tr} \left(K^{*}A^{q}KB^{r}\right)}is jointly concave.
HereK∗{\displaystyle K^{*}}stands for theadjoint operatorofK.{\displaystyle K.}
For a fixed Hermitian matrixL∈Hn{\displaystyle L\in \mathbf {H} _{n}}, the functionf(A)=Tr⁡exp⁡(L+log⁡A){\displaystyle f(A)=\operatorname {Tr} \exp \left(L+\log A\right)}
is concave onHn++{\displaystyle \mathbf {H} _{n}^{++}}.
The theorem and proof are due to E. H. Lieb,[9]Thm 6, where he obtains this theorem as a corollary of Lieb's concavity Theorem.
The most direct proof is due to H. Epstein;[13]seeM. B. Ruskai's papers[14][15]for a review of this argument.
T. Ando's proof[12]ofLieb's concavity theoremled to the following significant complement to it:
For allm×n{\displaystyle m\times n}matricesK{\displaystyle K}, and all1≤q≤2{\displaystyle 1\leq q\leq 2}and0≤r≤1{\displaystyle 0\leq r\leq 1}withq−r≥1{\displaystyle q-r\geq 1}, the real valued map onHm++×Hn++{\displaystyle \mathbf {H} _{m}^{++}\times \mathbf {H} _{n}^{++}}given byF(A,B)=Tr⁡(K∗AqKB−r){\displaystyle F(A,B)=\operatorname {Tr} \left(K^{*}A^{q}KB^{-r}\right)}
is convex.
For two operatorsA,B∈Hn++{\displaystyle A,B\in \mathbf {H} _{n}^{++}}define the following map
Fordensity matricesρ{\displaystyle \rho }andσ{\displaystyle \sigma }, the mapR(ρ∥σ)=S(ρ∥σ){\displaystyle R(\rho \parallel \sigma )=S(\rho \parallel \sigma )}is the Umegaki'squantum relative entropy.
Note that the non-negativity ofR(A∥B){\displaystyle R(A\parallel B)}follows from Klein's inequality withf(t)=tlogt{\displaystyle f(t)=t\log t}.
The mapR(A∥B):Hn++×Hn++→R{\displaystyle R(A\parallel B):\mathbf {H} _{n}^{++}\times \mathbf {H} _{n}^{++}\rightarrow \mathbf {R} }is jointly convex.
For all0<p<1{\displaystyle 0<p<1},(A,B)↦Tr(B1−pAp){\displaystyle (A,B)\mapsto \operatorname {Tr} (B^{1-p}A^{p})}is jointly concave, byLieb's concavity theorem, and thus
is convex. But
and convexity is preserved in the limit.
The proof is due to G. Lindblad.[16]
The operator version ofJensen's inequalityis due to C. Davis.[17]
A continuous, real functionf{\displaystyle f}on an intervalI{\displaystyle I}satisfiesJensen's Operator Inequalityif the following holds:f(∑kAk∗XkAk)≤∑kAk∗f(Xk)Ak,{\displaystyle f\left(\sum _{k}A_{k}^{*}X_{k}A_{k}\right)\leq \sum _{k}A_{k}^{*}f(X_{k})A_{k},}
for operators{Ak}k{\displaystyle \{A_{k}\}_{k}}with∑kAk∗Ak=1{\displaystyle \sum _{k}A_{k}^{*}A_{k}=1}and forself-adjoint operators{Xk}k{\displaystyle \{X_{k}\}_{k}}withspectrumonI{\displaystyle I}.
See [17][18] for the proofs of the following two theorems.
Letfbe acontinuous functiondefined on an intervalIand letmandnbe natural numbers. Iffis convex, we then have the inequality
for all (X1, ... ,Xn) self-adjointm×mmatrices with spectra contained inIand
all (A1, ... ,An) ofm×mmatrices with
Conversely, if the above inequality is satisfied for somenandm, wheren> 1, thenfis convex.
For a continuous functionf{\displaystyle f}defined on an intervalI{\displaystyle I}the following conditions are equivalent:
for all(X1,…,Xn){\displaystyle (X_{1},\ldots ,X_{n})}bounded, self-adjoint operators on an arbitraryHilbert spaceH{\displaystyle {\mathcal {H}}}with
spectra contained inI{\displaystyle I}and all(A1,…,An){\displaystyle (A_{1},\ldots ,A_{n})}onH{\displaystyle {\mathcal {H}}}with∑k=1nAk∗Ak=1.{\displaystyle \sum _{k=1}^{n}A_{k}^{*}A_{k}=1.}
every self-adjoint operatorX{\displaystyle X}with spectrum inI{\displaystyle I}.
E. H. Lieb and W. E. Thirring proved the following inequality in 1976:[19]For anyA≥0,{\displaystyle A\geq 0,}B≥0{\displaystyle B\geq 0}andr≥1,{\displaystyle r\geq 1,}Tr((BAB)r)≤Tr(BrArBr).{\displaystyle \operatorname {Tr} ((BAB)^{r})~\leq ~\operatorname {Tr} (B^{r}A^{r}B^{r}).}
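For integer r the inequality can be checked directly with matrix products; the sketch below (my own test setup) verifies the r = 2 case:

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_psd(n):
    """A random positive semi-definite matrix X X^T."""
    X = rng.standard_normal((n, n))
    return X @ X.T

# Lieb-Thirring with r = 2: Tr((BAB)^2) <= Tr(B^2 A^2 B^2).
A, B = rand_psd(4), rand_psd(4)
BAB = B @ A @ B
lhs = np.trace(BAB @ BAB)
rhs = np.trace(B @ B @ A @ A @ B @ B)
assert lhs <= rhs * (1 + 1e-10)
```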
In 1990[20]H. Araki generalized the above inequality to the following one: For anyA≥0,{\displaystyle A\geq 0,}B≥0{\displaystyle B\geq 0}andq≥0,{\displaystyle q\geq 0,}Tr((BAB)rq)≤Tr((BrArBr)q),{\displaystyle \operatorname {Tr} ((BAB)^{rq})~\leq ~\operatorname {Tr} ((B^{r}A^{r}B^{r})^{q}),}forr≥1,{\displaystyle r\geq 1,}andTr((BrArBr)q)≤Tr((BAB)rq),{\displaystyle \operatorname {Tr} ((B^{r}A^{r}B^{r})^{q})~\leq ~\operatorname {Tr} ((BAB)^{rq}),}for0≤r≤1.{\displaystyle 0\leq r\leq 1.}
There are several other inequalities close to the Lieb–Thirring inequality, such as the following:[21]for anyA≥0,{\displaystyle A\geq 0,}B≥0{\displaystyle B\geq 0}andα∈[0,1],{\displaystyle \alpha \in [0,1],}Tr(BAαBBA1−αB)≤Tr(B2AB2),{\displaystyle \operatorname {Tr} (BA^{\alpha }BBA^{1-\alpha }B)~\leq ~\operatorname {Tr} (B^{2}AB^{2}),}and even more generally:[22]for anyA≥0,{\displaystyle A\geq 0,}B≥0,{\displaystyle B\geq 0,}r≥1/2{\displaystyle r\geq 1/2}andc≥0,{\displaystyle c\geq 0,}Tr((BAB2cAB)r)≤Tr((Bc+1A2Bc+1)r).{\displaystyle \operatorname {Tr} ((BAB^{2c}AB)^{r})~\leq ~\operatorname {Tr} ((B^{c+1}A^{2}B^{c+1})^{r}).}The above inequality generalizes the previous one, as can be seen by exchangingA{\displaystyle A}byB2{\displaystyle B^{2}}andB{\displaystyle B}byA(1−α)/2{\displaystyle A^{(1-\alpha )/2}}withα=2c/(2c+2){\displaystyle \alpha =2c/(2c+2)}and using the cyclicity of the trace, leading toTr((BAαBBA1−αB)r)≤Tr((B2AB2)r).{\displaystyle \operatorname {Tr} ((BA^{\alpha }BBA^{1-\alpha }B)^{r})~\leq ~\operatorname {Tr} ((B^{2}AB^{2})^{r}).}
Additionally, building upon the Lieb-Thirring inequality the following inequality was derived:[23]For anyA,B∈Hn,T∈Cn×n{\displaystyle A,B\in \mathbf {H} _{n},T\in \mathbb {C} ^{n\times n}}and all1≤p,q≤∞{\displaystyle 1\leq p,q\leq \infty }with1/p+1/q=1{\displaystyle 1/p+1/q=1}, it holds that|Tr(TAT∗B)|≤Tr(T∗T|A|p)1pTr(TT∗|B|q)1q.{\displaystyle |\operatorname {Tr} (TAT^{*}B)|~\leq ~\operatorname {Tr} (T^{*}T|A|^{p})^{\frac {1}{p}}\operatorname {Tr} (TT^{*}|B|^{q})^{\frac {1}{q}}.}
E. Effros proved the following theorem in [24].
Iff(x){\displaystyle f(x)}is an operator convex function, andL{\displaystyle L}andR{\displaystyle R}are commuting bounded linear operators, i.e. the commutator[L,R]=LR−RL=0{\displaystyle [L,R]=LR-RL=0}, theperspective
is jointly convex, i.e. ifL=λL1+(1−λ)L2{\displaystyle L=\lambda L_{1}+(1-\lambda )L_{2}}andR=λR1+(1−λ)R2{\displaystyle R=\lambda R_{1}+(1-\lambda )R_{2}}with[Li,Ri]=0{\displaystyle [L_{i},R_{i}]=0}(i=1,2),0≤λ≤1{\displaystyle 0\leq \lambda \leq 1},
Ebadian et al. later extended the inequality to the case whereL{\displaystyle L}andR{\displaystyle R}do not commute.[25]
Von Neumann's trace inequality, named after its originatorJohn von Neumann, states that for anyn×n{\displaystyle n\times n}complex matricesA{\displaystyle A}andB{\displaystyle B}withsingular valuesα1≥α2≥⋯≥αn{\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}}andβ1≥β2≥⋯≥βn{\displaystyle \beta _{1}\geq \beta _{2}\geq \cdots \geq \beta _{n}}respectively,[26]|Tr(AB)|≤∑i=1nαiβi,{\displaystyle |\operatorname {Tr} (AB)|~\leq ~\sum _{i=1}^{n}\alpha _{i}\beta _{i}\,,}with equality if and only ifA{\displaystyle A}andB†{\displaystyle B^{\dagger }}share singular vectors.[27]
A simple corollary to this is the following result:[28]ForHermitiann×n{\displaystyle n\times n}positive semi-definite complex matricesA{\displaystyle A}andB{\displaystyle B}where now theeigenvaluesare sorted decreasingly (a1≥a2≥⋯≥an{\displaystyle a_{1}\geq a_{2}\geq \cdots \geq a_{n}}andb1≥b2≥⋯≥bn,{\displaystyle b_{1}\geq b_{2}\geq \cdots \geq b_{n},}respectively),∑i=1naibn−i+1≤Tr(AB)≤∑i=1naibi.{\displaystyle \sum _{i=1}^{n}a_{i}b_{n-i+1}~\leq ~\operatorname {Tr} (AB)~\leq ~\sum _{i=1}^{n}a_{i}b_{i}\,.}
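A quick numeric check of von Neumann's inequality (random matrices of my choosing):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Singular values come back sorted in descending order from np.linalg.svd.
alpha = np.linalg.svd(A, compute_uv=False)
beta = np.linalg.svd(B, compute_uv=False)

# |Tr(AB)| <= sum_i alpha_i beta_i
assert abs(np.trace(A @ B)) <= alpha @ beta + 1e-10
```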
|
https://en.wikipedia.org/wiki/Von_Neumann%27s_trace_inequality
|
Inmathematics, awavelet seriesis a representation of asquare-integrable(real- orcomplex-valued)functionby a certainorthonormalseriesgenerated by awavelet. This article provides a formal, mathematical definition of anorthonormal waveletand of theintegral wavelet transform.[1][2][3][4]
A functionψ∈L2(R){\displaystyle \psi \,\in \,L^{2}(\mathbb {R} )}is called anorthonormal waveletif it can be used to define aHilbert basis, that is, acomplete orthonormal systemfor theHilbert spaceofsquare-integrable functionson the real line.
The Hilbert basis is constructed as the family of functions{ψjk:j,k∈Z}{\displaystyle \{\psi _{jk}:\,j,\,k\,\in \,\mathbb {Z} \}}by means ofdyadictranslationsanddilationsofψ{\displaystyle \psi \,},ψjk(x)=2j2ψ(2jx−k),{\displaystyle \psi _{jk}(x)=2^{\frac {j}{2}}\psi \left(2^{j}x-k\right),}for integersj,k∈Z{\displaystyle j,\,k\,\in \,\mathbb {Z} }.
If, under the standardinner productonL2(R){\displaystyle L^{2}\left(\mathbb {R} \right)},⟨f,g⟩=∫−∞∞f(x)g(x)¯dx,{\displaystyle \langle f,g\rangle =\int _{-\infty }^{\infty }f(x){\overline {g(x)}}dx,}this family is orthonormal, then it is an orthonormal system:⟨ψjk,ψlm⟩=∫−∞∞ψjk(x)ψlm(x)¯dx,=δjlδkm,{\displaystyle {\begin{aligned}\langle \psi _{jk},\psi _{lm}\rangle &=\int _{-\infty }^{\infty }\psi _{jk}(x){\overline {\psi _{lm}(x)}}dx,\\&=\delta _{jl}\delta _{km},\end{aligned}}}whereδjl{\displaystyle \delta _{jl}\,}is theKronecker delta.
Completeness is satisfied if every functionf∈L2(R){\displaystyle f\,\in \,L^{2}\left(\mathbb {R} \right)}may be expanded in the basis asf(x)=∑j,k=−∞∞cjkψjk(x){\displaystyle f(x)=\sum _{j,k=-\infty }^{\infty }c_{jk}\psi _{jk}(x)}
with convergence of the series understood to beconvergence in norm. Such a representation off{\displaystyle f}is known as awavelet series. This implies that an orthonormal wavelet isself-dual.
Theintegral wavelet transformis theintegral transformdefined as[Wψf](a,b)=1|a|∫−∞∞ψ(x−ba)¯f(x)dx{\displaystyle \left[W_{\psi }f\right](a,b)={\frac {1}{\sqrt {|a|}}}\int _{-\infty }^{\infty }{\overline {\psi \left({\frac {x-b}{a}}\right)}}f(x)dx\,}Thewavelet coefficientscjk{\displaystyle c_{jk}}are then given bycjk=[Wψf](2−j,k2−j){\displaystyle c_{jk}=\left[W_{\psi }f\right]\left(2^{-j},k2^{-j}\right)}
Here,a=2−j{\displaystyle a=2^{-j}}is called thebinary dilationordyadic dilation, andb=k2−j{\displaystyle b=k2^{-j}}is thebinaryordyadic position.
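The orthonormality relation ⟨ψjk, ψlm⟩ = δjl δkm can be verified concretely for the Haar wavelet, whose dyadic family is piecewise constant and therefore integrates exactly on a sufficiently fine grid (the discretization below is my own):

```python
import numpy as np

def haar(x):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((0 <= x) & (x < 0.5), 1.0,
                    np.where((0.5 <= x) & (x < 1.0), -1.0, 0.0))

def psi(j, k, x):
    """Dyadic translate/dilate: psi_jk(x) = 2^{j/2} psi(2^j x - k)."""
    return 2 ** (j / 2) * haar(2 ** j * x - k)

dx = 2.0 ** -10
x = np.arange(0, 1, dx) + dx / 2     # cell midpoints resolve all breakpoints

pairs = [(0, 0), (1, 0), (1, 1), (2, 3)]
for (j, k) in pairs:
    for (l, m) in pairs:
        ip = np.sum(psi(j, k, x) * psi(l, m, x)) * dx
        expected = 1.0 if (j, k) == (l, m) else 0.0
        assert abs(ip - expected) < 1e-9
```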
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape, imposing a restriction on choosing suitable basis functions. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on theuncertainty principleof signal processing,ΔtΔω≥12,{\displaystyle \Delta t\,\Delta \omega \geq {\frac {1}{2}},}
wheret{\displaystyle t}represents time andω{\displaystyle \omega }angular frequency(ω=2πf{\displaystyle \omega =2\pi f}, wheref{\displaystyle f}isordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysiswindowsis chosen, the larger is the value ofΔt{\displaystyle \Delta t}.
WhenΔt{\displaystyle \Delta t}is large
WhenΔt{\displaystyle \Delta t}is small
In other words, the basis functionψ{\displaystyle \psi }can be regarded as an impulse response of a system with which the functionx(t){\displaystyle x(t)}has been filtered. The transformed signal provides information about the time and the frequency. Therefore, wavelet-transformation contains information similar to theshort-time-Fourier-transformation, but with additional special properties of the wavelets, which show up at the resolution in time at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for theFourier transformand the wavelet transform is shown below. Note however, that the frequency resolution is decreasing for increasing frequencies while the temporal resolution increases. This consequence of theFourier uncertainty principleis not correctly displayed in the Figure.
This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varying functions, the frequency resolution is remarkable.
Another example: The analysis of three superposed sinusoidal signalsy(t)=sin(2πf0t)+sin(4πf0t)+sin(8πf0t){\displaystyle y(t)\;=\;\sin(2\pi f_{0}t)\;+\;\sin(4\pi f_{0}t)\;+\;\sin(8\pi f_{0}t)}with STFT and wavelet-transformation.
Wavelet compressionis a form ofdata compressionwell suited forimage compression(sometimes alsovideo compressionandaudio compression). Notable implementations areJPEG 2000,DjVuandECWfor still images,JPEG XS,CineForm, and the BBC'sDirac. The goal is to store image data in as little space as possible in afile. Wavelet compression can be eitherlosslessorlossy.[5]
Using a wavelet transform, the wavelet compression methods are adequate for representingtransients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespreaddiscrete cosine transform, had been used.
Discrete wavelet transformhas been successfully applied to the compression of electrocardiograph (ECG) signals.[6]In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is exploited by employing linear prediction.
Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well. But smooth, periodic signals are better compressed using other methods, particularly traditionalharmonic analysisin thefrequency domainwithFourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, theVorbisaudio codecprimarily uses themodified discrete cosine transformto compress audio (which is generally smooth and periodic), however allows the addition of a hybrid waveletfilter bankfor improvedreproductionof transients.[7]
SeeDiary Of An x264 Developer: The problems with wavelets(2010) for discussion of practical issues of current methods using wavelets for video compression.
First a wavelet transform is applied. This produces as manycoefficientsas there arepixelsin the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is calledtransform coding. After that, the coefficients arequantizedand the quantized values areentropy encodedand/orrun length encoded.
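The pipeline just described (transform, then quantize, then entropy-code) can be sketched with a single separable Haar level standing in for the wavelet transform; the image and quantizer step size are my own toy choices. Perfect reconstruction without quantization confirms that the transform alone compresses nothing:

```python
import numpy as np

rng = np.random.default_rng(7)
# A smooth-ish 8x8 test "image" (double cumulative sum of noise).
img = np.cumsum(np.cumsum(rng.standard_normal((8, 8)), 0), 1)

def haar1d(a, axis):
    """One orthonormal Haar analysis step along the given axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)    # low-pass (averages)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)    # high-pass (details)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def ihaar1d(a, axis):
    """Inverse of haar1d along the given axis."""
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    lo, hi = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2] = (lo + hi) / np.sqrt(2)
    out[1::2] = (lo - hi) / np.sqrt(2)
    return np.moveaxis(out, 0, axis)

# Transform: as many coefficients as pixels -- no compression yet.
coeffs = haar1d(haar1d(img, 0), 1)
assert coeffs.shape == img.shape
recon0 = ihaar1d(ihaar1d(coeffs, 0), 1)
assert np.allclose(recon0, img)              # lossless without quantization

# Uniform quantization (step 2); the entropy coder would exploit the
# many zeros among the quantized detail coefficients.
q = np.round(coeffs / 2.0)
recon = ihaar1d(ihaar1d(q * 2.0, 0), 1)
# Orthonormal transform: pixel-domain error bounded by coefficient error.
assert np.linalg.norm(recon - img) <= 8.0 + 1e-9
```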
A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints".[8][9]
For most natural images, the spectrum density of lower frequency is higher.[10]As a result, information of the low frequency signal (reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression:
Wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below:
The transformation system contains two analysis filters (a low pass filterh0(n){\displaystyle h_{0}(n)}and a high pass filterh1(n){\displaystyle h_{1}(n)}), a decimation process, an interpolation process, and two synthesis filters (g0(n){\displaystyle g_{0}(n)}andg1(n){\displaystyle g_{1}(n)}). The compression and reconstruction system generally involves low frequency components, namely the analysis filterh0(n){\displaystyle h_{0}(n)}for image compression and the synthesis filterg0(n){\displaystyle g_{0}(n)}for reconstruction. To evaluate such a system, we can input an impulseδ(n−ni){\displaystyle \delta (n-n_{i})}and observe its reconstructionh(n−ni){\displaystyle h(n-n_{i})}; the optimal wavelets are those that bring minimum shift variance and sidelobe toh(n−ni){\displaystyle h(n-n_{i})}. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters:[11]
By observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e. it is less shift variant).
Another important issue for image compression and reconstruction is the system's oscillatory behavior, which might lead to severe undesired artifacts in the reconstructed image. To minimize these artifacts, the wavelet filters should have a large peak to sidelobe ratio.
So far we have discussed the one-dimensional transformation of the image compression system. This analysis can be extended to two dimensions, for which the more general term, shiftable multiscale transforms, has been proposed.[12]
As mentioned earlier, impulse response can be used to evaluate the image compression/reconstruction system.
For the input sequencex(n)=δ(n−ni){\displaystyle x(n)=\delta (n-n_{i})}, the reference signalr1(n){\displaystyle r_{1}(n)}after one level of decomposition isx(n)∗h0(n){\displaystyle x(n)*h_{0}(n)}decimated by a factor of two, whereh0(n){\displaystyle h_{0}(n)}is a low pass filter. Similarly, the next reference signalr2(n){\displaystyle r_{2}(n)}is obtained fromr1(n)∗h0(n){\displaystyle r_{1}(n)*h_{0}(n)}decimated by a factor of two. After L levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every2L{\displaystyle 2^{L}}samples:hA(L)(n,ni)=fh0(L)(n−ni/2L){\displaystyle h_{A}^{(L)}(n,n_{i})=f_{h0}^{(L)}(n-n_{i}/2^{L})}.
On the other hand, to reconstruct the signal x(n), we can consider a reference signalrL(n)=δ(n−nj){\displaystyle r_{L}(n)=\delta (n-n_{j})}. If the detail signalsdi(n){\displaystyle d_{i}(n)}are equal to zero for1≤i≤L{\displaystyle 1\leq i\leq L}, then the reference signal at the previous stage (L−1{\displaystyle L-1}stage) isrL−1(n)=g0(n−2nj){\displaystyle r_{L-1}(n)=g_{0}(n-2n_{j})}, which is obtained by interpolatingrL(n){\displaystyle r_{L}(n)}and convoluting withg0(n){\displaystyle g_{0}(n)}. Similarly, the procedure is iterated to obtain the reference signalr(n){\displaystyle r(n)}at stageL−2,L−3,....,1{\displaystyle L-2,L-3,....,1}. After L iterations, the synthesis impulse response is calculated:hs(L)(n,ni)=fg0(L)(n/2L−nj){\displaystyle h_{s}^{(L)}(n,n_{i})=f_{g0}^{(L)}(n/2^{L}-n_{j})}, which relates the reference signalrL(n){\displaystyle r_{L}(n)}and the reconstructed signal.
To obtain the overall L level analysis/synthesis system, the analysis and synthesis responses are combined as below:
hAS(L)(n,ni)=∑kfh0(L)(k−ni/2L)fg0(L)(n/2L−k){\displaystyle h_{AS}^{(L)}(n,n_{i})=\sum _{k}f_{h0}^{(L)}(k-n_{i}/2^{L})f_{g0}^{(L)}(n/2^{L}-k)}.
Finally, the peak to first sidelobe ratio and the average second sidelobe of the overall impulse responsehAS(L)(n,ni){\displaystyle h_{AS}^{(L)}(n,n_{i})}can be used to evaluate the wavelet image compression performance.
Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the commonMorlet waveletis mathematically identical to ashort-time Fourier transformusing a Gaussian window function.[13]The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses.[14]
The wavelet transform can provide us with the frequency of the signals and the time associated to those frequencies, making it very convenient for its application in numerous fields. For instance, signal processing of accelerations for gait analysis,[15]for fault detection,[16]for the analysis of seasonal displacements of landslides,[17]for design of low power pacemakers and also in ultra-wideband (UWB) wireless communications.[18][19][20]
Applying the following discretization of frequency and time:
leads to wavelets of the form (the discrete formula for the basis wavelet):
Such discrete wavelets can be used for the transformation:
As is apparent from the wavelet-transform representation (shown below),
wherec{\displaystyle c}is the scaling factor andτ{\displaystyle \tau }is the time shift factor,
and as already mentioned in this context, the wavelet transform corresponds to aconvolutionof a functiony(t){\displaystyle y(t)}with a wavelet function. A convolution can be implemented as a multiplication in the frequency domain, which leads to the following implementation approach:
For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. Time-causal wavelet representations have been developed by Szu et al.[23]and Lindeberg,[24]with the latter method also involving a memory-efficient time-recursive implementation.
Synchro-squeezed transform can significantly enhance temporal and frequency resolution of time-frequency representation obtained using conventional wavelet transform.[25][26]
|
https://en.wikipedia.org/wiki/Wavelet_compression
|
Seq2seqis a family ofmachine learningapproaches used fornatural language processing.[1]Applications includelanguage translation,[2]image captioning,[3]conversational models,[4]speech recognition,[5]andtext summarization.[6]Seq2seq usessequence transformation: it turns one sequence into another sequence.
One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say: 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.'
seq2seq is an approach to machine translation (or more generally,sequence transduction) with roots in information theory, where communication is understood as an encode-transmit-decode process, and machine translation can be studied as a special case of communication. This viewpoint was elaborated, for example, in thenoisy channel modelof machine translation.
In practice, seq2seq maps an input sequence into a real-numerical vector by using a neural network (theencoder), and then maps it back to an output sequence using another neural network (thedecoder).
The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see[2][1]for previous papers). The papers most commonly cited as the originators that produced seq2seq are two papers from 2014.[2][1]
In the seq2seq as proposed by them, both the encoder and the decoder wereLSTMs. This had the "bottleneck" problem, since the encoding vector has a fixed size, so for long input sequences, information would tend to be lost, as they are difficult to fit into the fixed-length encoding vector. Theattention mechanism, proposed in 2014,[7]resolved the bottleneck problem. They called their modelRNNsearch, as it "emulates searching through a source sentence during decoding a translation".
A problem with seq2seq models at this point was that recurrent neural networks are difficult to parallelize. The 2017 publication ofTransformers[8]resolved the problem by replacing the encodingRNNwith self-attention Transformer blocks ("encoder blocks"), and the decoding RNN with cross-attention causally-masked Transformer blocks ("decoder blocks").
One of the papers cited as the originator of seq2seq is (Sutskever et al. 2014),[1] published at Google Brain while the authors were on Google's machine translation project. The research allowed Google to overhaul Google Translate into Google Neural Machine Translation in 2016.[1][9] Tomáš Mikolov claims to have developed the idea (before joining Google Brain) of using a "neural language model on pairs of sentences... and then [generating] translation after seeing the first sentence", which he equates with seq2seq machine translation, and to have mentioned the idea to Ilya Sutskever and Quoc Le (while at Google Brain), who failed to acknowledge him in their paper.[10] Mikolov had worked on RNNLM (using RNNs for language modelling) for his PhD thesis,[11] and is more notable for developing word2vec.
The main reference for this section is [12].
The encoder is responsible for processing the input sequence and capturing its essential information, which is stored as the hidden state of the network and, in a model with an attention mechanism, a context vector. The context vector is a weighted sum of the input hidden states and is generated for every time step in the output sequence.
The decoder takes the context vector and hidden states from the encoder and generates the final output sequence. The decoder operates autoregressively, producing one element of the output sequence at a time. At each step, it considers the previously generated elements, the context vector, and the input sequence information to predict the next element in the output sequence. Specifically, in a model with an attention mechanism, the context vector and the hidden state are concatenated to form an attention hidden vector, which is used as input for the decoder.
The seq2seq method developed in the early 2010s uses two neural networks: an encoder network converts an input sentence into numerical vectors, and a decoder network converts those vectors into sentences in the target language. The attention mechanism was grafted onto this structure in 2014. Later, it was refined into the encoder-decoder Transformer architecture of 2017.
There is a subtle difference between training and prediction. During training time, both the input and the output sequences are known. During prediction time, only the input sequence is known, and the output sequence must be decoded by the network itself.
Specifically, consider an input sequence $x_{1:n}$ and output sequence $y_{1:m}$. The encoder would process the input $x_{1:n}$ step by step. After that, the decoder would take the output from the encoder, as well as the <bos> token, as input, and produce a prediction ${\hat{y}}_1$. Now, the question is: what should be input to the decoder in the next step?
A standard method for training is "teacher forcing". In teacher forcing, no matter what is output by the decoder, the next input to the decoder is always the reference. That is, even if ${\hat{y}}_1 \neq y_1$, the next input to the decoder is still $y_1$, and so on.
During prediction time, the "teacher" $y_{1:m}$ would be unavailable. Therefore, the input to the decoder must be ${\hat{y}}_1$, then ${\hat{y}}_2$, and so on.
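The two regimes can be sketched as follows. This is a minimal illustration, assuming a toy one-step decoder `step` (here a hypothetical stand-in that just increments the previous token); a real seq2seq decoder would also condition on the encoder output and its own hidden state.

```python
# Toy stand-in for one decoder step: maps the previous token (an int)
# to a "prediction". Purely illustrative.
def step(prev_token):
    return prev_token + 1

def decode_teacher_forced(reference, bos=0):
    # Teacher forcing: the decoder's input at step t is always the
    # reference token y_{t-1}, regardless of what the decoder produced.
    inputs = [bos] + reference[:-1]
    return [step(tok) for tok in inputs]

def decode_free_running(length, bos=0):
    # Prediction time: the decoder must feed back its own outputs.
    preds, prev = [], bos
    for _ in range(length):
        prev = step(prev)
        preds.append(prev)
    return preds
```

With the toy `step`, `decode_teacher_forced([5, 6, 7])` consumes inputs `[0, 5, 6]`, while `decode_free_running(3)` consumes whatever it just produced; the gap between the two input distributions is exactly the exposure bias discussed below.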
It is found that if a model is trained purely by teacher forcing, its performance degrades during prediction time, since generation based on the model's own output differs from generation based on the teacher's output. This is called exposure bias, or a train/test distribution shift. A 2015 paper recommends randomly switching between teacher forcing and free running during training.[13]
The attention mechanism is an enhancement introduced by Bahdanau et al. in 2014 to address a limitation of the basic seq2seq architecture, where for longer input sequences the encoder's hidden state output becomes less informative for the decoder. It enables the model to selectively focus on different parts of the input sequence during the decoding process. At each decoder step, an alignment model calculates the attention score using the current decoder state and all of the attention hidden vectors as input. An alignment model is another neural network, trained jointly with the seq2seq model, that calculates how well an input, represented by a hidden state, matches the previous output, represented by an attention hidden state. A softmax function is then applied to the attention scores to get the attention weights.
In some models, the encoder states are fed directly into an activation function, removing the need for an alignment model. The activation function receives one decoder state and one encoder state and returns a scalar value of their relevance.
Consider the seq2seq English-to-French translation task. To be concrete, let us consider the translation of "the zone of international control <end>", which should translate to "la zone de contrôle international <end>". Here, we use the special <end> token as a control character to delimit the end of input for both the encoder and the decoder.
An input sequence of text $x_0, x_1, \dots$ is processed by a neural network (which can be an LSTM, a Transformer encoder, or some other network) into a sequence of real-valued vectors $h_0, h_1, \dots$, where $h$ stands for "hidden vector".
After the encoder has finished processing, the decoder starts operating over the hidden vectors, to produce an output sequence $y_0, y_1, \dots$ autoregressively. That is, it always takes as input both the hidden vectors produced by the encoder and what the decoder itself has produced before, to produce the next output word:
Here, we use the special <start> token as a control character to delimit the start of input for the decoder. The decoding terminates as soon as "<end>" appears in the decoder output.
As hand-crafting weights defeats the purpose of machine learning, the model must compute the attention weights on its own. Taking an analogy from the language of database queries, we make the model construct a triple of vectors: key, query, and value. The rough idea is that we have a "database" in the form of a list of key-value pairs. The decoder sends in a query and obtains a reply in the form of a weighted sum of the values, where the weight is proportional to how closely the query resembles each key.
The decoder first processes the "<start>" input partially, to obtain an intermediate vector $h_0^d$, the 0th hidden vector of the decoder. Then, the intermediate vector is transformed by a linear map $W^Q$ into a query vector $q_0 = h_0^d W^Q$. Meanwhile, the hidden vectors output by the encoder are transformed by another linear map $W^K$ into key vectors $k_0 = h_0 W^K, k_1 = h_1 W^K, \dots$. The linear maps are useful for providing the model with enough freedom to find the best way to represent the data.
Now, the query and keys are compared by taking dot products: $q_0 k_0^T, q_0 k_1^T, \dots$. Ideally, the model should have learned to compute the keys and values such that $q_0 k_0^T$ is large, $q_0 k_1^T$ is small, and the rest are very small. This can be interpreted as saying that the attention weight should be mostly applied to the 0th hidden vector of the encoder, a little to the 1st, and essentially none to the rest.
In order to make a properly weighted sum, we need to transform this list of dot products into a probability distribution over $0, 1, \dots$. This can be accomplished by the softmax function, thus giving us the attention weights: $(w_{00}, w_{01}, \dots) = \mathrm{softmax}(q_0 k_0^T, q_0 k_1^T, \dots)$. This is then used to compute the context vector: $c_0 = w_{00} v_0 + w_{01} v_1 + \cdots$
where $v_0 = h_0 W^V, v_1 = h_1 W^V, \dots$ are the value vectors, linearly transformed by another matrix to provide the model with freedom to find the best way to represent values. Without the matrices $W^Q, W^K, W^V$, the model would be forced to use the same hidden vector for both key and value, which might not be appropriate, as these two tasks are not the same.
This is the dot-attention mechanism. The particular version described in this section is "decoder cross-attention", as the output context vector is used by the decoder, and the input keys and values come from the encoder, but the query comes from the decoder, thus "cross-attention".
More succinctly, we can write it as $c_0 = \mathrm{Attention}(h_0^d W^Q, H W^K, H W^V) = \mathrm{softmax}((h_0^d W^Q)\,(H W^K)^T)(H W^V)$, where $H$ is the matrix whose rows are $h_0, h_1, \dots$. Note that the querying vector, $h_0^d$, is not necessarily the same as the key-value vector $h_0$. In fact, it is theoretically possible for query, key, and value vectors to all be different, though that is rarely done in practice.
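This computation can be sketched directly in NumPy. The sketch below follows the decoder cross-attention formulas in this section; the dimensions and the (random, untrained) projection matrices $W^Q, W^K, W^V$ are illustrative assumptions, since in a real model they are learned.

```python
import numpy as np

def cross_attention(h_d, H, W_Q, W_K, W_V):
    # h_d: one decoder hidden vector; H: encoder hidden vectors as rows.
    q = h_d @ W_Q                       # query from the decoder
    K = H @ W_K                         # keys from the encoder
    V = H @ W_V                         # values from the encoder
    scores = q @ K.T                    # dot-product similarities
    w = np.exp(scores - scores.max())   # softmax -> attention weights
    w = w / w.sum()
    return w @ V                        # context vector c_0

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(3, d))             # 3 encoder hidden vectors
h_d = rng.normal(size=d)                # decoder hidden vector h_0^d
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
c0 = cross_attention(h_d, H, W_Q, W_K, W_V)
```

The context vector `c0` is a convex combination of the value vectors, with weights determined by how closely the query matches each key.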
In 2019, Facebook announced its use in symbolic integration and resolution of differential equations. The company claimed that it could solve complex equations more rapidly and with greater accuracy than commercial solutions such as Mathematica, MATLAB and Maple. First, the equation is parsed into a tree structure to avoid notational idiosyncrasies. An LSTM neural network then applies its standard pattern recognition facilities to process the tree.[14][15]
In 2020, Google released Meena, a 2.6 billion parameter seq2seq-based chatbot trained on a 341 GB data set. Google claimed that the chatbot has 1.7 times greater model capacity than OpenAI's GPT-2.[4]
In 2022, Amazon introduced AlexaTM 20B, a moderate-sized (20 billion parameter) seq2seq language model. It uses an encoder-decoder to accomplish few-shot learning. The encoder outputs a representation of the input that the decoder uses as input to perform a specific task, such as translating the input into another language. The model outperforms the much larger GPT-3 in language translation and summarization. Training mixes denoising (appropriately inserting missing text in strings) and causal language modeling (meaningfully extending an input text). It allows adding features across different languages without massive training workflows. AlexaTM 20B achieved state-of-the-art performance in few-shot-learning tasks across all Flores-101 language pairs, outperforming GPT-3 on several tasks.[16]
|
https://en.wikipedia.org/wiki/Seq2seq
|
Perceiver is a variant of the Transformer architecture, adapted for processing arbitrary forms of data, such as images, sounds and video, and spatial data. Unlike previous notable Transformer systems such as BERT and GPT-3, which were designed for text processing, the Perceiver is designed as a general architecture that can learn from large amounts of heterogeneous data. It accomplishes this with an asymmetric attention mechanism to distill inputs into a latent bottleneck.
Perceiver matches or outperforms specialized models on classification tasks.[1]
Perceiver was introduced in June 2021 by DeepMind.[1] It was followed by Perceiver IO in August 2021.[2]
Perceiver is designed without modality-specific elements. For example, it does not have elements specialized to handle images, text, or audio. Further, it can handle multiple correlated input streams of heterogeneous types. It uses a small set of latent units that forms an attention bottleneck through which the inputs must pass. One benefit is to eliminate the quadratic scaling problem found in early transformers. Earlier work used custom feature extractors for each modality.[1]
It associates position and modality-specific features with every input element (e.g. every pixel, or audio sample). These features can be learned or constructed using high-fidelity Fourier features.[1]
Perceiver uses cross-attention to produce linear complexity layers and to detach network depth from input size. This decoupling allows deeper architectures.[1]
A cross-attention module maps a (larger) byte array (e.g., a pixel array) and a (smaller) latent array to another latent array, reducing dimensionality. A transformer tower maps one latent array to another latent array, which is used to query the input again. The two components alternate. Both components use query-key-value (QKV) attention. QKV attention applies query, key, and value networks, which are typically multilayer perceptrons, to each element of an input array, producing three arrays that preserve the index dimensionality (or sequence length) of their inputs.
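The cross-attention bottleneck can be sketched in NumPy: a small latent array supplies the queries, a large byte array supplies the keys and values, so the cost is linear in the input size. The sizes (N=100 inputs, M=8 latents, d=16) and the random projection matrices are illustrative assumptions; in the real model the projections are learned networks.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, M, d = 100, 8, 16
byte_array = rng.normal(size=(N, d))        # e.g. flattened pixel features
latents = rng.normal(size=(M, d))           # small latent array

Q = latents @ rng.normal(size=(d, d))       # queries come from the latents
K = byte_array @ rng.normal(size=(d, d))    # keys come from the inputs
V = byte_array @ rng.normal(size=(d, d))    # values come from the inputs

# (M, N) attention matrix instead of (N, N): linear in input size N.
new_latents = softmax(Q @ K.T / np.sqrt(d)) @ V
```

The result is again an M-by-d latent array, which a transformer tower can then process independently of how large the original input was.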
Perceiver IO can flexibly query the model's latent space to produce outputs of arbitrary size and semantics. It achieves results on tasks with structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-tasking. Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.[2]
Outputs are produced by attending to the latent array using a specific output query associated with that particular output. For example, to predict optical flow on one pixel, a query would attend using the pixel's xy coordinates plus an optical flow task embedding to produce a single flow vector. It is a variation on the encoder/decoder architecture used in other designs.[2]
Perceiver's performance is comparable to ResNet-50 and ViT on ImageNet without 2D convolutions. It attends to 50,000 pixels. It is competitive in all modalities in AudioSet.[1]
|
https://en.wikipedia.org/wiki/Perceiver
|
A vision transformer (ViT) is a transformer designed for computer vision.[1] A ViT decomposes an input image into a series of patches (rather than text into tokens), serializes each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings.
ViTs were designed as alternatives to convolutional neural networks (CNNs) in computer vision applications. They have different inductive biases, training stability, and data efficiency.[2] Compared to CNNs, ViTs are less data efficient, but have higher capacity. Some of the largest modern computer vision models are ViTs, such as one with 22B parameters.[3][4]
Subsequent to its publication, many variants were proposed, with hybrid architectures combining features of both ViTs and CNNs. ViTs have found application in image recognition, image segmentation, weather prediction, and autonomous driving.[5][6]
Transformers were introduced in Attention Is All You Need (2017),[7] and have found widespread use in natural language processing. A 2019 paper[8] applied ideas from the Transformer to computer vision. Specifically, they started with a ResNet, a standard convolutional neural network used for computer vision, and replaced all convolutional kernels with the self-attention mechanism found in a Transformer. It resulted in superior performance. However, it is not a Vision Transformer.
In 2020, an encoder-only Transformer was adapted for computer vision, yielding the ViT, which reached state of the art in image classification, overcoming the previous dominance of CNNs.[1] The masked autoencoder (2022) extended ViT to work with unsupervised training. The vision transformer and the masked autoencoder, in turn, stimulated new developments in convolutional neural networks.[9][10]
Subsequently, there was cross-fertilization between the previous CNN approach and the ViT approach.
In 2021, some important variants of the Vision Transformer were proposed. These variants are mainly intended to be more efficient, more accurate, or better suited to a specific domain. Two studies[11][12] improved efficiency and robustness of ViT by adding a CNN as a preprocessor. The Swin Transformer[13] achieved state-of-the-art results on some object detection datasets such as COCO, by using convolution-like sliding windows of attention, and the pyramid process of classical computer vision.
The basic architecture, used by the original 2020 paper,[1] is as follows. In summary, it is a BERT-like encoder-only Transformer.
The input image is of type $\mathbb{R}^{H\times W\times C}$, where $H, W, C$ are height, width, and channel (RGB). It is then split into square-shaped patches of type $\mathbb{R}^{P\times P\times C}$.
For each patch, the patch is pushed through a linear operator, to obtain a vector ("patch embedding"). The position of the patch is also transformed into a vector by "position encoding". The two vectors are added, then pushed through several Transformer encoders.
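The patchify-and-embed step can be sketched in NumPy. The image size (8x8x3), patch size (P=4), embedding dimension (16), and random projection/position vectors are illustrative assumptions; in a trained ViT the projection matrix and position encodings are learned.

```python
import numpy as np

def patchify(img, P):
    # Split an (H, W, C) image into non-overlapping P x P patches,
    # flattening each patch into one vector of length P*P*C.
    H, W, C = img.shape
    patches = []
    for i in range(0, H, P):
        for j in range(0, W, P):
            patches.append(img[i:i + P, j:j + P, :].reshape(-1))
    return np.stack(patches)                 # (num_patches, P*P*C)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8, 3))             # toy "image"
P, d = 4, 16
patches = patchify(img, P)                   # 4 patches, each of length 48
W_embed = rng.normal(size=(P * P * 3, d))    # single linear projection
pos = rng.normal(size=(patches.shape[0], d)) # position encodings
tokens = patches @ W_embed + pos             # embeddings fed to the encoder
```

Each row of `tokens` then plays exactly the role a token embedding plays in a text Transformer.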
The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in natural language processing, as representation vectors flow through a transformer, they incorporate more and more semantic relations between words, from syntax to semantics.
The above architecture turns an image into a sequence of vector representations. To use these for downstream applications, an additional head needs to be trained to interpret them.
For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network.[1]
The original ViT was an encoder-only Transformer trained with supervision to predict the image label from the patches of the image. As in the case of BERT, it uses a special token <CLS> on the input side, and the corresponding output vector is used as the only input to the final output MLP head. The special token is an architectural hack that allows the model to compress all information relevant for predicting the image label into one vector.
Transformers found their initial applications in natural language processing tasks, as demonstrated by language models such as BERT and GPT-3. By contrast, the typical image processing system uses a convolutional neural network (CNN). Well-known projects include Xception, ResNet, EfficientNet,[14] DenseNet,[15] and Inception.[16]
Transformers measure the relationships between pairs of input tokens (words in the case of text strings), termed attention. The cost is quadratic in the number of tokens. For images, the basic unit of analysis is the pixel. However, computing relationships for every pixel pair in a typical image is prohibitive in terms of memory and computation. Instead, ViT computes relationships among pixels in various small sections of the image (e.g., 16x16 pixels), at a drastically reduced cost. The sections (with positional embeddings) are placed in a sequence. The embeddings are learnable vectors. Each section is arranged into a linear sequence and multiplied by the embedding matrix. The result, together with the position embedding, is fed to the transformer.[16]
After the ViT processes an image, it produces some embedding vectors. These must be converted to a single class probability prediction by some kind of network. In the original ViT and Masked Autoencoder, they used a dummy [CLS] token, in emulation of the BERT language model. The output at [CLS] is the classification token, which is then processed by a LayerNorm-feedforward-softmax module into a probability distribution.
Global average pooling (GAP) does not use the dummy token, but simply takes the average of all output tokens as the classification token. It was mentioned in the original ViT as being equally good.[1]
Multihead attention pooling (MAP) applies a multiheaded attention block to pooling. Specifically, it takes as input a list of vectors $x_1, x_2, \dots, x_n$, which might be thought of as the output vectors of a layer of a ViT. The output from MAP is $\mathrm{MultiheadedAttention}(Q, V, V)$, where $Q$ is a trainable query, and $V$ is the matrix with rows $x_1, x_2, \dots, x_n$.[17] This was first proposed in the Set Transformer architecture.[18]
Later papers demonstrated that GAP and MAP both perform better than BERT-like pooling.[17][19] A variant of MAP was proposed as class attention, which applies MAP, then feedforward, then MAP again.[20]
Re-attention was proposed to allow training deep ViTs. It changes the multiheaded attention module.[21]
The Masked Autoencoder[22] took inspiration from denoising autoencoders and context encoders.[23] It has two ViTs put end-to-end. The first one ("encoder") takes in image patches with positional encoding, and outputs vectors representing each patch. The second one (called "decoder", even though it is still an encoder-only Transformer) takes in vectors with positional encoding and outputs image patches again. During training, both the encoder and the decoder ViTs are used. During inference, only the encoder ViT is used.
During training, each image is cut into patches, with their positional embeddings added. Of these, only 25% of the patches are selected. The encoder ViT processes the selected patches. No mask tokens are used. Then, mask tokens are added back in, and positional embeddings added again. These are processed by the decoder ViT, which outputs a reconstruction of the full image. The loss is the total mean-squared loss in pixel space over all masked patches (reconstruction loss is not computed for non-masked patches).
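The masking bookkeeping can be sketched as follows. The sizes (16 patches, embedding dimension 8) are toy assumptions, and the zero mask token stands in for what is a learned vector in the real model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 8
patches = rng.normal(size=(n, d))              # patch embeddings (with positions)

keep = rng.permutation(n)[: n // 4]            # select 25% of the patches
visible = patches[np.sort(keep)]               # encoder input: no mask tokens

mask_token = np.zeros(d)                       # learned vector in practice
decoder_in = np.tile(mask_token, (n, 1))       # start from all mask tokens
decoder_in[np.sort(keep)] = visible            # re-insert the encoded patches

masked = np.setdiff1d(np.arange(n), keep)      # loss is computed only here
```

The encoder therefore only ever sees the 25% visible patches, which is what makes MAE pretraining cheap; the decoder sees the full-length sequence with mask tokens filling the gaps.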
A similar architecture was BERT ViT (BEiT), published concurrently.[24]
Like the Masked Autoencoder, the DINO (self-distillation with no labels) method is a way to train a ViT by self-supervision.[25] DINO is a form of teacher-student self-distillation. In DINO, the student is the model itself, and the teacher is an exponential average of the student's past states. The method is similar to previous works like momentum contrast[26] and bootstrap your own latent (BYOL).[27]
The loss function used in DINO is the cross-entropy loss between the output of the teacher network ($f_{\theta'_t}$) and the output of the student network ($f_{\theta_t}$). The teacher network is an exponentially decaying average of the student network's past parameters: $\theta'_t = \alpha\theta_t + \alpha(1-\alpha)\theta_{t-1} + \cdots$. The inputs to the networks are two different crops of the same image, represented as $T(x)$ and $T'(x)$, where $x$ is the original image. The loss function is written as $L(f_{\theta'_t}(T(x)), f_{\theta_t}(T'(x)))$. One issue is that the network can "collapse" by always outputting the same value $y$, regardless of the input. To prevent this collapse, DINO employs two strategies:
In January 2024, Meta AI Research released an updated version called DINOv2[28] with improvements in architecture, loss function, and optimization technique. It was trained on a larger and more diverse dataset. The features learned by DINOv2 were more transferable, meaning it had better performance in downstream tasks.
The Swin Transformer ("Shifted windows")[13] took inspiration from standard CNNs:
It is improved by Swin Transformer V2,[29] which modifies the ViT with a different attention mechanism ([13]: Figure 1):
The TimeSformer[30] was designed for video understanding tasks, and it applied a factorized self-attention, similar to the factorized convolution kernels found in the Inception CNN architecture.[31] Schematically, it divides a video into frames, and each frame into a square grid of patches (same as ViT). Let each patch coordinate be denoted by $x, y, t$, denoting horizontal, vertical, and time.
The TimeSformer also considered other attention layer designs, such as the "height attention layer", where the requirement is $x' = x, t' = t$. However, they found empirically that the best design interleaves one space attention layer and one time attention layer.
In ViT-VQGAN,[32] there are two ViT encoders and a discriminator. One encodes 8x8 patches of an image into a list of vectors, one for each patch. The vectors can only come from a discrete set, the "codebook", as in vector quantization. Another encodes the quantized vectors back to image patches. The training objective attempts to make the reconstructed image (the output image) faithful to the input image. The discriminator (usually a convolutional network, but other networks are allowed) attempts to decide if an image is an original real image or an image reconstructed by the ViT.
The idea is essentially the same as the vector quantized variational autoencoder (VQVAE) plus a generative adversarial network (GAN).
After such a ViT-VQGAN is trained, it can be used to code an arbitrary image into a list of symbols, and code an arbitrary list of symbols into an image. The list of symbols can be used to train a standard autoregressive transformer (like GPT) for autoregressively generating an image. Further, one can take a list of caption-image pairs, convert the images into strings of symbols, and train a standard GPT-style transformer. Then at test time, one can just give an image caption, and have it autoregressively generate the image. This is the structure of Google Parti.[33]
Other examples include the visual transformer,[34] CoAtNet,[35] CvT,[36] the data-efficient ViT (DeiT),[37] etc.
In the Transformer in Transformer architecture, each layer applies a vision Transformer layer on each image patch embedding, adds back the resulting tokens to the embedding, then applies another vision Transformer layer.[38]
Typically, ViT uses patch sizes larger than standard CNN kernels (3x3 to 7x7). ViT is more sensitive to the choice of optimizer, hyperparameters, and network depth. Preprocessing with a layer of smaller-size, overlapping (stride < size) convolutional filters helps with performance and stability.[12]
This different behavior seems to derive from the different inductive biases they possess.
A CNN applies the same set of filters across the entire image. This allows it to be more data efficient and less sensitive to local perturbations.[2] ViTs apply self-attention, allowing them to easily capture long-range relationships between patches. They also require more data to train, but they can ingest more training data than a CNN, which might not improve after training on a large enough dataset. ViTs also appear more robust to input image distortions such as adversarial patches or permutations.[39]
ViTs have been used in many computer vision tasks with excellent results, and in some cases even state of the art: image classification, object detection, video deepfake detection,[40] image segmentation,[41] anomaly detection, image synthesis, cluster analysis, and autonomous driving.[5][6]
ViTs have been used for image generation as backbones for GANs[42] and for diffusion models (diffusion transformer, or DiT).[43]
DINO[25] has been demonstrated to learn useful representations for clustering images and exploring morphological profiles on biological datasets, such as images generated with the Cell Painting assay.[44]
In 2024, a 113 billion-parameter ViT model was proposed (the largest ViT to date) for weather and climate prediction, and trained on the Frontier supercomputer with a throughput of 1.6 exaFLOPs.[45]
|
https://en.wikipedia.org/wiki/Vision_transformer
|
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering.[1] These models acquire predictive power regarding syntax, semantics, and ontologies[2] inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.[3]
Before 2017, there were a few language models that were large as compared to capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001, trained on 0.3 billion words, achieved state-of-the-art perplexity at the time.[4] In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"[5]), upon which they trained statistical language models.[6][7] In 2009, in most language processing tasks, statistical language models dominated over symbolic language models because they can usefully ingest large datasets.[8]
After neural networks became dominant in image processing around 2012,[9] they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. Because it preceded the existence of transformers, it was done by seq2seq deep LSTM networks.
At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology,[10] and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014.[11] The following year, in 2018, BERT was introduced and quickly became "ubiquitous".[12] Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting.[13]
Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention, because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use.[14] GPT-3 in 2020 went a step further and as of 2024 is available only via API, with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz.[15] The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities.[16] OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work.[13] In 2024, OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer.
Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters.[17]
Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower cost.[18]
Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs).[19]
As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).[20][21][22]
As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for a masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT.[23]
For example, the BPE tokenizer used by GPT-3 (Legacy) would split the text tokenizer: texts -> series of numerical "tokens" into such a series of integer tokens.
Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset.[24][25]
As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257).[26] After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.[27]
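The merge loop described above can be sketched in a few lines of Python (an illustrative toy, not GPT-3's actual tokenizer; real implementations learn merges from pair frequencies over a whole corpus and operate on bytes rather than characters):

```python
from collections import Counter

def train_bpe(text, num_merges):
    """Learn merges by repeatedly fusing the most frequent adjacent pair."""
    tokens = list(text)            # initial uni-grams: individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        merged, i = [], 0
        while i < len(tokens):     # replace every occurrence of the pair
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return merges, tokens

merges, tokens = train_bpe("low lower lowest", 3)
# First merges: ('l', 'o') then ('lo', 'w'), so 'low' becomes a single token.
```

On this tiny corpus the shared stem "low" is merged into one token after two rounds, which is exactly how frequent words and word pieces end up as single vocabulary entries.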
A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English.[25]
Greedy tokenization also causes subtle problems with text completion.[28]
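One such problem can be shown with a toy greedy longest-match tokenizer (the vocabulary here is hypothetical, for illustration only): a prompt that ends in the middle of what would normally be a single token forces the model to continue from a token sequence it rarely saw during training.

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):      # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

# Hypothetical vocabulary, for illustration only.
vocab = {"hello", "hel", "lo", "h", "e", "l", "o"}
whole = greedy_tokenize("hello", vocab)   # ['hello'] -- one familiar token
prefix = greedy_tokenize("hel", vocab)    # ['hel']   -- a rarely-seen prefix token
# A model completing the prompt "hel" must continue from the token 'hel',
# although training text almost always contained the single token 'hello'.
```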
In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data.[29]Cleaned datasets can increase training efficiency and lead to improved downstream performance.[30][31]A trained LLM can be used to clean datasets for training a further LLM.[32]
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).[33]
Training of the largest language models might need more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM.[34]
Reinforcement learning from human feedback (RLHF) through algorithms such as proximal policy optimization is used to further fine-tune a model based on a dataset of human preferences.[35]
Using "self-instruct" approaches, LLMs have been able tobootstrapcorrect responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented inHamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus.[36]
The largest LLMs may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters.[37][38][39]
Most results previously achievable only by (costly) fine-tuning can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window).[40]
In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model had twelve attention heads and a context window of only 1k tokens.[42] In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For training with gradient descent, a batch size of 512 was used.[27]
The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window sized up to 1 million tokens (a context window of 10 million was also "successfully tested").[43] Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens.[44] Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens.[45]
The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If a conversation, for example with ChatGPT, is longer than the context window, only the parts inside it are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the parts of the conversation that fall outside it.
The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations.
A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset.[46]It can be either
Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus.[47]During training,regularizationloss is also used to stabilize training. However regularization loss is usually not used duringtestingand evaluation.
Substantial infrastructure is necessary for training the largest models.[48][49][50]
The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve.GPT-1of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in thelist of large language models.
As technology advanced, large sums have been invested in increasingly large models. For example, training of GPT-2 (a 1.5-billion-parameter model) in 2019 cost $50,000, while training of PaLM (a 540-billion-parameter model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million.[51]
For transformer-based LLMs, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.[52]
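These rules of thumb make rough cost estimates straightforward (a sketch; the 6 and 1–2 FLOPs-per-parameter figures are the approximations cited above, not exact costs, and the model size and token count below are illustrative):

```python
def train_flops(n_params, n_tokens):
    """Approximate training cost: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def infer_flops(n_params, n_tokens, flops_per_param=2):
    """Approximate inference cost: ~1-2 FLOPs per parameter per token."""
    return flops_per_param * n_params * n_tokens

# Illustrative: a 70-billion-parameter model trained on 1.4 trillion tokens
# needs on the order of 6e23 FLOPs, versus ~1.4e11 FLOPs to infer one token.
c_train = train_flops(70e9, 1.4e12)
c_infer_one = infer_flops(70e9, 1)
```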
There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus.[dubious–discuss] In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is "What is the time now? It is ", where a separate program interpreter would need to execute code to get the system time on the computer, so that the LLM can include it in its reply.[53][54] This basic strategy can be made more sophisticated with multiple attempts at generated programs and other sampling strategies.[55]
Generally, in order to get an LLM to use tools, one must fine-tune it for tool use. If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call APIs correctly.[56][57]
Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.[58]
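A minimal sketch of the retrieval step (using a toy bag-of-words embedding and cosine similarity in place of a neural encoder and a vector database; the documents and query are made up for illustration):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (a stand-in for a neural encoder)."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query (the retrieval step)."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "The Stanley Cup is awarded to the NHL playoff champion.",
    "Byte-pair encoding merges frequent character pairs.",
]
context = retrieve("which team wins the stanley cup", docs)
# The LLM would then be prompted with both the query and `context`.
```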
An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but it can be transformed into one by integrating modules like profiling, memory, planning, and action.[59]
The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment.[60] The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment.[61]
In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives.[62]
The Reflexion method[63]constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes.[citation needed]
Monte Carlo tree search can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as a world model.[64]
For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent.[65] Alternatively, it can propose increasingly difficult tasks for curriculum learning.[66] Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.[66]
LLM-powered agents can keep a long-term memory of their previous contexts, and the memory can be retrieved in the same way as in retrieval-augmented generation. Multiple such agents can interact socially.[67]
Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.[68]
Post-training quantization[69] aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance.[70][71] The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be achieved by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights").[72] See the guide by Maarten Grootendorst[73] for a visual depiction.
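The simplest round-to-nearest scheme can be sketched as follows (per-tensor uniform quantization on a made-up weight list; real methods add per-layer codebooks and outlier handling, as noted above):

```python
def quantize(weights, bits=4):
    """Uniform round-to-nearest quantization of a list of float weights."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]   # small integers
    dequant = [lo + c * scale for c in codes]            # approximate floats
    return codes, dequant

w = [0.12, -0.5, 0.33, 0.9, -0.07]
codes, approx = quantize(w, bits=4)
# Each weight is now a 4-bit integer plus a shared (lo, scale) pair,
# and every reconstructed value is within half a quantization step of the original.
```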
While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned.[74]
Multimodality means "having several modalities", and a"modality"refers to a type of input or output, such as video, image, audio, text,proprioception, etc.[75]There have been many AI models trained specifically to ingest one modality and output another modality, such asAlexNetfor image to label,[76]visual question answeringfor image-text to text,[77]andspeech recognitionfor speech to text.
A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoderE{\displaystyle E}. Make a small multilayered perceptronf{\displaystyle f}, so that for any imagey{\displaystyle y}, the post-processed vectorf(E(y)){\displaystyle f(E(y))}has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.[78]
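The construction can be sketched as follows (toy dimensions and randomly initialized weights, purely for illustration; in a real system the perceptron f is trained on an image-text dataset and E is a large pretrained encoder):

```python
import random

random.seed(0)
IMG_DIM, TOK_DIM, HIDDEN = 8, 4, 6   # toy sizes; real models use thousands

def linear(x, w, b):
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(w, b)]

def f(x, w1, b1, w2, b2):
    """The small multilayer perceptron f: encoder output -> 'image token'."""
    h = [max(0.0, v) for v in linear(x, w1, b1)]   # ReLU hidden layer
    return linear(h, w2, b2)

# Randomly initialized toy weights (in practice, learned by fine-tuning).
w1 = [[random.gauss(0, 0.1) for _ in range(IMG_DIM)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(TOK_DIM)]
b2 = [0.0] * TOK_DIM

image_embedding = [random.gauss(0, 1) for _ in range(IMG_DIM)]  # stands in for E(y)
image_token = f(image_embedding, w1, b1, w2, b2)
# `image_token` has the same dimensionality as a text-token embedding,
# so it can be interleaved with text tokens in the LLM's input sequence.
```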
Flamingo demonstrated the effectiveness of the tokenization method, fine-tuning a pair of a pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.[79] The Google PaLM model was fine-tuned into a multimodal model, PaLM-E, using the tokenization method, and applied to robotic control.[80] LLaMA models have also been made multimodal using the tokenization method, to allow image inputs[81] and video inputs.[82]
GPT-4 can use both text and image as inputs[83] (although the vision component was not released to the public until GPT-4V[84]); Google DeepMind's Gemini is also multimodal.[85] Mistral introduced its own multimodal Pixtral 12B model in September 2024.[86]
In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes.[87] OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in December 2024. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs. For example, on International Mathematics Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%.[87][88] In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private.[89] These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step-by-step. However, they have shown superior capabilities in domains requiring structured logical thinking, such as mathematics, scientific research, and computer programming.[88]
Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods.[90]
The performance of an LLM after pretraining largely depends on the:
"Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for LLMs autoregressively trained for one epoch, with a log-log learning rate schedule, states that:[91]{C=C0NDL=ANα+BDβ+L0{\displaystyle {\begin{cases}C=C_{0}ND\\[6pt]L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}\end{cases}}}where the variables are C, the cost of training the model in FLOPs; N, the number of parameters in the model; D, the number of tokens in the training set; and L, the average negative log-likelihood per token achieved by the trained model,
and the statistical hyper-parameters are C0 = 6, α ≈ 0.34, β ≈ 0.28, A ≈ 406.4, B ≈ 410.7, and L0 ≈ 1.69.
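Under the fitted constants commonly reported for this law (C0 = 6, α ≈ 0.34, β ≈ 0.28, A ≈ 406.4, B ≈ 410.7, L0 ≈ 1.69, treated here as given), a simple grid search recovers the compute-optimal split of a fixed FLOPs budget between parameters and training tokens:

```python
def chinchilla_loss(N, D, A=406.4, B=410.7, L0=1.69, alpha=0.34, beta=0.28):
    """Predicted loss for N parameters trained on D tokens (fitted constants)."""
    return A / N**alpha + B / D**beta + L0

def optimal_split(C, C0=6):
    """Grid-search the N minimizing loss at fixed compute C = C0 * N * D."""
    best = None
    for i in range(1, 2000):
        N = 10 ** (6 + i * 0.005)        # sweep N upward from 1e6
        D = C / (C0 * N)
        L = chinchilla_loss(N, D)
        if best is None or L < best[0]:
            best = (L, N, D)
    return best

loss, N, D = optimal_split(1e23)
# For a 1e23-FLOP budget, the optimum under these constants lands around
# 1.5e10 parameters trained on roughly 1.1e12 tokens.
```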
Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)"[92] in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities".[40][93] They arise from the complex interaction of the model's components and are not explicitly programmed or designed.[94]
Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize between accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory.[95]
One of the emergent abilities is in-context learning from example demonstrations.[96] In-context learning is involved in tasks such as:
Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.[102]
Letx{\displaystyle x}be the parameter count, andy{\displaystyle y}be the performance of the model.
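A toy version of the argument (an illustrative sketch, not the authors' exact construction): suppose per-token accuracy improves smoothly with x, but the reported metric requires an entire multi-token answer to be exactly right.

```python
import math

def token_accuracy(x):
    """Per-token accuracy rising smoothly with parameter count x (toy choice)."""
    return 1 - 0.5 * math.exp(-x / 1e9)

def exact_match(x, k=10):
    """Metric that demands all k answer tokens be correct: p(x) ** k."""
    return token_accuracy(x) ** k

scales = (1e8, 1e9, 1e10)
smooth = [token_accuracy(x) for x in scales]   # ~0.55, ~0.82, ~1.00
sharp = [exact_match(x) for x in scales]       # ~0.002, ~0.13, ~1.00
# The underlying skill improves gradually, yet the all-or-nothing metric
# jumps from near-zero to near-perfect, looking like an 'emergent' ability.
```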
Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind.[103]
Various techniques have been developed to enhance the transparency and interpretability of LLMs. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. In recent years, sparse coding models such as sparse autoencoders, transcoders, and crosscoders have emerged as promising tools for identifying interpretable features.
Transcoders, which are more interpretable than transformers, have been used to develop "replacement models". In one such study involving the mechanistic interpretation of an LLM writing a rhyming poem, it was shown that although LLMs are believed to simply predict the next token, they can, in fact, plan ahead.[104]
A related concept isAI explainability, which focuses on understanding how an AI model arrives at a given result. Techniques such as partial dependency plots, SHAP (SHapley Additive exPlanations), and feature importance assessments allow researchers to visualize and understand the contributions of various input features to the model's predictions. These methods help ensure that AI models make decisions based on relevant and fair criteria, enhancing trust and accountability.
By integrating these techniques, researchers and practitioners can gain deeper insights into the operations of LLMs, fostering trust and facilitating the responsible deployment of these powerful models.
In another example, the authors trained small transformers onmodular arithmetic addition. The resulting models were reverse-engineered, and it turned out they useddiscrete Fourier transform.[105]
NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense".[106] Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?"[107][108] Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation.[109] Some researchers characterize LLMs as "alien intelligence".[110][111] For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."[112][113]
In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing",[111] a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability.[106] For example, GPT-4 has natural deficits in planning and in real-time learning.[108] Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination".[114] Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input.[115] Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate".[106]
The matter of LLMs exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language.[106] These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented the Neural Theory of Language (NTL)[116] as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language, and in turn what the computational properties of such neural systems are that can be applied to model thought and language in a computer system. After a framework for modeling language in a computer system was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.[117][118]
The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token.
log(Perplexity)=−1N∑i=1Nlog(Pr(tokeni∣context for tokeni)){\displaystyle \log({\text{Perplexity}})=-{\frac {1}{N}}\sum _{i=1}^{N}\log(\Pr({\text{token}}_{i}\mid {\text{context for token}}_{i}))}
Here,N{\displaystyle N}is the number of tokens in the text corpus, and "context for tokeni{\displaystyle i}" depends on the specific type of LLM. If the LLM is autoregressive, then "context for tokeni{\displaystyle i}" is the segment of text appearing before tokeni{\displaystyle i}. If the LLM is masked, then "context for tokeni{\displaystyle i}" is the segment of text surrounding tokeni{\displaystyle i}.
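The definition can be computed directly from the per-token probabilities a model assigns (a sketch with made-up probabilities; a real evaluation would take them from the model's softmax outputs):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

# Hypothetical probabilities a model assigned to each actual next token.
probs = [0.25, 0.5, 0.125, 0.25]
ppl = perplexity(probs)   # 4.0: on average, as uncertain as a fair 4-way choice
```

A model that assigned probability 1 to every observed token would reach the minimum perplexity of 1.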
Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set.[47] This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.[1]
In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon.[119] This relationship is mathematically expressed asEntropy=log2(Perplexity){\displaystyle {\text{Entropy}}=\log _{2}({\text{Perplexity}})}.
Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.
Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different Large Language Models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.
In the evaluation and comparison of language models,cross-entropyis generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.
Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method.[120][121]
A question answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."[122]). Otherwise, the task is considered "closed book", and the model must draw solely on its training.[123] Examples include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam).[119][123]
LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs),[124]Stereo Set,[125]and Parity Benchmark.[126]
Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as PolitiFact and Snopes. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers.[127]
An earlier standard was to evaluate a fine-tuned model on a held-out portion of the evaluation dataset. It has become more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of n in n-shot prompting).
Typical datasets consist of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No").[122]Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.[123]
Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____".[1]
Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality.[128]
LLMs' rapid improvement regularly obsoletes benchmarks, with the models exceeding the performance of human annotators.[129]In addition, "shortcut learning" allows AIs to "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question.[106]
Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true.[130]
Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions:
We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...
BERT selects b) as the most likely completion, though the correct answer is d).[131]
In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[132] Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose 300 million jobs globally to automation.[133][134] Brinkmann et al. (2023)[135] also argue that LLMs are transforming processes of cultural evolution by shaping processes of variation, transmission, and selection.
Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates[136]or up to about 7%.[137]
A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundreds of repetitions, it would start outputting excerpts from its training data.[138]
Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse.[139]For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.[140]
The potential presence of "sleeper agents" within LLMs is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior and takes insecure actions.[141]
LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study[142] proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study[143] showing evidence that the so-called Pravda network, a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique "LLM grooming," and pointed to it as a new tool of weaponizing AI to spread disinformation and harmful content.[143][144] Similarly, Yongge Wang[145] illustrated in 2024 how a potential criminal could bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. External filters, circuit breakers and overrides have been proposed as solutions.[citation needed]
While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups.[146]Since English data is overrepresented in current large language models' training data, it may also downplay non-English views.[147]
AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[148][149]
Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms.[146]For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.[150]
Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias: the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.[151][152]
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[153]
The energy demands of LLMs have grown along with their size and capabilities. Data centers that enable LLM training require substantial amounts of electricity. Much of that electricity is generated by non-renewable resources that create greenhouse gases and contribute to climate change.[154] Nuclear power and geothermal energy are two options tech companies are exploring to meet the sizable energy demands of LLM training.[155] The significant expense of investing in geothermal solutions has led to major shale producers like Chevron and Exxon Mobil advocating for tech companies to use electricity produced via natural gas to fuel their large energy demands.[156]
|
https://en.wikipedia.org/wiki/Large_language_model
|
Bidirectional encoder representations from transformers (BERT) is a language model introduced in October 2018 by researchers at Google.[1][2] It learns to represent text as a sequence of vectors using self-supervised learning. It uses the encoder-only transformer architecture. BERT dramatically improved the state of the art for large language models. As of 2020[update], BERT is a ubiquitous baseline in natural language processing (NLP) experiments.[3]
BERT is trained by masked token prediction and next sentence prediction. As a result of this training process, BERT learns contextual, latent representations of tokens in their context, similar to ELMo and GPT-2.[4] It found applications for many natural language processing tasks, such as coreference resolution and polysemy resolution.[5] It is an evolutionary step over ELMo, and spawned the study of "BERTology", which attempts to interpret what is learned by BERT.[3]
BERT was originally implemented in the English language at two model sizes, BERTBASE (110 million parameters) and BERTLARGE (340 million parameters). Both were trained on the Toronto BookCorpus[6] (800M words) and English Wikipedia (2,500M words).[1]: 5 The weights were released on GitHub.[7] On March 11, 2020, 24 smaller models were released, the smallest being BERTTINY with just 4 million parameters.[7]
BERT is an "encoder-only" transformer architecture. At a high level, BERT consists of 4 modules:
The task head is necessary for pre-training, but it is often unnecessary for so-called "downstream tasks", such as question answering or sentiment classification. Instead, one removes the task head, replaces it with a newly initialized module suited for the task, and fine-tunes the new module. The latent vector representation of the model is directly fed into this new module, allowing for sample-efficient transfer learning.[1][8]
This section describes the embedding used by BERTBASE. The other one, BERTLARGE, is similar, just larger.
The tokenizer of BERT is WordPiece, which is a sub-word strategy like byte pair encoding. Its vocabulary size is 30,000, and any token not appearing in its vocabulary is replaced by [UNK] ("unknown").
The first layer is the embedding layer, which contains three components: token type embeddings, position embeddings, and segment type embeddings.
The three embedding vectors are added together, representing the initial token representation as a function of these three pieces of information. After embedding, the vector representation is normalized using a LayerNorm operation, outputting a 768-dimensional vector for each input token. After this, the representation vectors are passed forward through 12 Transformer encoder blocks, and are decoded back to 30,000-dimensional vocabulary space using a basic affine transformation layer.
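The embedding stage above can be sketched as follows. The structure (sum of token, position, and segment embeddings, then LayerNorm) follows the description; the weights are random stand-ins, the vocabulary is a toy one rather than the real 30,000 tokens, and the learned LayerNorm gain and bias are omitted for brevity.

```python
import numpy as np

# Sketch of BERT's embedding layer as described above: token, position,
# and segment embeddings are summed, then LayerNorm is applied.
# Toy vocabulary size; real BERT-BASE uses 30,000 tokens and H = 768.
VOCAB, MAX_POS, SEGMENTS, HIDDEN = 1000, 512, 2, 768

rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(VOCAB, HIDDEN))
pos_emb = rng.normal(size=(MAX_POS, HIDDEN))
seg_emb = rng.normal(size=(SEGMENTS, HIDDEN))

def layer_norm(x, eps=1e-12):
    # Normalize each token vector to zero mean, unit variance
    # (omitting the learned gain and bias for brevity).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def embed(token_ids, segment_ids):
    positions = np.arange(len(token_ids))
    x = tok_emb[token_ids] + pos_emb[positions] + seg_emb[segment_ids]
    return layer_norm(x)  # one 768-dimensional vector per input token

out = embed([101, 42, 37, 102], [0, 0, 0, 0])
```

The output has one 768-dimensional row per input token, ready to be fed to the encoder blocks.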
The encoder stack of BERT has two free parameters: L, the number of layers, and H, the hidden size. There are always H/64 self-attention heads, and the feed-forward/filter size is always 4H. By varying these two numbers, one obtains an entire family of BERT models.[9]
For BERT
The notation for the encoder stack is written as L/H. For example, BERTBASE is written as 12L/768H, BERTLARGE as 24L/1024H, and BERTTINY as 2L/128H.
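The shape rule above can be made concrete with a few lines of arithmetic, using the L/H notation and the fixed ratios (H/64 heads, feed-forward size 4H) stated in the text.

```python
# The BERT family's shape rule: given L (layers) and H (hidden size),
# a model always has H/64 self-attention heads and feed-forward size 4H.
def bert_shape(L, H):
    return {"layers": L, "hidden": H, "heads": H // 64, "ffn": 4 * H}

base = bert_shape(12, 768)    # BERTBASE, 12L/768H
large = bert_shape(24, 1024)  # BERTLARGE, 24L/1024H
tiny = bert_shape(2, 128)     # BERTTINY, 2L/128H
```

So BERTBASE has 12 heads and a 3072-dimensional feed-forward layer, while BERTLARGE has 16 heads and 4096.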
BERT was pre-trained simultaneously on two tasks.[10]
In masked language modeling, 15% of tokens would be randomly selected for the masked-prediction task, and the training objective was to predict the masked token given its context. In more detail, the selected token is
The reason not all selected tokens are masked is to avoid the dataset shift problem. The dataset shift problem arises when the distribution of inputs seen during training differs significantly from the distribution encountered during inference. A trained BERT model might be applied to word representation (like Word2Vec), where it would be run over sentences not containing any [MASK] tokens. It was later found that more diverse training objectives are generally better.[11]
As an illustrative example, consider the sentence "my dog is cute". It would first be divided into tokens like "my₁ dog₂ is₃ cute₄". Then a random token in the sentence would be picked. Let it be the 4th one, "cute₄". Next, there would be three possibilities:
After processing the input text, the model's 4th output vector is passed to its decoder layer, which outputs a probability distribution over its 30,000-dimensional vocabulary space.
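The selection-and-corruption procedure above can be sketched as follows, using the standard 80/10/10 split for selected tokens (80% [MASK], 10% random token, 10% unchanged); the tokens and vocabulary are toy stand-ins.

```python
import random

# Sketch of BERT's masked-LM corruption: 15% of tokens are selected;
# of those, 80% become [MASK], 10% become a random vocabulary token,
# and 10% are left unchanged. Only selected positions enter the loss.
def corrupt(tokens, vocab, rng, select_p=0.15):
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < select_p:
            targets.append(tok)            # model must predict the original
            r = rng.random()
            if r < 0.8:
                corrupted.append("[MASK]")
            elif r < 0.9:
                corrupted.append(rng.choice(vocab))
            else:
                corrupted.append(tok)      # kept as-is, still predicted
        else:
            targets.append(None)           # ignored by the loss
            corrupted.append(tok)
    return corrupted, targets

rng = random.Random(0)
corrupted, targets = corrupt(["my", "dog", "is", "cute"], ["a", "b"], rng)
```

Keeping 10% of selected tokens unchanged is exactly the dataset-shift mitigation discussed above: the model still sees unmasked inputs during training.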
Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. The first span starts with a special token [CLS] (for "classify"). The two spans are separated by a special token [SEP] (for "separate"). After processing the two spans, the first output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext].
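The input packing for next sentence prediction can be sketched as below. The trailing [SEP] after the second span and the 0/1 segment ids follow the original implementation and are assumptions beyond the description above.

```python
# Sketch of next-sentence-prediction input packing: [CLS], the first
# span, [SEP], the second span, and a closing [SEP] (as in the original
# implementation), with segment ids 0 and 1 for the two spans.
def pack_pair(span_a, span_b):
    tokens = ["[CLS]"] + span_a + ["[SEP]"] + span_b + ["[SEP]"]
    segments = [0] * (len(span_a) + 2) + [1] * (len(span_b) + 1)
    return tokens, segments

tokens, segments = pack_pair(["my", "dog"], ["is", "cute"])
```

The output vector at position 0 (the [CLS] slot) is what the binary [IsNext]/[NotNext] classifier reads.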
BERT is meant as a general pretrained model for various applications in natural language processing. That is, after pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as natural language inference and text classification, and sequence-to-sequence-based language generation tasks such as question answering and conversational response generation.[12]
The original BERT paper published results demonstrating that a small amount of finetuning (for BERTLARGE, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks:[1]
In the original paper, all parameters of BERT are finetuned, and it is recommended that, for downstream applications that are text classifications, the output vector at the [CLS] input token be fed into a linear-softmax layer to produce the label outputs.[1]
The original code base defined the final linear layer as a "pooler layer", in analogy with global pooling in computer vision, even though it simply discards all output tokens except the one corresponding to [CLS].[15]
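The "pooling" step and the linear-softmax classification head described above can be sketched like this; the weights here are random stand-ins rather than trained parameters.

```python
import numpy as np

# Sketch of the text-classification head described above: keep only the
# output vector at the [CLS] position (position 0) and pass it through
# a linear-softmax layer. Weights are random stand-ins.
def classify(outputs, W, b):
    cls_vec = outputs[0]                  # "pooling" = take the [CLS] token
    logits = cls_vec @ W + b
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
outputs = rng.normal(size=(7, 768))       # 7 output tokens, 768-dim each
W, b = rng.normal(size=(768, 3)), np.zeros(3)   # 3-class head
probs = classify(outputs, W, b)
```

All six other output vectors are discarded, which is why calling this a "pooler" is only an analogy with pooling in computer vision.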
BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers.
Training BERTBASE on 4 cloud TPUs (16 TPU chips total) took 4 days, at an estimated cost of 500 USD.[7] Training BERTLARGE on 16 cloud TPUs (64 TPU chips total) took 4 days.[1]
Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models. Their performance on these natural language understanding tasks is not yet well understood.[3][16][17] Several research publications in 2018 and 2019 focused on investigating the relationship behind BERT's output as a result of carefully chosen input sequences,[18][19] analysis of internal vector representations through probing classifiers,[20][21] and the relationships represented by attention weights.[16][17]
The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained.[22] This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word "fine" can have two different meanings depending on the context (I feel fine today; She has fine blond hair). BERT considers the words surrounding the target word "fine" from the left and right side.
However, this comes at a cost: due to its encoder-only architecture, which lacks a decoder, BERT can't be prompted and can't generate text, while bidirectional models in general do not work effectively without the right side, thus being difficult to prompt. As an illustrative example, if one wishes to use BERT to continue a sentence fragment "Today, I went to", then naively one would mask out all the tokens as "Today, I went to [MASK] [MASK] [MASK] ... [MASK]", where the number of [MASK] tokens is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training BERT has never seen sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.[23]
BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins in pre-training contextual representations, including semi-supervised sequence learning,[24] generative pre-training, ELMo,[25] and ULMFit.[26] Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.[4]
On October 25, 2019, Google announced that they had started applying BERT models for English-language search queries within the US.[27] On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages.[28][29] In October 2020, almost every single English-based query was processed by a BERT model.[30]
The BERT models were influential and inspired many variants.
RoBERTa (2019)[31] was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the next-sentence prediction task, and using much larger mini-batch sizes.
DistilBERT (2019) distills BERTBASE to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores.[32][33] Similarly, TinyBERT (2019)[34] is a distilled model with just 28% of its parameters.
ALBERT (2019)[35] shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. The authors also replaced the next sentence prediction task with the sentence-order prediction (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.
ELECTRA (2020)[36] applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.
DeBERTa (2020)[37] is a significant architectural variant, with disentangled attention. Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding x_position and the token encoding x_token into a single input vector x_input = x_position + x_token, DeBERTa keeps them separate as a tuple (x_position, x_token). Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT:[note 1]
The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix.
Absolute position encoding is included in the final self-attention layer as additional input.
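The disentangled score computation can be sketched as below: the content (token) and position encodings stay separate, and the three score matrices are summed before the softmax. The projection matrices are random stand-ins, and head splitting and scaling are omitted for brevity.

```python
import numpy as np

# Sketch of disentangled attention scores: three matrices
# (content-to-content, content-to-position, position-to-content)
# are computed and summed before the softmax.
def disentangled_scores(x_tok, x_pos, Wq_c, Wk_c, Wq_p, Wk_p):
    qc, kc = x_tok @ Wq_c, x_tok @ Wk_c   # content queries / keys
    qp, kp = x_pos @ Wq_p, x_pos @ Wk_p   # position queries / keys
    a_cc = qc @ kc.T                      # content-to-content
    a_cp = qc @ kp.T                      # content-to-position
    a_pc = qp @ kc.T                      # position-to-content
    return a_cc + a_cp + a_pc

rng = np.random.default_rng(0)
n, d = 5, 16                              # 5 tokens, 16-dim encodings
Ws = [rng.normal(size=(d, d)) for _ in range(4)]
scores = disentangled_scores(rng.normal(size=(n, d)),
                             rng.normal(size=(n, d)), *Ws)
```

Contrast this with standard BERT, where position and token embeddings are summed once at the input and a single query-key product yields the attention scores.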
|
https://en.wikipedia.org/wiki/BERT_(language_model)
|
T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI, introduced in 2019.[1][2] Like the original Transformer model,[3] T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text.
T5 models are usually pretrained on a massive dataset of text and code, after which they can perform the text-based tasks that are similar to their pretrained tasks. They can also be finetuned to perform other tasks.
T5 models have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics.[4]
The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications.
The T5 models were pretrained on many tasks, all in the format of <input text> -> <output text>.
Some examples are:
The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text.
These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper[1]reported the following 5 models:
*The encoder and the decoder have the same shape. So for example, the T5-small has 6 layers in the encoder and 6 layers in the decoder.
In the above table,
Note that, unlike typical Transformers, the 3B and 11B models do not satisfy d_model = d_kv · n_head.[6]
Compared to the original Transformer, it uses a few minor modifications: layer normalization with no additive bias; placing the layer normalization outside the residual path; relative positional embedding.[7]
For all experiments, they used a WordPiece tokenizer with vocabulary size 32,000. The tokenizer is shared across both the input and output of each model. It was trained on a mixture of English, German, French, and Romanian data from the C4 dataset, at a ratio of 10:1:1:1.
Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X.[8]
Some models are trained from scratch while others are trained by starting with a previous trained model. By default, each model is trained from scratch, except otherwise noted.
The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply.
The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-number vectors, which can be used for downstream applications. For example, Google Imagen[26] uses T5-XXL as text encoder, and the encoded text vectors are used as conditioning on a diffusion model. As another example, the AuraFlow diffusion model[27] uses Pile-T5-XL.
|
https://en.wikipedia.org/wiki/T5_(language_model)
|
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state: a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition,[2] speech recognition,[3][4] natural language processing, and neural machine translation.[5][6]
However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative.
In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
One origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells.[7][8] In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex.[9][10] During the 1940s, multiple people proposed the existence of feedback in the brain, which was a contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory.[11] The McCulloch and Pitts paper (1943), which proposed the McCulloch-Pitts neuron model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past.[12] They were both interested in closed loops as possible explanations for e.g. epilepsy and causalgia.[13][14] Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences.[15] See [16] for an extensive review of recurrent neural network models in neuroscience.
Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule.[18]: 73–75 Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies for Hebbian learning in these networks,[17]: Chapter 19, 21 and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.[17]: Section 19.11
Similar networks were published by Kaoru Nakano in 1971,[19][20] Shun'ichi Amari in 1972,[21] and William A. Little in 1974,[22] who was acknowledged by Hopfield in his 1982 paper.
Another origin of RNN was statistical mechanics. The Ising model was developed by Wilhelm Lenz[23] and Ernst Ising[24] in the 1920s[25] as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.[26]
The Sherrington–Kirkpatrick model of spin glass, published in 1975,[27] is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In his 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions.[28] In a 1984 paper he extended this to continuous activation functions.[29] It became a standard model for the study of neural networks through statistical mechanics.[30][31]
Modern RNN networks are mainly based on two architectures: LSTM and BRNN.[32]
At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets".[33] Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[34]
Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains.[35][36] It became the default choice for RNN architecture.
Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions.[37] These two are often combined, giving the bidirectional LSTM architecture.
Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.[38][39] They also improved large-vocabulary speech recognition[3][4] and text-to-speech synthesis,[40] and were used in Google voice search and dictation on Android devices.[41] They broke records for improved machine translation,[42] language modeling[43] and multilingual language processing.[44] Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.[45]
The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014.[46][47] A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
RNNs come in many variants. Abstractly speaking, an RNN is a function f_θ of type (x_t, h_t) ↦ (y_t, h_{t+1}), where
In words, it is a neural network that maps an input x_t into an output y_t, with the hidden vector h_t playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to output, and modifies its "memory" to help it better perform future processing.
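The abstract map (x_t, h_t) ↦ (y_t, h_{t+1}) can be realized as a minimal vanilla recurrent cell; the weight matrices below are random stand-ins, not trained parameters.

```python
import numpy as np

# Minimal sketch of the abstract RNN map (x_t, h_t) -> (y_t, h_{t+1})
# as a vanilla recurrent cell.
def rnn_step(x, h, Wxh, Whh, Why):
    h_next = np.tanh(x @ Wxh + h @ Whh)   # update the "memory"
    y = h_next @ Why                      # emit an output
    return y, h_next

rng = np.random.default_rng(0)
din, dh, dout = 3, 4, 2
Wxh = rng.normal(size=(din, dh))
Whh = rng.normal(size=(dh, dh))
Why = rng.normal(size=(dh, dout))
h = np.zeros(dh)
for x in rng.normal(size=(6, din)):       # run over a length-6 sequence
    y, h = rnn_step(x, h, Wxh, Whh, Why)
```

Note that the same weights are reused at every time step; only the hidden state h carries information forward.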
The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to belayersare, in fact, different steps in time, "unfolded" to produce the appearance oflayers.
A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of stacked RNN.
A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
The two output sequences are then concatenated to give the total output: ((y_0, y_0'), (y_1, y_1'), …, (y_N, y_N')).
Bidirectional RNNs allow the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The ELMo model (2018)[48] is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.
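The bidirectional scheme above can be sketched as two passes over the sequence whose per-step outputs are concatenated; here the output of each direction is simply its hidden state, and the weights are random stand-ins.

```python
import numpy as np

# Sketch of a bidirectional RNN: one pass left-to-right, one
# right-to-left, with the per-step outputs concatenated as (y_t, y'_t).
def run(xs, Wxh, Whh, h0):
    hs, h = [], h0
    for x in xs:
        h = np.tanh(x @ Wxh + h @ Whh)
        hs.append(h)
    return hs

def birnn(xs, fwd, bwd, h0):
    ys_f = run(xs, *fwd, h0)
    ys_b = run(xs[::-1], *bwd, h0)[::-1]  # backward pass, realigned
    return [np.concatenate([f, b]) for f, b in zip(ys_f, ys_b)]

rng = np.random.default_rng(0)
din, dh = 3, 4
fwd = (rng.normal(size=(din, dh)), rng.normal(size=(dh, dh)))
bwd = (rng.normal(size=(din, dh)), rng.normal(size=(dh, dh)))
out = birnn(list(rng.normal(size=(5, din))), fwd, bwd, np.zeros(dh))
```

Each output vector has twice the hidden size, since it combines left-context and right-context states for that position.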
Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state-of-the-art neural machine translators during the 2014–2017 period. This was an instrumental step towards the development of transformers.[49]
An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions.[50] For example, the row-by-row direction processes an n×n grid of vectors x_{i,j} in the order x_{1,1}, x_{1,2}, …, x_{1,n}, x_{2,1}, x_{2,2}, …, x_{2,n}, …, x_{n,n}. The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x_{i,j} depending on its hidden state and cell state on the top and the left side: h_{i−1,j}, c_{i−1,j} and h_{i,j−1}, c_{i,j−1}. The other processes it from the top-right corner to the bottom-left.
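The row-by-row scan order described above is just row-major traversal of the grid indices, which can be written out directly:

```python
# Row-by-row scan order for an n-by-n grid, as described above:
# x_{1,1}, x_{1,2}, ..., x_{1,n}, x_{2,1}, ..., x_{n,n}.
def row_major_order(n):
    return [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]

order = row_major_order(2)
```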
Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it is guaranteed to converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.
An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units fixed with a weight of one.[51] At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the power of a standard multilayer perceptron.
Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.[51]
Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).
Variables and functions
Long short-term memory(LSTM) is the most widely used RNN architecture. It was designed to solve thevanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates".[54]LSTM prevents backpropagated errors from vanishing or exploding.[55]Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.[56]LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components.
Many applications use stacks of LSTMs,[57]for which it is called "deep LSTM". LSTM can learn to recognizecontext-sensitive languagesunlike previous models based onhidden Markov models(HMM) and similar concepts.[58]
The gated recurrent unit (GRU), introduced in 2014, was designed as a simplification of LSTM. GRUs are used in the full form and in several further simplified variants.[59][60] They have fewer parameters than LSTM, as they lack an output gate.[61]
Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory, and no clear performance difference between LSTM and GRU has emerged.[62][63]
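The "fewer parameters" claim can be made concrete with a rough count of the recurrent weight matrices (biases omitted; the sizes below are illustrative, not from any cited model):

```python
def lstm_params(n_in, n_h):
    # four gate blocks: input, forget, output gates + cell candidate
    return 4 * (n_h * n_in + n_h * n_h)

def gru_params(n_in, n_h):
    # three gate blocks: update, reset gates + candidate (no output gate)
    return 3 * (n_h * n_in + n_h * n_h)

print(lstm_params(10, 20), gru_params(10, 20))  # GRU is 3/4 the size
```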
Introduced by Bart Kosko,[64] a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.[65]
A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.[66]
Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series.[67] A variant for spiking neurons is known as a liquid state machine.[68]
A recursive neural network[69] is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation.[70][71] They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing.[72] The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.[73]
Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.[74]
Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.[75]
Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context-free grammars (CFGs).[76]
Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.[77]
An RNN can be trained into a conditionally generative model of sequences, also known as autoregression.
Concretely, let us consider the problem of machine translation: given a sequence (x1,x2,…,xn){\displaystyle (x_{1},x_{2},\dots ,x_{n})} of English words, the model is to produce a sequence (y1,…,ym){\displaystyle (y_{1},\dots ,y_{m})} of French words. It is to be solved by a seq2seq model.
Now, during training, the encoder half of the model would first ingest (x1,x2,…,xn){\displaystyle (x_{1},x_{2},\dots ,x_{n})}, then the decoder half would start generating a sequence (y^1,y^2,…,y^l){\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\dots ,{\hat {y}}_{l})}. The problem is that if the model makes a mistake early on, say at y^2{\displaystyle {\hat {y}}_{2}}, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift y^2{\displaystyle {\hat {y}}_{2}} towards y2{\displaystyle y_{2}}, but not the others.
Teacher forcing makes the decoder use the correct output sequence for generating the next entry in the sequence. So, for example, it would see (y1,…,yk){\displaystyle (y_{1},\dots ,y_{k})} in order to generate y^k+1{\displaystyle {\hat {y}}_{k+1}}.
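The difference between free-running decoding and teacher forcing can be shown with a deliberately broken toy decoder (`decode_step` is a hypothetical stand-in that just echoes its previous token; all names here are illustrative):

```python
def decode_step(prev_token, state):
    # toy "decoder": echoes the previous token and bumps a counter state
    return prev_token, state + 1

gold = ["je", "suis", "la"]  # the correct target sequence

# Free-running: each prediction is fed back as the next input,
# so an early mistake propagates through the whole output.
state, prev, free_run = 0, "<bos>", []
for _ in gold:
    prev, state = decode_step(prev, state)
    free_run.append(prev)

# Teacher forcing: the *gold* token is fed in at each step,
# regardless of what the model actually predicted.
state, prev, forced = 0, "<bos>", []
for y in gold:
    pred, state = decode_step(prev, state)
    forced.append(pred)
    prev = y  # correct token replaces the model's own output
```

Under teacher forcing every step is conditioned on correct context, so each position gets an independent learning signal instead of compounding an early error.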
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable.
The standard method for training RNNs by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,[78][79] which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.[80][81]
For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.[82] An online hybrid between BPTT and RTRL with intermediate complexity exists,[83][84] along with variants for continuous time.[85]
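BPTT's trade-off (store all forward activations, then sweep backward) is easiest to see on a toy scalar linear RNN. This sketch is illustrative only (the recurrence h_t = w·h_{t-1} + x_t and loss are made up for the example):

```python
# Toy BPTT: scalar linear RNN h_t = w*h_{t-1} + x_t, loss = 0.5*(h_T - y)^2.
w, T = 0.9, 5
xs = [1.0] * T
hs = [0.0]
for x in xs:                      # forward pass, storing ALL activations
    hs.append(w * hs[-1] + x)

y_target = 10.0
delta = hs[-1] - y_target         # dL/dh_T
grad = 0.0
for t in range(T, 0, -1):         # backward sweep through time
    grad += delta * hs[t - 1]     # local term: dh_t/dw = h_{t-1}
    delta *= w                    # propagate dL/dh_{t-1} through the recurrence
```

The stored list `hs` is exactly the "all forward activations within the given time horizon" that BPTT pays for in memory; RTRL avoids that storage but pays more per-step compute.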
A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[55][86] LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.[36] This problem is also solved in the independently recurrent neural network (IndRNN)[87] by reducing the context of a neuron to its own past state; the cross-neuron information can then be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the vanishing and exploding gradient problem.
The online algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.[88] It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.
One approach to gradient information computation in RNNs with arbitrary architectures is based on diagrammatic derivation using signal-flow graphs.[89] It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.[90] It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.[90]
Connectionist temporal classification (CTC)[91] is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.[92]
Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs is the genetic algorithm, especially in unstructured networks.[93][94][95]
Initially, the neural network weights are encoded into the genetic algorithm in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is:
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
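The fitness evaluation described above can be sketched directly. This is a hedged toy (the data, the 1/(1+MSE) form, and the crude random search standing in for a full genetic algorithm are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))            # toy "training sequence" inputs
true_w = np.array([1.0, -2.0, 0.5])     # weights we hope to recover
y = X @ true_w                          # toy targets

def fitness(w):
    # reciprocal-style fitness: high when mean-squared error is low
    mse = np.mean((X @ w - y) ** 2)
    return 1.0 / (1.0 + mse)

# Crude stand-in for evolution: keep the best of many random candidates.
best = max((rng.normal(size=3) for _ in range(2000)), key=fitness)
```

Maximizing this fitness is equivalent to minimizing the mean-squared error, which is the point made in the text.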
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.
The independently recurrent neural network (IndRNN)[87]addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections.
The neural history compressor is an unsupervised stack of RNNs.[96]At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.
The system effectively minimizes the description length or the negative logarithm of the probability of the data.[97] Given a lot of learnable predictability in the incoming data sequence, the highest-level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.
It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).[96]Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.[96]
A generative model partially overcame the vanishing gradient problem[55] of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.[34]
Second-order RNNs use higher-order weights wijk{\displaystyle w{}_{ijk}} instead of the standard wij{\displaystyle w{}_{ij}} weights, and states can be a product. This allows a direct mapping to a finite-state machine in training, stability, and representation.[98][99] Long short-term memory is an example of this but has no such formal mappings or proof of stability.
Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms.[96][100] Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models.[101]
Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods.[102]
Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.[103]
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization, depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties.[104][105] With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence.[citation needed] Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.[101][106]
Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.[107] The memristors (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network with properties very similar to (Little-)Hopfield networks: they have continuous dynamics, a limited memory capacity, and natural relaxation via the minimization of a function that is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage over a resistor–capacitor network of more interesting non-linear behavior. From this point of view, engineering analog memristive networks constitutes a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring, or topology.
The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation.[108]
A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time.[16]
For a neuron i{\displaystyle i} in the network with activation yi{\displaystyle y_{i}}, the rate of change of activation is given by:

τiy˙i=−yi+∑j=1nwjiσ(yj−Θj)+Ii(t){\displaystyle \tau _{i}{\dot {y}}_{i}=-y_{i}+\sum _{j=1}^{n}w_{ji}\sigma (y_{j}-\Theta _{j})+I_{i}(t)}

where τi{\displaystyle \tau _{i}} is the time constant of the postsynaptic node, wji{\displaystyle w_{ji}} is the weight of the connection from node j{\displaystyle j} to node i{\displaystyle i}, σ{\displaystyle \sigma } is a sigmoid function, Θj{\displaystyle \Theta _{j}} is the bias of the presynaptic node, and Ii(t){\displaystyle I_{i}(t)} is the input (if any) to the node.
CTRNNs have been applied to evolutionary robotics, where they have been used to address vision,[109] co-operation,[110] and minimal cognitive behaviour.[111]
Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have been transformed into equivalent difference equations.[112] This transformation can be thought of as occurring after the post-synaptic node activation functions yi(t){\displaystyle y_{i}(t)} have been low-pass filtered but prior to sampling.
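The passage above can be illustrated by discretizing the CTRNN ODE with a forward-Euler step, which turns the differential equation into a difference equation. This is a sketch under assumed toy parameters (sizes, weights, and the step size are made up):

```python
import numpy as np

def sigma(x):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

n = 3
rng = np.random.default_rng(0)
tau = np.ones(n)                  # time constants tau_i
w = rng.normal(size=(n, n)) * 0.5 # connection weights w_ji
theta = np.zeros(n)               # biases Theta_j
I = np.zeros(n)                   # external input I_i(t)
y = rng.normal(size=n)            # initial activations

dt = 0.01
for _ in range(1000):
    # continuous dynamics: tau_i * dy_i/dt = -y_i + sum_j w_ji*sigma(y_j - theta_j) + I_i
    dy = (-y + w @ sigma(y - theta) + I) / tau
    y = y + dt * dy               # Euler step = the equivalent difference equation
```

As `dt` shrinks, the discrete update approaches the continuous-time trajectory, which is the sense in which discrete-time RNNs can be read as sampled CTRNNs.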
Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters, and also as a nonlinear autoregressive exogenous model (NARX).[113] RNNs have infinite impulse response, whereas convolutional neural networks have finite impulse response. Both classes of networks exhibit temporal dynamic behavior.[114] A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.
The effect of memory-based learning for the recognition of sequences can also be implemented by a more biological-based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency spiking activity.[115]
Additional stored states, and storage under direct control by the network, can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if it incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called a Feedback Neural Network (FNN).
Modern libraries provide runtime-optimized implementations of the above functionality or allow speeding up the slow loop by just-in-time compilation.
Applications of recurrent neural networks include:
https://en.wikipedia.org/wiki/Recurrent_neural_network
The transformer is a deep learning architecture that was developed by researchers at Google and is based on the multi-head attention mechanism, which was proposed in the 2017 paper "Attention Is All You Need".[1] Text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table.[1] At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM).[2] Later variations have been widely adopted for training large language models (LLMs) on large (language) datasets.[3]
Transformers were first developed as an improvement over previous architectures for machine translation,[4][5] but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning,[6][7] audio,[8] multimodal learning, robotics,[9] and even playing chess.[10] The architecture has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs)[11] and BERT[12] (bidirectional encoder representations from transformers).
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995),[note 1] an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units.[13] Neural networks using multiplicative units were later called sigma-pi networks[14] or higher-order networks.[15] LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs.[note 2] Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input.[16] One of its two networks has "fast weights" or "dynamic links" (1981).[17][18][19] A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network, which computes answers to queries.[16] This was later shown to be equivalent to the unnormalized linear Transformer.[20][21]
The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers[22][23]). The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.[22][23]
A 380M-parameter model for machine translation used two long short-term memories (LSTMs).[23] Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRUs) instead of LSTMs.[22] Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[24][25]
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.[26]
The RNNsearch model introduced an attention mechanism into seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name comes from the fact that the model "emulates searching through a source sentence during decoding a translation".[4]
The relative performance of global (that of RNNsearch) and local (sliding window) attention model architectures was compared for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.[27]
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[28] It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.[29]
Seq2seq models with attention (including self-attention) still suffered from the same issue as recurrent networks: they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved a state-of-the-art result in textual entailment with an order of magnitude fewer parameters than LSTMs.[30] One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, thus the title "attention is all you need".[31] That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical.[31] In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.[32]
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.[1] This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor in its widespread use in large neural networks.[33]
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.[34] The Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model.[35] In October 2019, Google started using BERT to process search queries.[36] In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model.[37]
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly[38] popular, triggering a boom around large language models.[39][40]
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer,[41] speech recognition,[42] robotics,[6] and multimodal learning.[43] The vision transformer, in turn, stimulated new developments in convolutional neural networks.[44] Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024),[45] and Sora (2024) use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
The plain transformer architecture had difficulty converging. In the original paper,[1] the authors recommended using learning rate warmup: the learning rate should linearly scale up from 0 to the maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again.
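The warmup-then-decay schedule from the original paper can be written compactly. The formula below is the published one (rate = d_model^(-0.5) · min(step^(-0.5), step · warmup^(-1.5))); the particular `d_model` and warmup values are just illustrative defaults:

```python
def lr(step, d_model=512, warmup_steps=4000):
    # Linear warmup for the first warmup_steps, then inverse-square-root decay.
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The two branches of min() cross exactly at step == warmup_steps,
# which is where the learning rate peaks.
peak = lr(4000)
```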
A 2020 paper found that using layer normalization before (instead of after) multi-headed attention and feedforward layers stabilizes training, not requiring learning rate warmup.[46]
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretraining dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
The T5 transformer report[47] documents a large number of natural language pretraining tasks. Some examples are:
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
In general, there are 3 classes of language modelling tasks: "masked",[49]"autoregressive",[50]and "prefixLM".[51]These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task,[49] one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens:

Loss=−∑t∈masked tokensln(probability of t conditional on its context){\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}

and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
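The loss above is just a sum of negative log-probabilities over the masked positions. A minimal numeric sketch (the probabilities here are made up for illustration, standing in for a model's predictions):

```python
import math

# Model's predicted probability for the true token at each masked position.
masked_token_probs = [0.9, 0.5, 0.25]

# Sum of negative log-probabilities: low when the model is confident and right.
loss = -sum(math.log(p) for p in masked_token_probs)
```

Raising any of the probabilities toward 1 drives its term toward 0, which is exactly what minimizing the loss encourages.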
In an autoregressive task,[50] the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task,[51] the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
All transformers have the same primary components:
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as xW{\displaystyle xW}.
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size nvocabulary{\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix M{\displaystyle M}. For example, if the input token is 3{\displaystyle 3}, then the one-hot representation is [0,0,0,1,0,0,…]{\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector is

Embed(3)=[0,0,0,1,0,0,…]M{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}

The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
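The equivalence between a table lookup and a one-hot matrix multiplication can be checked directly (the tiny embedding matrix below is made up for illustration):

```python
import numpy as np

n_vocab, d_emb = 6, 4
# Toy embedding matrix: row i is the embedding of token i.
M = np.arange(n_vocab * d_emb, dtype=float).reshape(n_vocab, d_emb)

token = 3
one_hot = np.zeros(n_vocab)
one_hot[token] = 1.0

# One-hot multiplication selects exactly row `token` of M.
assert np.array_equal(one_hot @ M, M[token])
```

In practice libraries implement the lookup as direct row indexing, since multiplying by a one-hot vector would waste work.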
The number of dimensions in an embedding vector is called the hidden size or embedding size and written as demb{\displaystyle d_{\text{emb}}}.[35] This size is written as dmodel{\displaystyle d_{\text{model}}} in the original Transformer paper.[1]
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer:

UnEmbed(x)=softmax(xW+b){\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}

The matrix has shape (demb,nvocabulary){\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrix M{\displaystyle M} and the un-embedding matrix W{\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying.[52]
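A minimal sketch of the un-embedding layer, with weight tying shown explicitly (sizes and random values are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

d_emb, n_vocab = 4, 6
rng = np.random.default_rng(0)
M = rng.normal(size=(n_vocab, d_emb))  # embedding matrix (n_vocab, d_emb)
W = M.T                                # weight tying: un-embedding is M's transpose
b = np.zeros(n_vocab)

x = rng.normal(size=d_emb)             # a final-layer hidden vector (row vector)
p = softmax(x @ W + b)                 # probability distribution over the vocabulary
```

The output `p` is non-negative and sums to one, as a distribution over tokens must.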
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type f:R→Rd;d∈Z,d>0{\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, where d{\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper[1]is:(f(t)2k,f(t)2k+1)=(sin(θ),cos(θ))∀k∈{0,1,…,d/2−1}{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}where θ=trk,r=N2/d{\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}.
Here, N{\displaystyle N} is a free parameter that should be significantly larger than the biggest t{\displaystyle t} that would be input into the positional encoding function. The original paper uses N=10000{\displaystyle N=10000}.
The function is in a simpler form when written as a complex function of typef:R→Cd/2{\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}f(t)=(eit/rk)k=0,1,…,d2−1{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}wherer=N2/d{\displaystyle r=N^{2/d}}.
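The real-valued form of this encoding can be sketched directly in NumPy (using the paper's N = 10000; the dimension here is an arbitrary small even number):

```python
import numpy as np

def positional_encoding(t, d, N=10000):
    # d must be even; one (sin, cos) pair per frequency index k
    r = N ** (2.0 / d)
    k = np.arange(d // 2)
    theta = t / r ** k
    pe = np.empty(d)
    pe[0::2] = np.sin(theta)  # even coordinates: sin
    pe[1::2] = np.cos(theta)  # odd coordinates: cos
    return pe

# each (sin, cos) pair lies on the unit circle
pairs = positional_encoding(t=5, d=8).reshape(-1, 2)
assert np.allclose((pairs ** 2).sum(axis=1), 1.0)
```

At position t = 0 the encoding is [0, 1, 0, 1, …], since sin(0) = 0 and cos(0) = 1 at every frequency.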
The main reason for using this positional encoding function is that, with it, shifts are linear transformations: f(t+Δt)=diag(f(Δt))f(t){\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}where Δt∈R{\displaystyle \Delta t\in \mathbb {R} } is the distance one wishes to shift. This allows the transformer to take any encoded position and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as linear transformations:∑jcjf(t+Δtj)=(∑jcjdiag(f(Δtj)))f(t){\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}for any constantscj{\displaystyle c_{j}}. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in aconvolutional neural networklanguage model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
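The shift property is easy to check numerically in the complex form, where multiplying by diag(f(Δt)) is just elementwise multiplication by f(Δt):

```python
import numpy as np

def f(t, d, N=10000):
    # complex-valued positional encoding: one unit complex number per frequency
    r = N ** (2.0 / d)
    k = np.arange(d // 2)
    return np.exp(1j * t / r ** k)

d = 8
t, dt = 3.0, 2.5
# f(t + dt) = diag(f(dt)) f(t), i.e. elementwise multiplication
assert np.allclose(f(t + dt, d), f(dt, d) * f(t, d))
```

The identity holds exactly because exp(i(t + Δt)/r^k) = exp(iΔt/r^k) · exp(it/r^k) at every frequency index k.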
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together, one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's own output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via the self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).[53][54]
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps.[54]These feed-forward layers contain most of the parameters in a Transformer model.
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons: FFN(x)=ϕ(xW(1)+b(1))W(2)+b(2){\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}where W(1){\displaystyle W^{(1)}} and W(2){\displaystyle W^{(2)}} are weight matrices, b(1){\displaystyle b^{(1)}} and b(2){\displaystyle b^{(2)}} are bias vectors, and ϕ{\displaystyle \phi } is its activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called the intermediate size (GPT),[55]filter size (BERT),[35]or feedforward size (BERT).[35]It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: dffn=4demb{\displaystyle d_{\text{ffn}}=4d_{\text{emb}}}.
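A minimal NumPy sketch of such an FFN module with ReLU and the common 4x ratio (the weight values are random placeholders, not trained parameters):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    # two-layer MLP with ReLU, as in the original Transformer
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

rng = np.random.default_rng(0)
d_emb, d_ffn = 16, 64  # d_ffn = 4 * d_emb, the common ratio
W1 = rng.standard_normal((d_emb, d_ffn)) * 0.1
b1 = np.zeros(d_ffn)
W2 = rng.standard_normal((d_ffn, d_emb)) * 0.1
b2 = np.zeros(d_emb)

x = rng.standard_normal((3, d_emb))  # three token vectors
y = ffn(x, W1, b1, W2, b2)
assert y.shape == (3, d_emb)  # the FFN maps each vector back to d_emb
```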
The attention units used in the Transformer architecture are scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights WQ{\displaystyle W^{Q}}, the key weights WK{\displaystyle W^{K}}, and the value weights WV{\displaystyle W^{V}}.
The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length ℓseq, query{\displaystyle \ell _{\text{seq, query}}}, and each entry is a vector of dimension demb, query{\displaystyle d_{\text{emb, query}}}. Similarly for the key and value sequences.
Each vector xi,query{\displaystyle x_{i,{\text{query}}}} in the query sequence is multiplied by the matrix WQ{\displaystyle W^{Q}} to produce a query vector qi=xi,queryWQ{\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}}. The matrix of all query vectors is the query matrix: Q=XqueryWQ{\displaystyle Q=X_{\text{query}}W^{Q}}Similarly, we construct the key matrix K=XkeyWK{\displaystyle K=X_{\text{key}}W^{K}} and the value matrix V=XvalueWV{\displaystyle V=X_{\text{value}}W^{V}}.
It is usually the case that all WQ,WK,WV{\displaystyle W^{Q},W^{K},W^{V}} are square matrices, meaning demb, query=dquery{\displaystyle d_{\text{emb, query}}=d_{\text{query}}}, etc.
Attention weights are calculated using the query and key vectors: the attention weight aij{\displaystyle a_{ij}} from token i{\displaystyle i} to token j{\displaystyle j} is the dot product between qi{\displaystyle q_{i}} and kj{\displaystyle k_{j}}. The attention weights are divided by the square root of the dimension of the key vectors, dk{\displaystyle {\sqrt {d_{k}}}}, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that WQ{\displaystyle W^{Q}} and WK{\displaystyle W^{K}} are different matrices allows attention to be non-symmetric: if token i{\displaystyle i} attends to token j{\displaystyle j} (i.e. qi⋅kj{\displaystyle q_{i}\cdot k_{j}} is large), this does not necessarily mean that token j{\displaystyle j} will attend to token i{\displaystyle i} (i.e. qj⋅ki{\displaystyle q_{j}\cdot k_{i}} could be small). The output of the attention unit for token i{\displaystyle i} is the weighted sum of the value vectors of all tokens, weighted by aij{\displaystyle a_{ij}}, the attention from token i{\displaystyle i} to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because optimized matrix-multiplication kernels compute it quickly. The matrices Q{\displaystyle Q}, K{\displaystyle K} and V{\displaystyle V} are defined as the matrices where the i{\displaystyle i}th rows are vectors qi{\displaystyle q_{i}}, ki{\displaystyle k_{i}}, and vi{\displaystyle v_{i}} respectively. Then we can represent the attention as Attention(Q,K,V)=softmax(QKTdk)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
where the softmax is applied over each of the rows of the matrix.
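This formula transcribes almost directly into NumPy (the sequence lengths and dimensions here are arbitrary, illustrating that the query and key/value sequences may have different lengths):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))   # 5 query vectors, d_query = d_key = 8
K = rng.standard_normal((7, 8))   # 7 key vectors
V = rng.standard_normal((7, 16))  # 7 value vectors, d_value = 16
out = attention(Q, K, V)
assert out.shape == (5, 16)  # one output vector per query
```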
The number of dimensions in a query vector is the query size dquery{\displaystyle d_{\text{query}}}, and similarly for the key size dkey{\displaystyle d_{\text{key}}} and value size dvalue{\displaystyle d_{\text{value}}}. The output dimension of an attention head is its head dimension dhead{\displaystyle d_{\text{head}}}. The attention mechanism requires the following three equalities to hold: ℓseq, key=ℓseq, value,dquery=dkey,dvalue=dhead{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then Xquery=Xkey=Xvalue{\displaystyle X_{\text{query}}=X_{\text{key}}=X_{\text{value}}}. If the attention head is used in a cross-attention fashion, then usually Xquery≠Xkey=Xvalue{\displaystyle X_{\text{query}}\neq X_{\text{key}}=X_{\text{value}}}. It is theoretically possible for all three to be different, but that is rarely the case in practice.
One set of (WQ,WK,WV){\displaystyle \left(W^{Q},W^{K},W^{V}\right)} matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, WQ{\displaystyle W^{Q}} and WK{\displaystyle W^{K}}, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix WV{\displaystyle W^{V}}, in combination with its part of the output projection matrix WO{\displaystyle W^{O}}, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects.[56]The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by i{\displaystyle i}; then we have MultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(QWiQ,KWiK,VWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}))W^{O}}where the matrix X{\displaystyle X} is the concatenation of word embeddings, the matrices WiQ,WiK,WiV{\displaystyle W_{i}^{Q},W_{i}^{K},W_{i}^{V}} are "projection matrices" owned by individual attention head i{\displaystyle i}, and WO{\displaystyle W^{O}} is a final projection matrix owned by the whole multi-headed attention module.
It is theoretically possible for each attention head to have a different head dimension dhead{\displaystyle d_{\text{head}}}, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: demb=768,nhead=12,dhead=64{\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64}. Since 12×64=768{\displaystyle 12\times 64=768}, its output projection matrix WO∈R(12×64)×768{\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}} is a square matrix.
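The concatenate-then-project structure can be sketched in NumPy using the GPT-2-small dimensions from the text (the weights are random placeholders; a real model would learn them):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def multihead_attention(X, Wq, Wk, Wv, Wo):
    # Wq, Wk, Wv: one (d_emb, d_head) projection per head; Wo: (n_head*d_head, d_emb)
    heads = [attention(X @ q, X @ k, X @ v) for q, k, v in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo  # concat heads, then project

rng = np.random.default_rng(0)
d_emb, n_head, d_head, seq = 768, 12, 64, 10  # GPT-2-small sizes
Wq = rng.standard_normal((n_head, d_emb, d_head)) * 0.02
Wk = rng.standard_normal((n_head, d_emb, d_head)) * 0.02
Wv = rng.standard_normal((n_head, d_emb, d_head)) * 0.02
Wo = rng.standard_normal((n_head * d_head, d_emb)) * 0.02
X = rng.standard_normal((seq, d_emb))
assert multihead_attention(X, Wq, Wk, Wv, Wo).shape == (seq, d_emb)
```

Because the heads are independent until the final projection, the list comprehension over heads could be computed in parallel, which is the property the text describes.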
The Transformer architecture is constructed to calculate output tokens iteratively. Assuming t=0{\displaystyle t=0} refers to the calculation of the first output token i=0{\displaystyle i=0}, then for steps t>0{\displaystyle t>0}, the output token i=0{\displaystyle i=0} must remain constant. This ensures properties of the model similar to autoregressive models.[1]Therefore, at every time step t{\displaystyle t}, the calculation for all outputs i{\displaystyle i} should not have access to tokens at positions j{\displaystyle j} for j>i{\displaystyle j>i} (as is naturally the case for time step t=i{\displaystyle t=i}, when tokens j>t{\displaystyle j>t} are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix M{\displaystyle M} that is −∞{\displaystyle -\infty } at entries where the attention link must be cut, and 0{\displaystyle 0} at other places:MaskedAttention(Q,K,V)=softmax(M+QKTdk)V{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}The following matrix is commonly used in decoder self-attention modules, called "causal masking":Mcausal=[0−∞−∞…−∞00−∞…−∞000…−∞⋮⋮⋮⋱⋮000…0]{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}}
In words, it means that each token can pay attention to itself and every token before it, but not to any token after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of the mask matrix, XLNet considers all masks of the form PMcausalP−1{\displaystyle PM_{\text{causal}}P^{-1}}, where P{\displaystyle P} is a random permutation matrix.[57]
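A small NumPy sketch of the causal mask and its effect on the softmax (with all attention scores equal, each token simply attends uniformly over its visible prefix):

```python
import numpy as np

def causal_mask(n):
    # 0 on and below the diagonal, -inf strictly above it
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# equal scores everywhere, so the mask alone determines the pattern
w = softmax(np.ones((4, 4)) + causal_mask(4))
assert np.allclose(w[0], [1, 0, 0, 0])              # first token sees only itself
assert np.allclose(w[3], [0.25, 0.25, 0.25, 0.25])  # last token sees all four
```

The -inf entries become exact zeros after the softmax, since exp(-inf) = 0, which is why the mask is added before, not after, the softmax stage.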
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes an input as a sequence of input vectors, applies the self-attention mechanism, to produce an intermediate sequence of vectors, then applies the feed-forward layer for each vector individually. Schematically, we have:given input vectorsh0,h1,…combine them into a matrixH=[h0h1⋮]EncoderLayer(H)=[FFN(MultiheadedAttention(H,H,H)0)FFN(MultiheadedAttention(H,H,H)1)⋮]{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}}
where FFN{\displaystyle {\text{FFN}}} stands for "feed-forward network". We can more succinctly write it as EncoderLayer(H)=FFN(MultiheadedAttention(H,H,H)){\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))}with the implicit convention that the FFN{\displaystyle {\text{FFN}}} is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.[1][54]
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow.[1]This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have: H′=MaskedMultiheadedAttention(H,H,H)DecoderLayer(H)=FFN(MultiheadedAttention(H′,HE,HE)){\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}}where HE{\displaystyle H^{E}} is the matrix with rows being the output vectors from the encoder.
The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to these probabilities, and the decoder can be run again to produce the next token, and so on, autoregressively generating output text.
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence.[58]BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.[35]
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed in practice for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is LayerNorm(x+Sublayer(x)){\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))}where Sublayer(x){\displaystyle \mathrm {Sublayer} (x)} is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer isx+Sublayer(LayerNorm(x)){\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[59]was found to be easier to train, requiring no warm-up, leading to faster convergence.[46]
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from[60]
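The cited pseudocode is not reproduced here, but the pre-LN layer structure it describes can be sketched in Python (a minimal sketch: the LayerNorm omits the learned gain and bias, and the attention and feed-forward callables are placeholders to be supplied):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each vector (row) to zero mean and unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_ln(x, sublayer):
    # pre-LN convention: x + Sublayer(LayerNorm(x))
    return x + sublayer(layer_norm(x))

def encoder_layer(H, self_attn, ffn):
    H = pre_ln(H, lambda x: self_attn(x, x, x))
    return pre_ln(H, ffn)

def decoder_layer(H, HE, masked_self_attn, cross_attn, ffn):
    H = pre_ln(H, lambda x: masked_self_attn(x, x, x))
    H = pre_ln(H, lambda x: cross_attn(x, HE, HE))  # cross-attend to encoder output HE
    return pre_ln(H, ffn)

# sanity check: with sublayers that output zero, each layer reduces to the identity,
# which is exactly what the residual connections guarantee
zero_attn = lambda q, k, v: np.zeros_like(q)
zero_ffn = lambda x: np.zeros_like(x)
H = np.arange(12.0).reshape(3, 4)
assert np.allclose(encoder_layer(H, zero_attn, zero_ffn), H)
assert np.allclose(decoder_layer(H, H, zero_attn, zero_attn, zero_ffn), H)
```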
The Transformer architecture, being modular, allows variations. Several common variations are described here.[61]
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.[51]
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.[61]
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form[61]: Figure 3MprefixLM=[0−∞0Mcausal]{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}}where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. PrefixLMs resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmark comparisons.[51]
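The block structure of this mask is straightforward to construct in NumPy: full bidirectional attention within the prefix, causal attention within the generated suffix, and no attention from the prefix into the suffix:

```python
import numpy as np

def causal_mask(n):
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

def prefix_lm_mask(n_prefix, n_total):
    m = np.zeros((n_total, n_total))
    m[:n_prefix, n_prefix:] = -np.inf  # prefix rows cannot see the generated part
    m[n_prefix:, n_prefix:] = causal_mask(n_total - n_prefix)  # causal within the suffix
    return m

M = prefix_lm_mask(2, 5)
assert M[0, 1] == 0        # full attention inside the prefix
assert np.isinf(M[0, 3])   # prefix row cannot attend ahead
assert M[4, 0] == 0        # every generated token sees the prefix
assert np.isinf(M[2, 3])   # causal masking within the suffix
```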
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.[62]
The original transformer uses the ReLU activation function. Other activation functions have since been developed. The Llama series and PaLM used SwiGLU;[63]both GPT-1 and BERT[35]used GELU.[64]
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.[63]
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm,[65]which is used in the Llama series. Other examples include CapsuleNorm,[66]ScaleNorm,[67]and FixNorm.[67]
Transformers may use other positional encoding methods than sinusoidal.[68]
The original Transformer paper reported using a learned positional encoding,[69]but found it not superior to the sinusoidal one.[1]A later work[70]found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
RoPE (rotary positional embedding),[71]is best explained by considering a list of 2-dimensional vectors[(x1(1),x1(2)),(x2(1),x2(2)),(x3(1),x3(2)),...]{\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}. Now pick some angleθ{\displaystyle \theta }. Then RoPE encoding isRoPE(xm(1),xm(2),m)=(cosmθ−sinmθsinmθcosmθ)(xm(1)xm(2))=(xm(1)cosmθ−xm(2)sinmθxm(2)cosmθ+xm(1)sinmθ){\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}Equivalently, if we write the 2-dimensional vectors as complex numberszm:=xm(1)+ixm(2){\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}}, then RoPE encoding is just multiplication by an angle:RoPE(zm,m)=eimθzm{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}For a list of2n{\displaystyle 2n}-dimensional vectors, a RoPE encoder is defined by a sequence of anglesθ(1),...,θ(n){\displaystyle \theta ^{(1)},...,\theta ^{(n)}}. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:RoPE(x,m)TRoPE(y,n)=RoPE(x,m+k)TRoPE(y,n+k){\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}for any integerk{\displaystyle k}.
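This relative-position property is easy to verify numerically in the complex form, where the real dot product of the underlying 2D vectors is the real part of the conjugate complex product (the angle schedule below mirrors the sinusoidal convention and is illustrative):

```python
import numpy as np

def rope(z, m, thetas):
    # z: complex vector of length d/2; rotate pair k by angle m * thetas[k]
    return z * np.exp(1j * m * thetas)

def real_dot(a, b):
    # real dot product of the underlying 2d real vectors
    return np.real(np.conj(a) @ b)

rng = np.random.default_rng(0)
d_half = 4
thetas = 1.0 / 10000 ** (np.arange(d_half) / d_half)
x = rng.standard_normal(d_half) + 1j * rng.standard_normal(d_half)
y = rng.standard_normal(d_half) + 1j * rng.standard_normal(d_half)

m, n, k = 3, 7, 5
# the dot product depends only on the relative position n - m:
# shifting both positions by k leaves it unchanged
assert np.isclose(real_dot(rope(x, m, thetas), rope(y, n, thetas)),
                  real_dot(rope(x, m + k, thetas), rope(y, n + k, thetas)))
```

The identity follows because conj(e^{imθ}x) · e^{inθ}y = e^{i(n-m)θ} conj(x) · y, which depends only on n - m.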
ALiBi (Attention with Linear Biases)[72]is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is Attention(Q,K,V)=softmax(QKTdk+sB)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}Here, s{\displaystyle s} is a real number ("scalar"), and B{\displaystyle B} is the linear bias matrix defined by B=(0123⋯−1012⋯−2−101⋯−3−2−10⋯⋮⋮⋮⋮⋱){\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}in other words, Bi,j=j−i{\displaystyle B_{i,j}=j-i}. The idea is that the linear bias matrix is a softened mask. Just as 0{\displaystyle 0} represents full attention paid, and −∞{\displaystyle -\infty } represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
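The linear bias matrix B follows directly from its definition via broadcasting:

```python
import numpy as np

def alibi_bias(n):
    # B[i, j] = j - i, the signed distance from query position i to key position j
    idx = np.arange(n)
    return idx[None, :] - idx[:, None]

B = alibi_bias(4)
assert B[0, 3] == 3 and B[3, 0] == -3 and B[2, 2] == 0
assert (B == -B.T).all()  # antisymmetric: the bias penalizes one direction, boosts the other
```

In the ALiBi paper each attention head gets its own slope s, so in a full implementation this one matrix would be scaled differently per head.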
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
Relative Position Encodings[73]is similar to ALiBi, but more generic: Attention(Q,K,V)=softmax(QKTdk+B)V{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}where B{\displaystyle B} is a Toeplitz matrix, that is, Bi,j=Bi′,j′{\displaystyle B_{i,j}=B_{i',j'}} whenever i−j=i′−j′{\displaystyle i-j=i'-j'}. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".[74]
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.[11]
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.[75][76][77]
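A minimal sketch of the idea for a single attention head (real implementations cache per layer and per head, usually in preallocated GPU tensors rather than Python lists):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    # append-only cache of key and value vectors for one attention head
    def __init__(self):
        self.keys, self.values = [], []

    def attend(self, q, k_new, v_new):
        # cache the new key/value, then attend from q over everything cached so far;
        # earlier keys and values are reused, never recomputed
        self.keys.append(k_new)
        self.values.append(v_new)
        K = np.stack(self.keys)
        V = np.stack(self.values)
        w = softmax(q @ K.T / np.sqrt(K.shape[-1]))
        return w @ V

rng = np.random.default_rng(0)
cache = KVCache()
for step in range(5):  # one decoding step per new token
    out = cache.attend(rng.standard_normal(8),
                       rng.standard_normal(8),
                       rng.standard_normal(8))
assert len(cache.keys) == 5 and out.shape == (8,)
```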
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
FlashAttention[78]is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
An improved version, FlashAttention-2,[79][80][81]was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s onA100GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).[82]
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
Multi-Query Attention changes the multiheaded attention mechanism.[83]Whereas normally,
MultiheadedAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWiK,XWiV))WO{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}with Multi-Query Attention, there is just one WK,WV{\displaystyle W^{K},W^{V}}, thus:
MultiQueryAttention(Q,K,V)=Concati∈[nheads](Attention(XWiQ,XWK,XWV))WO{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}}
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.[84]
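The head-grouping relationship among standard multiheaded attention, GQA, and MQA can be illustrated by the mapping from query heads to shared key-value heads (a schematic sketch, ignoring the projections themselves):

```python
def kv_head_for_query_head(i, n_heads, n_groups):
    # map query head i to the key-value head shared by its group
    return i // (n_heads // n_groups)

n_heads = 8
# standard multiheaded attention: one KV head per query head (maximal groups)
assert [kv_head_for_query_head(i, n_heads, 8) for i in range(8)] == list(range(8))
# MQA: a single KV head shared by every query head (one group)
assert all(kv_head_for_query_head(i, n_heads, 1) == 0 for i in range(8))
# GQA with 4 groups: pairs of query heads share a KV head
assert [kv_head_for_query_head(i, n_heads, 4) for i in range(8)] == [0, 0, 1, 1, 2, 2, 3, 3]
```

Fewer KV heads means a proportionally smaller KV cache, which is the main inference-speed benefit.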
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (the KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.[85]
Speculative decoding[86][87]is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token x1,x2,...,x512{\displaystyle x_{1},x_{2},...,x_{512}}, taking time 512TGPT-3{\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each xt{\displaystyle x_{t}} is indeed the token with the largest log-likelihood in the t{\displaystyle t}-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens:x~1,x~2,x~3,x~4{\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes4TGPT-3-small{\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose thatx~1{\displaystyle {\tilde {x}}_{1}}andx~2{\displaystyle {\tilde {x}}_{2}}are verified by GPT-3 as what it would have picked, then those are kept, butx~3{\displaystyle {\tilde {x}}_{3}}is not, sox~3,x~4{\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}}are discarded, and GPT-3 is run on those. This would take4TGPT-3-small+3TGPT-3{\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than4TGPT-3{\displaystyle 4T_{\text{GPT-3}}}.
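The greedy accept/reject step can be sketched abstractly (the verify function stands in for one parallel forward pass of the large model; the token values are hypothetical):

```python
def speculative_greedy_step(draft_tokens, verify):
    # verify(tokens) returns, for each position, the token the large model
    # would have chosen greedily given all preceding tokens (one parallel pass)
    target = verify(draft_tokens)
    accepted = []
    for drafted, wanted in zip(draft_tokens, target):
        if drafted != wanted:
            accepted.append(wanted)  # take the large model's token at the first mismatch
            break                    # and discard the rest of the draft
        accepted.append(drafted)
    return accepted

# hypothetical large model that always wants tokens 10, 11, 12, 13, ...
verify = lambda toks: list(range(10, 10 + len(toks)))
# the draft got the first two tokens right, then diverged:
# two draft tokens are kept, plus one corrected token from the verification pass
assert speculative_greedy_step([10, 11, 99, 99], verify) == [10, 11, 12]
```

When the draft model agrees often, most tokens cost only the cheap draft pass plus a shared verification pass, which is where the speedup comes from.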
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.[86][88]
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict thenexttoken, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.[89][90]
Training transformer-based architectures can be expensive, especially for long inputs.[91] Many methods have been developed to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifted windows.[92] In the audio domain, SepTr decouples the attention in time and frequency domains.[93] Long Range Arena (2020)[94] is a standard benchmark for comparing the behavior of transformer architectures over long inputs.

The standard attention graph is either all-to-all or causal, both of which scale as $O(N^2)$, where $N$ is the number of tokens in a sequence.

Reformer (2020)[91][95] reduces the computational load from $O(N^2)$ to $O(N \ln N)$ by using locality-sensitive hashing and reversible layers.[96]
Sparse attention[97] uses attention graphs that grow more slowly than $O(N^2)$. For example, BigBird (2020)[98] uses random small-world networks, in which the number of edges grows as $O(N)$.
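A sketch of how such a sparse pattern keeps the edge count linear (this simplified mask construction is an assumption for illustration, not BigBird's exact recipe): each row gets a sliding local window, a few random links, and a handful of global tokens that attend everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_attention_mask(n, window=1, n_global=1, n_random=2):
    """BigBird-style boolean mask: True where attention is allowed."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                    # sliding local window
        mask[i, rng.choice(n, n_random)] = True  # random edges
    mask[:n_global, :] = True                    # global tokens see all...
    mask[:, :n_global] = True                    # ...and are seen by all
    return mask

m = sparse_attention_mask(16)
# Each row has a bounded number of allowed positions, so the total number
# of edges grows as O(n), versus n^2 for the dense all-to-all mask.
```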
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers[99] reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
Random Feature Attention (2021)[100] uses Fourier random features:
$$\varphi(x) = \frac{1}{\sqrt{D}} \left[\cos\langle w_1, x\rangle, \sin\langle w_1, x\rangle, \cdots, \cos\langle w_D, x\rangle, \sin\langle w_D, x\rangle\right]^T$$
where $w_1, \dots, w_D$ are independent samples from the normal distribution $N(0, \sigma^2 I)$. This choice of parameters satisfies
$$\mathbb{E}[\langle \varphi(x), \varphi(y)\rangle] = e^{-\frac{\|x-y\|^2}{2\sigma^2}},$$
or equivalently
$$e^{\langle x, y\rangle/\sigma^2} = \mathbb{E}\left[\langle e^{\|x\|^2/2\sigma^2}\varphi(x),\; e^{\|y\|^2/2\sigma^2}\varphi(y)\rangle\right] \approx \langle e^{\|x\|^2/2\sigma^2}\varphi(x),\; e^{\|y\|^2/2\sigma^2}\varphi(y)\rangle.$$
Consequently, one-headed attention with a single query can be written as
$$\text{Attention}(q, K, V) = \text{softmax}\left(\frac{qK^T}{\sqrt{d_k}}\right)V \approx \frac{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2} \varphi(k_i) v_i^T}{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2} \varphi(k_i)}$$
where $\sigma = d_K^{1/4}$. The same construction applies to multiple queries and to multi-headed attention.
This approximation can be computed in linear time, as we can compute the matrix $\varphi(k_i) v_i^T$ first and then multiply it with the query. In essence, we have managed to obtain a more precise version of
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \approx Q\left(K^T V/\sqrt{d_k}\right).$$
Performer (2022)[101] uses the same Random Feature Attention, but $w_1, \dots, w_D$ are first independently sampled from the normal distribution $N(0, \sigma^2 I)$ and then Gram–Schmidt processed.
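The linear-time computation can be sketched as follows (a toy illustration: the feature count `D`, the test data, and the feature scaling are assumptions). The key-side sums are accumulated once over all $N$ keys, so no $N \times N$ attention matrix ever appears.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(X, W):
    """Fourier random features; X: (n, d), W: (D, d) -> (n, 2D)."""
    proj = X @ W.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(W.shape[0])

def rfa_attention(q, K, V, W, sigma):
    """Linear-time approximation of softmax attention for one query."""
    scale = np.exp(np.sum(K**2, axis=1) / (2 * sigma**2))  # e^{|k_i|^2/2s^2}
    fK = phi(K, W) * scale[:, None]        # (N, 2D)
    fq = phi(q[None, :], W)[0]             # (2D,)
    S = fK.T @ V                           # sum_i e^{...} phi(k_i) v_i^T -- O(N)
    z = fK.sum(axis=0)                     # sum_i e^{...} phi(k_i)       -- O(N)
    return (fq @ S) / (fq @ z)

def exact_attention(q, K, V, sigma):
    w = np.exp(K @ q / sigma**2)           # softmax weights; sigma^2 = sqrt(d_k)
    return (w @ V) / w.sum()

d, N, D = 4, 50, 4096
sigma = d ** 0.25                          # sigma = d_k^{1/4}, as in the text
W = rng.normal(size=(D, d)) / sigma        # scaled so the feature inner product
                                           # approximates e^{-|x-y|^2/(2 sigma^2)}
q = 0.3 * rng.normal(size=d)
K = 0.3 * rng.normal(size=(N, d))
V = rng.normal(size=(N, d))
approx = rfa_attention(q, K, V, W, sigma)
exact = exact_attention(q, K, V, sigma)    # approx should be close to exact
```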
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.[102] LLaVA is a vision-language model composed of a language model (Vicuna-13B)[103] and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.[104]
Vision transformers[41] adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
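Patch extraction is simple to write down; a minimal sketch (the patch size and image shape are illustrative, and real vision transformers follow this with a learned linear projection and position embeddings):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into flattened patch vectors -- the token
    sequence a vision transformer consumes. H and W must be divisible
    by the patch size."""
    H, W, C = image.shape
    x = image.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)            # (H/p, W/p, p, p, C)
    return x.reshape(-1, patch * patch * C)   # one row per patch "token"

tokens = patchify(np.zeros((224, 224, 3)))
# 224/16 = 14 patches per side -> 14*14 = 196 tokens of dim 16*16*3 = 768.
```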
Conformer[42] and later Whisper[105] follow the same pattern for speech recognition: the speech signal is first turned into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors, and treated like tokens in a standard transformer.

Perceivers[106][107] are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022),[108] Phenaki (2023),[109] and Muse (2023).[110] Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image.[111] Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.[112] Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted.[110] Phenaki is a text-to-video model: a bidirectional masked transformer conditioned on pre-computed text tokens, whose generated tokens are then decoded to a video.[109]

The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
Attention or focus is the concentration of awareness on some phenomenon to the exclusion of other stimuli.[1] It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence."[2] Attention has also been described as the allocation of limited cognitive processing resources.[3] Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of 1 MByte/sec can enter the bottleneck,[4][5] leading to inattentional blindness.
Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. Areas of active investigation involve determining the source of the sensory cues and signals that generate attention, the effects of these sensory cues and signals on the tuning properties of sensory neurons, and the relationship between attention and other behavioral and cognitive processes, which may include working memory and psychological vigilance. A relatively new body of research, which expands upon earlier research within psychopathology, is investigating the diagnostic symptoms associated with traumatic brain injury and its effects on attention. Attention also varies across cultures.[6] For example, people from cultures that center around collectivism pay greater attention to the big picture in an image given to them, rather than to specific elements of the image. On the other hand, those from more individualistic cultures tend to pay greater attention to the most noticeable portion of the image.[7]
The relationships between attention and consciousness are complex enough that they have warranted philosophical exploration. Such exploration is both ancient and continually relevant, as it can have effects in fields ranging from mental health and the study of disorders of consciousness to artificial intelligence and its domains of research.
Prior to the founding of psychology as a scientific discipline, attention was studied in the field of philosophy. Thus, many of the discoveries in the field of attention were made by philosophers. Psychologist John B. Watson calls Juan Luis Vives the father of modern psychology because, in his book De Anima et Vita (The Soul and Life), he was the first to recognize the importance of empirical investigation.[8] In his work on memory, Vives found that the more closely one attends to stimuli, the better they will be retained.
By the 1990s, psychologists began using positron emission tomography (PET) and later functional magnetic resonance imaging (fMRI) to image the brain while monitoring tasks involving attention. Because this expensive equipment was generally only available in hospitals, psychologists sought cooperation with neurologists. Psychologist Michael Posner (then already renowned for his influential work on visual selective attention) and neurologist Marcus Raichle pioneered brain imaging studies of selective attention.[9] Their results soon sparked interest from the neuroscience community, which until then had been focused on monkey brains. With the development of these technological innovations, neuroscientists became interested in this type of research that combines sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of electroencephalography (EEG) had long been used to study the brain activity underlying selective attention by cognitive psychophysiologists, the ability of the newer techniques to measure precisely localized activity inside the brain generated renewed interest from a wider community of researchers. A growing body of such neuroimaging research has identified a frontoparietal attention network which appears to be responsible for control of attention.[10]
A definition of a psychological construct forms a research approach to its study. In scientific works, attention often coincides with and substitutes for the notion of intentionality, owing to the semantic uncertainty in the linguistic explanations of these notions' definitions. Intentionality has in turn been defined as "the power of minds to be about something: to represent or to stand for things, properties and states of affairs".[11] Although these two psychological constructs (attention and intentionality) appear to be defined by similar terms, they are different notions. To clarify the definition of attention, it is worth considering the origin of this notion and the meaning given to the term when the experimental study of attention was initiated. The experimental approach is thought to have begun with famous experiments with a 4 x 4 matrix of sixteen randomly chosen letters – the experimental paradigm that informed Wundt's theory of attention.[12] Wundt interpreted the experimental outcome by introducing the meaning of attention as "that psychical process, which is operative in the clear perception of the narrow region of the content of consciousness."[13] These experiments showed the physical limits of the attention threshold, which were 3-6 letters when observing the matrix during 1/10 s of exposition.[12] "We shall call the entrance into the large region of consciousness - apprehension, and the elevation into the focus of attention - apperception."[14] Wundt's theory of attention postulated one of the main features of this notion: that attention is an active, voluntary process realized during a certain time.[12] In contrast, neuroscience research shows that intentionality may emerge instantly, even unconsciously; research has reported neuronal correlates of an intentional act that preceded the conscious act (also see shared intentionality).[15][16] Therefore, while intentionality is a mental state ("the power of the mind to be about something", arising even unconsciously), the construct of attention should be understood in the dynamical sense as the ability to elevate the clear perception of the narrow region of the content of consciousness and to keep this state in mind for a time. The attention threshold would be the minimum period of time needed for employing perception to clearly apprehend the scope of intention. From this perspective, a scientific approach to attention is relevant when it considers the difference between these two concepts (first of all, between their statical and dynamical statuses).
The growing body of literature shows empirical evidence that attention is conditioned by the number of elements and the duration of exposition. Decades of research on subitizing have supported Wundt's findings about the limits of the human ability to concentrate awareness on a task.[17][18][19][20][21] Latvian professors Sandra Mihailova and Igor Val Danilov drew an essential conclusion from the Wundtian approach to the study of attention: the scope of attention is related to cognitive development.[22] As the mind grasps more details about an event, it also increases the number of reasonable combinations within that event, enhancing the probability of better understanding its features and particularity.[22] For example, three items in the focal point of consciousness have six possible combinations (3 factorial), and four items have 24 (4 factorial). This number of combinations becomes significantly prominent in the case of a focal point with six items, with 720 possible combinations (6 factorial).[22] Empirical evidence suggests that the scope of attention in young children develops from two items in the focal point at up to six months of age to five or more items at about five years of age.[22] As follows from the most recent studies in relation to teaching activities in school, "attention" should be understood as "the state of concentration of an individual's consciousness on the process of selecting by his own psyche the information he requires and on the process of choosing an algorithm for response actions, which involves the intensification of sensory and intellectual activities".[23]
In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process.[24] In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated on a specific area of the visual scene (i.e., it is focused), and processing is performed in a serial fashion.
The first of these models to appear in the literature is the spotlight model. The term "spotlight" was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe.[25] The focus is an area that extracts information from the visual scene at high resolution, its geometric center being where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., at low resolution). This fringe extends out to a specified area, and the cut-off is called the margin.
The second model is called the zoom-lens model and was first introduced in 1986.[26] This model inherits all properties of the spotlight model (i.e., the focus, the fringe, and the margin), but it has the added property of changing in size. This size-change mechanism was inspired by the zoom lens one might find on a camera, and any change in size can be described by a trade-off in the efficiency of processing.[27] The zoom-lens of attention can be described in terms of an inverse trade-off between the size of focus and the efficiency of processing: because attentional resources are assumed to be fixed, it follows that the larger the focus is, the slower processing will be of that region of the visual scene, since this fixed resource will be distributed over a larger area. It is thought that the focus of attention can subtend a minimum of 1° of visual angle;[25][28] however, the maximum size has not yet been determined.
A significant debate emerged in the last decade of the 20th century in which Treisman's 1993 Feature Integration Theory (FIT) was compared to Duncan and Humphreys' 1989 attentional engagement theory (AET).[29]: 5–7 FIT posits that "objects are retrieved from scenes by means of selective spatial attention that picks out objects' features, forms feature maps, and integrates those features that are found at the same location into forming objects." Treisman's theory is based on a two-stage process to help solve the binding problem of attention. These two stages are the preattentive stage and the focused attention stage.

Through sequencing these steps, parallel and serial search is better exhibited through the formation of conjunctions of objects. Conjunctive searches, according to Treisman, are done through both stages[32] in order to create selective and focused attention on an object, though Duncan and Humphreys would disagree. Duncan and Humphreys' AET understanding of attention maintained that "there is an initial pre-attentive parallel phase of perceptual segmentation and analysis that encompasses all of the visual items present in a scene. At this phase, descriptions of the objects in a visual scene are generated into structural units; the outcome of this parallel phase is a multiple-spatial-scale structured representation. Selective attention intervenes after this stage to select information that will be entered into visual short-term memory."[29]: 5–7 The contrast of the two theories placed a new emphasis on the separation of visual attention tasks alone and those mediated by supplementary cognitive processes. As Rastophopoulos summarizes the debate: "Against Treisman's FIT, which posits spatial attention as a necessary condition for detection of objects, Humphreys argues that visual elements are encoded and bound together in an initial parallel phase without focal attention, and that attention serves to select among the objects that result from this initial grouping."[29]: 8
In the twentieth century, the pioneering research of Lev Vygotsky and Alexander Luria led to the three-part model of neuropsychology defining the working brain as being represented by three co-active processes: Attention, Memory, and Activation. A. R. Luria published his well-known book The Working Brain in 1973 as a concise adjunct volume to his previous 1962 book Higher Cortical Functions in Man. In this volume, Luria summarized his three-part global theory of the working brain as being composed of three constantly co-active processes which he described as the (1) attention system, (2) mnestic (memory) system, and (3) cortical activation system. The two books together are considered by Homskaya's account as "among Luria's major works in neuropsychology, most fully reflecting all the aspects (theoretical, clinical, experimental) of this new discipline."[33] The combined research of Vygotsky and Luria has determined a large part of the contemporary understanding and definition of attention as it is understood at the start of the 21st century.
Multitasking can be defined as the attempt to perform two or more tasks simultaneously; however, research shows that when multitasking, people make more mistakes or perform their tasks more slowly.[34]Attention must be divided among all of the component tasks to perform them. In divided attention, individuals attend or give attention to multiple sources of information at once or perform more than one task at the same time.[35]
Older research involved looking at the limits of people performing simultaneous tasks, like reading stories while listening and writing something else,[36] or listening to two separate messages through different ears (i.e., dichotic listening). Generally, classical research into attention investigated the ability of people to learn new information when there were multiple tasks to be performed, or to probe the limits of our perception (cf. Donald Broadbent). There is also older literature on people's performance on multiple tasks performed simultaneously, such as driving a car while tuning a radio[37] or driving while being on the phone.[38]
The vast majority of current research on human multitasking is based on performance of doing two tasks simultaneously,[34] usually involving driving while performing another task, such as texting, eating, or even speaking to passengers in the vehicle, or with a friend over a cellphone. This research reveals that the human attentional system has limits for what it can process: driving performance is worse while engaged in other tasks; drivers make more mistakes, brake harder and later, get into more accidents, veer into other lanes, and/or are less aware of their surroundings when engaged in the previously discussed tasks.[39][40][41]
There has been little difference found between speaking on a hands-free cell phone or a hand-held cell phone,[42][43]which suggests that it is the strain of attentional system that causes problems, rather than what the driver is doing with his or her hands. While speaking with a passenger is as cognitively demanding as speaking with a friend over the phone,[44]passengers are able to change the conversation based upon the needs of the driver. For example, if traffic intensifies, a passenger may stop talking to allow the driver to navigate the increasingly difficult roadway; a conversation partner over a phone would not be aware of the change in environment.
There have been multiple theories regarding divided attention. One, conceived by cognitive scientist Daniel Kahneman,[45] explains that there is a single pool of attentional resources that can be freely divided among multiple tasks. This model seems oversimplified, however, due to the different modalities (e.g., visual, auditory, verbal) that are perceived.[46] When the two simultaneous tasks use the same modality, such as listening to a radio station and writing a paper, it is much more difficult to concentrate on both because the tasks are likely to interfere with each other. The specific modality model was theorized by cognitive psychologists David Navon and Daniel Gopher in 1979. However, more recent research using well controlled dual-task paradigms points at the importance of tasks.[47]
As an alternative, resource theory has been proposed as a more accurate metaphor for explaining divided attention on complex tasks. Resource theory states that as each complex task is automatized, performing that task requires less of the individual's limited-capacity attentional resources.[46]Other variables play a part in our ability to pay attention to and concentrate on many tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and skills.[46]
Simultaneous attention is a type of attention, classified by attending to multiple events at the same time. Simultaneous attention is demonstrated by children in Indigenous communities, who learn through this type of attention to their surroundings.[48] Simultaneous attention is present in the ways in which children of indigenous backgrounds interact both with their surroundings and with other individuals. Simultaneous attention requires focus on multiple simultaneous activities or occurrences. This differs from multitasking, which is characterized by alternating attention and focus between multiple activities, or halting one activity before switching to the next.
Simultaneous attention involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. Indigenous heritage toddlers and caregivers in San Pedro were observed to frequently coordinate their activities with other members of a group in ways parallel to a model of simultaneous attention, whereas middle-class European-descent families in the U.S. would move back and forth between events.[6][49] Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially wide, keen observers.[50] This points to a strong cultural difference in attention management.
Attention may be differentiated into "overt" versus "covert" orienting.[51]
Overt orienting is the act of selectively attending to an item or location over others by moving the eyes to point in that direction.[52] Overt orienting can be directly observed in the form of eye movements. Although overt eye movements are quite common, a distinction can be made between two types of eye movements: reflexive and controlled. Reflexive movements are commanded by the superior colliculus of the midbrain. These movements are fast and are activated by the sudden appearance of stimuli. In contrast, controlled eye movements are commanded by areas in the frontal lobe. These movements are slow and voluntary.
Covert orienting is the act of mentally shifting one's focus without moving one's eyes.[25][52][53] Simply, it is changes in attention that are not attributable to overt eye movements. Covert orienting has the potential to affect the output of perceptual processes by governing attention to particular items or locations (for example, the activity of a V4 neuron whose receptive field lies on an attended stimulus will be enhanced by covert attention)[54] but does not influence the information that is processed by the senses. Researchers often use "filtering" tasks to study the role of covert attention in selecting information. These tasks often require participants to observe a number of stimuli, but attend to only one. The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location.[55]
There are studies that suggest the mechanisms of overt and covert orienting may not be controlled separately and independently as previously believed. Central mechanisms that may control covert orienting, such as the parietal lobe, also receive input from subcortical centres involved in overt orienting.[52] In support of this, general theories of attention actively assume bottom-up (reflexive) processes and top-down (voluntary) processes converge on a common neural architecture, in that they control both covert and overt attentional systems.[56] For example, if individuals attend to the right hand corner of the field of view, movement of the eyes in that direction may have to be actively suppressed.
Covert attention has been argued to reflect the existence of processes "programming explicit ocular movement".[57] However, this has been questioned on the grounds that N2, "a neural measure of covert attentional allocation—does not always precede eye movements".[58] However, the researchers acknowledge, "it may be impossible to definitively rule out the possibility that some kind of shift of covert attention precedes every shift of overt attention".[58]
Orienting attention is vital and can be controlled through external (exogenous) or internal (endogenous) processes. However, comparing these two processes is challenging because external signals do not operate completely exogenously, but will only summon attention and eye movements if they are important to the subject.[52]
Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce") orienting is frequently described as being under control of a stimulus.[59] Exogenous orienting is considered to be reflexive and automatic and is caused by a sudden change in the periphery. This often results in a reflexive saccade. Since exogenous cues are typically presented in the periphery, they are referred to as peripheral cues. Exogenous orienting can even be observed when individuals are aware that the cue will not relay reliable, accurate information about where a target is going to occur. This means that the mere presence of an exogenous cue will affect the response to other stimuli that are subsequently presented in the cue's previous location.[60]
Several studies have investigated the influence of valid and invalid cues.[52][61][62][63] They concluded that valid peripheral cues benefit performance, for instance when the peripheral cues are brief flashes at the relevant location before the onset of a visual stimulus. Psychologists Michael Posner and Yoav Cohen (1984) noted a reversal of this benefit takes place when the interval between the onset of the cue and the onset of the target is longer than about 300 ms.[64] The phenomenon of valid cues producing longer reaction times than invalid cues is called inhibition of return.
Endogenous (from Greek endo, meaning "within" or "internally") orienting is the intentional allocation of attentional resources to a predetermined location or space. Simply stated, endogenous orienting occurs when attention is oriented according to an observer's goals or desires, allowing the focus of attention to be manipulated by the demands of a task. In order to have an effect, endogenous cues must be processed by the observer and acted upon purposefully. These cues are frequently referred to as central cues, because they are typically presented at the center of a display, where an observer's eyes are likely to be fixated. Central cues, such as an arrow or digit presented at fixation, tell observers to attend to a specific location.[65]
When examining differences between exogenous and endogenous orienting, some researchers suggest that there are four differences between the two kinds of cues:
There exist both overlaps and differences in the areas of the brain that are responsible for endogenous and exogenous orienting.[67] Another approach to this discussion has been covered under the topic heading of "bottom-up" versus "top-down" orientations to attention. Researchers of this school have described two different aspects of how the mind focuses attention on items present in the environment. The first aspect is called bottom-up processing, also known as stimulus-driven attention or exogenous attention. These describe attentional processing which is driven by the properties of the objects themselves. Some processes, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, or non-volitional way. We attend to them whether we want to or not.[68] These aspects of attention are thought to involve parietal and temporal cortices, as well as the brainstem.[69] More recent experimental evidence[70][71][72] supports the idea that the primary visual cortex creates a bottom-up saliency map,[73][4] which is received by the superior colliculus in the midbrain area to guide attention or gaze shifts.
The second aspect is called top-down processing, also known as goal-driven attention, endogenous attention, attentional control or executive attention. This aspect of our attentional orienting is under the control of the person who is attending. It is mediated primarily by the frontal cortex and basal ganglia[69][74] as one of the executive functions.[52][69] Research has shown that it is related to other aspects of the executive functions, such as working memory,[75] and conflict resolution and inhibition.[76]
A "hugely influential"[77] theory regarding selective attention is the perceptual load theory, which states that there are two mechanisms that affect attention: cognitive and perceptual. The perceptual mechanism considers the subject's ability to perceive or ignore stimuli, both task-related and non-task-related. Studies show that if there are many stimuli present (especially if they are task-related), it is much easier to ignore the non-task-related stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as the relevant. The cognitive mechanism refers to the actual processing of the stimuli. Studies regarding this showed that the ability to process stimuli decreased with age, meaning that younger people were able to perceive more stimuli and fully process them, but were likely to process both relevant and irrelevant information, while older people could process fewer stimuli, but usually processed only relevant information.[78]
Some people can process multiple stimuli; for example, trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response produced by "overlearning" the skill of Morse code reception/detection/transcription, so that it becomes an autonomous function requiring no specific attention to perform. This overtraining of the brain comes as the "practice of a skill [surpasses] 100% accuracy," allowing the activity to become automatic while the mind has room to process other actions simultaneously.[79]
Perceptual load theory assumes that attentional resources have a limited capacity and that a demanding task requires all of them.[80] Testing this assumption is complicated, however, by how performance is usually measured: the literature relies largely on accuracy and reaction time (RT) scores, which say little about how attention is distributed over time and space. Because only how accurately and how quickly a task is completed is analyzed, conclusions about the overall cognition involved in processing multiple stimuli through perception remain limited.[81]
Attention is best described as the sustained focus of cognitive resources on information while filtering or ignoring extraneous information. Attention is a very basic function that is often a precursor to all other neurological/cognitive functions. As is frequently the case, clinical models of attention differ from investigation models. One of the most widely used models for the evaluation of attention in patients with very different neurologic pathologies is the model of Sohlberg and Mateer.[82] This hierarchical model is based on the recovery of attention processes in brain-damaged patients after coma. Five different kinds of activities of growing difficulty are described in the model, corresponding to the activities those patients could perform as their recovery process advanced.
This model has been shown to be very useful in evaluating attention across very different pathologies, correlates strongly with daily difficulties, and is especially helpful in designing stimulation programs such as attention process training, a rehabilitation program for neurological patients by the same authors.
Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when an animal is not attending to that stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same.
In a 2007 review, Professor Eric Knudsen[87] describes a more general model which identifies four core processes of attention, with working memory at the center: working memory, top-down sensitivity control, competitive selection, and bottom-up salience filters.
Neurally, at different hierarchical levels spatial maps can enhance or inhibit activity in sensory areas, and induce orienting behaviors like eye movement.
In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.[90][91][54][92]
Another commonly used model for the attention system has been put forth by researchers such as Michael Posner. He divides attention into three functional components: alerting, orienting, and executive attention[69][93] that can also interact and influence each other.[94][95][96]
Children appear to develop patterns of attention related to the cultural practices of their families, communities, and the institutions in which they participate.[100]
In 1955, Jules Henry suggested that there are societal differences in sensitivity to signals from many ongoing sources that call for the awareness of several levels of attention simultaneously. He tied his speculation to ethnographic observations of communities in which children are involved in a complex social community with multiple relationships.[6]
Many Indigenous children in the Americas predominantly learn by observing and pitching in. There are several studies to support that the use of keen attention towards learning is much more common in Indigenous communities of North and Central America than in a middle-class European-American setting. This is a direct result of the Learning by Observing and Pitching In model.
Keen attention is both a requirement and a result of learning by observing and pitching in. Incorporating the children in the community gives them the opportunity to keenly observe and contribute to activities that were not directed towards them. It can be seen in different Indigenous communities and cultures, such as the Mayans of San Pedro, that children can simultaneously attend to multiple events.[6] Most Maya children have learned to pay attention to several events at once in order to make useful observations.[101]
One example is simultaneous attention which involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. San Pedro toddlers and caregivers frequently coordinated their activities with other members of a group in multiway engagements rather than in a dyadic fashion.[6][49]Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially keen observers.[50]
This learning by observing and pitching-in model requires active levels of attention management. The child is present while caretakers engage in daily activities and responsibilities such as weaving, farming, and other skills necessary for survival. Being present allows the child to focus their attention on the actions being performed by their parents, elders, and/or older siblings. In order to learn in this way, keen attention and focus are required. Eventually the child is expected to be able to perform these skills themselves.
In one study, it was found that when looking at a picture, Americans focus more on the center figure than Japanese do, especially after 1 second has passed. Japanese individuals spent larger amounts of time focusing on parts in the background.[102]Miyamoto et al. compared pictures of landscapes in Japan and the US, noting that Japanese scenes contained more boundaries and edges than the American ones.[103]
In the domain of computer vision, efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism,[104] and its semantic significance in the classification of video contents.[105][106] Both spatial attention and temporal attention have been incorporated in such classification efforts.
Generally speaking, there are two kinds of models that mimic the bottom-up salience mechanism in static images. The first is based on spatial contrast analysis. For example, a center–surround mechanism has been used to define salience across scales, inspired by the putative neural mechanism.[107] It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts and that these are actually task-independent. This model has established itself as the exemplar for salience detection and is consistently used for comparison in the literature.[104] The second kind of model is based on frequency-domain analysis. This approach was first proposed by Hou et al.,[108] and is called SR; the PQFT method was introduced later. Both SR and PQFT use only the phase information.[104] In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information.[109] The Neural Abstraction Pyramid[110] is a hierarchical recurrent convolutional model, which incorporates bottom-up and top-down flow of information to iteratively interpret images.
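As a concrete illustration of the frequency-domain family, a minimal NumPy sketch of the spectral residual (SR) idea might look like the following. The kernel sizes and smoothing choices here are illustrative assumptions, not the exact parameters of the original paper; the core idea is that the "residual" of the log-amplitude spectrum, recombined with the original phase, highlights salient regions.

```python
import numpy as np

def box_filter(x, k=3):
    # simple mean filter via edge padding and shifted sums (assumes odd k)
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.zeros_like(x)
    for di in range(k):
        for dj in range(k):
            out += xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / (k * k)

def spectral_residual_saliency(img):
    """Spectral-residual saliency sketch: subtract the locally averaged
    log-amplitude spectrum from the raw one, then invert with the
    original phase to obtain a saliency map."""
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box_filter(log_amp)            # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return box_filter(sal, 5)                           # smooth the map
```

The residual keeps the "unexpected" part of the spectrum; reconstruction with the unmodified phase then concentrates energy at spatially novel locations.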
Hemispatial neglect, also called unilateral neglect, often occurs when people have damage to the right hemisphere of their brain.[111] This damage often leads to a tendency to ignore the left side of one's body or even the left side of an object that can be seen. Damage to the left side of the brain (the left hemisphere) rarely yields significant neglect of the right side of the body or of objects in the person's local environment.[112]
The effects of spatial neglect, however, may vary and differ depending on what area of the brain was damaged. Damage to different neural substrates can result in different types of neglect. Attention disorders (lateralized and nonlaterized) may also contribute to the symptoms and effects.[112]Much research has asserted that damage to gray matter within the brain results in spatial neglect.[113]
New technology has yielded more information, showing that there is a large, distributed network of frontal, parietal, temporal, and subcortical brain areas that have been tied to neglect.[114] This network can be related to other research as well; the dorsal attention network is tied to spatial orienting.[115] Damage to this network may result in patients neglecting their left side when distracted by their right side or by an object on their right side.[111]
Social attention is one special form of attention that involves the allocation of limited processing resources in a social context. Previous studies on social attention often examined how attention is directed toward socially relevant stimuli such as faces and the gaze directions of other individuals.[116] In contrast to attending-to-others, a different line of research has shown that self-related information such as one's own face and name automatically captures attention and is preferentially processed compared to other-related information.[117] These contrasting effects between attending-to-others and attending-to-self prompted a synthetic view in a recent Opinion article[118] proposing that social attention operates at two polarizing states: at one extreme, the individual tends to attend to the self and prioritize self-related information over others', and, at the other extreme, attention is allocated to other individuals to infer their intentions and desires. Attending-to-self and attending-to-others mark the two ends of an otherwise continuous spectrum of social attention. For a given behavioral context, the mechanisms underlying these two polarities might interact and compete with each other in order to determine a saliency map of social attention that guides our behaviors.[118] An imbalanced competition between these two behavioral and cognitive processes can cause cognitive disorders and neurological symptoms such as autism spectrum disorders and Williams syndrome.
According to Daniel Goleman's book, Focus: The Hidden Driver of Excellence, there are two types of distracting factors affecting focus – sensory and emotional.
A sensory distracting factor would be, for example, while a person is reading this article, they are neglecting the white field surrounding the text.
An emotional distracting factor would be when someone is focused on answering an email and somebody shouts their name. It would be almost impossible to ignore the voice speaking it; attention is immediately directed toward the source. Positive emotions have also been found to affect attention. Induction of happiness has led to increased response times and an increase in inaccurate responses in the face of irrelevant stimuli. Two possible theories have been proposed as to why emotions might make one more susceptible to distracting stimuli. One is that emotions take up too much of one's cognitive resources and make it harder to control one's focus of attention. The other is that emotions make it harder to filter out distractions, particularly positive emotions, due to a feeling of security.[119]
Another distracting factor to attention processes is insufficient sleep. Sleep deprivation is found to impair cognition, specifically performance in divided attention. Divided attention is possibly linked with the circadian processes.[120]
Inattentional blindness was first introduced in 1998 by Arien Mack and Irvin Rock. Their studies show that when people are focused on specific stimuli, they often miss other stimuli that are clearly present. Though actual blindness is not occurring, the "blindness" that happens is due to the perceptual load of what is being attended to.[121] Building on the experiment performed by Mack and Rock, Ula Cartwright-Finch and Nilli Lavie tested participants with a perceptual task. They presented subjects with a cross, one arm longer than the other, for five trials. On the sixth trial, a white square was added to the top left of the screen. Out of 10 participants, only 2 (20%) actually saw the square. This suggests that the more closely attention was focused on the length of the crossed arms, the more likely someone was to miss altogether an object that was in plain sight.[122]
Change blindness was first tested by Rensink and coworkers in 1997. Their studies show that people have difficulty detecting changes from scene to scene due to intense focus on one thing, or lack of attention overall. Rensink tested this by presenting a picture, then a blank field, and then the same picture with an item missing. The results showed that the pictures had to be alternated back and forth many times for participants to notice the difference. This idea is well illustrated by films that have continuity errors: many people do not notice the differences even when, in reality, the changes are significant.[123]
Psychologist Daniel E. Berlyne credits the first extended treatment of attention to philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche held that we have access to ideas, or mental representations of the external world, but not direct access to the world itself."[8] Thus in order to keep these ideas organized, attention is necessary.[124] Otherwise we will confuse these ideas. Malebranche writes in "The Search After Truth", "because it often happens that the understanding has only confused and imperfect perceptions of things, it is truly a cause of our errors.... It is therefore necessary to look for means to keep our perceptions from being confused and imperfect. And, because, as everyone knows, there is nothing that makes them clearer and more distinct than attentiveness, we must try to find the means to become more attentive than we are".[125] According to Malebranche, attention is crucial to understanding and keeping thoughts organized.
Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this philosophical approach to attention. Apperception refers to "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole."[126] Apperception is required for a perceived event to become a conscious event. Leibniz emphasized a reflexive involuntary view of attention known as exogenous orienting. However, there is also endogenous orienting, which is voluntary and directed attention. Philosopher Johann Friedrich Herbart agreed with Leibniz's view of apperception; however, he expounded on it by saying that new experiences had to be tied to ones already existing in the mind. Herbart was also the first person to stress the importance of applying mathematical modeling to the study of psychology.[8]
Throughout the philosophical era, various thinkers made significant contributions to the field of attention studies, beginning with research on the extent of attention and how attention is directed. In the beginning of the 19th century, it was thought that people were not able to attend to more than one stimulus at a time. However, with research contributions by Sir William Hamilton, 9th Baronet, this view was changed. Hamilton proposed a view of attention that likened its capacity to holding marbles. You can only hold a certain number of marbles at a time before it starts to spill over. His view states that we can attend to more than one stimulus at once. William Stanley Jevons later expanded this view and stated that we can attend to up to four items at a time.[127]
This period of attention research took the focus from conceptual findings to experimental testing. It also involved psychophysical methods that allowed measurement of the relation between physical stimulus properties and the psychological perceptions of them. This period covers the development of attentional research from the founding of psychology to 1909.
Wilhelm Wundt introduced the study of attention to the field of psychology. Wundt measured mental processing speed by likening it to differences in stargazing measurements. Astronomers in this time would measure the time it took for stars to travel. Among these measurements, when astronomers recorded the times, there were personal differences in calculation. These different readings resulted in different reports from each astronomer. To correct for this, a personal equation was developed. Wundt applied this to mental processing speed. Wundt realized that the time it takes to see the stimulus of the star and write down the time was being called an "observation error" but actually was the time it takes to switch voluntarily one's attention from one stimulus to another. Wundt called his school of psychology voluntarism. It was his belief that psychological processes can only be understood in terms of goals and consequences.
Franciscus Donders used mental chronometry to study attention, and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his students conducted the first detailed investigations of the speed of mental processes. Donders measured the time required to identify a stimulus and to select a motor response. This was the time difference between stimulus discrimination and response initiation. Donders also formalized the subtractive method, which states that the time for a particular process can be estimated by adding that process to a task and taking the difference in reaction time between the two tasks. He also differentiated between three types of reactions: simple reaction, choice reaction, and go/no-go reaction.
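The subtractive logic can be illustrated with made-up numbers (the reaction times below are illustrative, not Donders' actual data): subtracting the reaction time of a simpler task from that of a task containing one extra mental stage estimates the duration of that stage.

```python
# Donders' subtractive method, with illustrative reaction times (seconds)
simple_rt = 0.22   # simple reaction: detect a stimulus, single fixed response
gonogo_rt = 0.28   # go/no-go: adds stimulus discrimination
choice_rt = 0.35   # choice reaction: adds response selection as well

# time attributed to stimulus discrimination (go/no-go minus simple):
discrimination = gonogo_rt - simple_rt
# time attributed to response selection (choice minus go/no-go):
selection = choice_rt - gonogo_rt
```

Under these made-up numbers, discrimination would take about 0.06 s and response selection about 0.07 s.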
Hermann von Helmholtzalso contributed to the field of attention relating to the extent of attention. Von Helmholtz stated that it is possible to focus on one stimulus and still perceive or ignore others. An example of this is being able to focus on the letter u in the word house and still perceiving the letters h, o, s, and e.
One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction." This disagreement could only be resolved through experimentation.
In 1890, William James, in his textbook The Principles of Psychology, remarked:
Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.[128]
James differentiated between sensorial attention and intellectual attention. Sensorial attention is when attention is directed to objects of sense, stimuli that are physically present. Intellectual attention is attention directed to ideal or represented objects; stimuli that are not physically present. James also distinguished between immediate and derived attention: attention to the present versus to something not physically present. According to James, attention has five major effects. Attention works to make us perceive, conceive, distinguish, remember, and shorten reaction time.
During this period, research in attention waned and interest in behaviorism flourished, leading some, like Ulric Neisser, to believe that in this period, "There was no research on attention". However, Jersild published very important work on "Mental Set and Shift" in 1927. He stated, "The fact of mental set is primary in all conscious activity. The same stimulus may evoke any one of a large number of responses depending upon the contextual setting in which it is placed".[129] This research found that the time to complete a list was longer for mixed lists than for pure lists. For example, if a list was names of animals versus a list of the same size with names of animals, books, makes and models of cars, and types of fruits, it takes longer to process the second list. This is task switching.
In 1931, Telford discovered the psychological refractory period. The stimulation of neurons is followed by a refractory phase during which neurons are less sensitive to stimulation. In 1935 John Ridley Stroop developed the Stroop Task, which elicited the Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a major impact on performance. In this task, subjects were to look at a list of colors. This list of colors had each color typed in a color different from the actual text. For example, the word Blue would be typed in Orange, Pink in Black, and so on.
Example: Blue Purple Red Green Purple Green
Subjects were then instructed to say the name of the ink color and ignore the text. It took 110 seconds to complete a list of this type compared to 63 seconds to name the colors when presented in the form of solid squares.[8]The naming time nearly doubled in the presence of conflicting color words, an effect known as the Stroop Effect.
In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution".[130] The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study.
Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953. At a cocktail party, how do people select the conversation that they are listening to and ignore the rest? This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments which became known as dichotic listening and were extended by Donald Broadbent and others.[131]: 112 In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream.
Broadbent's Filter Model of Attention states that information is held in a pre-attentive temporary store, and only sensory events that have some physical feature in common are selected to pass into the limited-capacity processing system. This implies that the meaning of unattended messages is not identified. Also, a significant amount of time is required to shift the filter from one channel to another. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch–Norman model in 1968. In this model, no signal is filtered out, but all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.[131]: 115–116
This debate became known as the early-selection vs. late-selection debate. In early-selection models (first proposed by Donald Broadbent), attention shuts down (in Broadbent's model) or attenuates (in Treisman's refinement) processing in the unattended ear before the mind can analyze its semantic content. In late-selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness.[132] Lavie's perceptual load theory, however, "provided elegant solution to" what had once been a "heated debate".[133]
https://en.wikipedia.org/wiki/Attention
There are many types of artificial neural networks (ANN).
Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.[1][2][3][4] Most artificial neural networks bear only some resemblance to their more complex biological counterparts, but are very effective at their intended tasks (e.g. classification or segmentation).
Some artificial neural networks are adaptive systems and are used for example to model populations and environments, which constantly change.
Neural networks can be hardware-based (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.
In feedforward neural networks the information moves only forward, from the input layer through any hidden layers to the output layer, without cycles or loops. Feedforward networks can be constructed with various types of units, such as binary McCulloch–Pitts neurons, the simplest of which is the perceptron. Continuous neurons, frequently with sigmoidal activation, are used in the context of backpropagation.
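The simplest such unit, the perceptron, can be sketched in a few lines of NumPy: a binary threshold neuron trained with the classic perceptron learning rule. The toy data (logical AND) and learning rate here are illustrative choices.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Classic perceptron rule: on each mistake, nudge the weights
    toward (or away from) the misclassified input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi      # no-op when pred == yi
            b += lr * (yi - pred)
    return w, b

# logical AND is linearly separable, so the rule converges
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

Because the perceptron is a single linear threshold, it can only learn linearly separable functions; XOR, for instance, requires a hidden layer.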
The Group Method of Data Handling (GMDH)[5] features fully automatic structural and parametric model optimization. The node activation functions are Kolmogorov–Gabor polynomials that permit additions and multiplications. It uses a deep multilayer perceptron with eight layers.[6] It is a supervised learning network that grows layer by layer, where each layer is trained by regression analysis. Useless items are detected using a validation set, and pruned through regularization. The size and depth of the resulting network depends on the task.[7]
An autoencoder, autoassociator or Diabolo network[8]: 19 is similar to the multilayer perceptron (MLP) – with an input layer, an output layer and one or more hidden layers connecting them. However, the output layer has the same number of units as the input layer. Its purpose is to reconstruct its own inputs (instead of emitting a target value). Therefore, autoencoders are unsupervised learning models. An autoencoder is used for unsupervised learning of efficient codings,[9][10] typically for the purpose of dimensionality reduction and for learning generative models of data.[11][12]
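A minimal sketch of the idea: a linear autoencoder with a narrow hidden layer, trained by gradient descent to reconstruct its own input. The dimensions, learning rate, and random data are illustrative assumptions; real autoencoders typically use nonlinear activations and a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # toy data: 100 samples, 8 features

d, k = 8, 3                               # bottleneck of 3 hidden units
W1 = rng.normal(scale=0.1, size=(d, k))   # encoder weights
W2 = rng.normal(scale=0.1, size=(k, d))   # decoder weights

lr = 0.01
losses = []
for _ in range(200):
    H = X @ W1                            # code (compressed representation)
    R = H @ W2                            # reconstruction of the input
    err = R - X
    losses.append((err ** 2).mean())      # mean squared reconstruction error
    # gradient descent on the reconstruction error
    gW2 = H.T @ err / len(X)
    gW1 = X.T @ (err @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
```

After training, `X @ W1` gives a 3-dimensional code for each 8-dimensional input, which is exactly the dimensionality-reduction use mentioned above.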
A probabilistic neural network (PNN) is a four-layer feedforward neural network. The layers are input, pattern, summation, and output. In the PNN algorithm, the parent probability distribution function (PDF) of each class is approximated by a Parzen window and a non-parametric function. Then, using the PDF of each class, the class probability of a new input is estimated, and Bayes' rule is employed to allocate it to the class with the highest posterior probability.[13] It was derived from the Bayesian network[14] and a statistical algorithm called kernel Fisher discriminant analysis.[15] It is used for classification and pattern recognition.
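The decision rule can be sketched directly: each training sample acts as a pattern unit, per-class Gaussian Parzen kernels are averaged by the summation units, and the output unit picks the class with the highest score. Equal class priors, the Gaussian bandwidth, and the toy data below are illustrative assumptions.

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    """PNN sketch: per-class Parzen-window density estimate with
    Gaussian kernels, then Bayes' rule under equal priors."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]                 # pattern units of class c
        d2 = ((Xc - x) ** 2).sum(axis=1)           # squared distances to x
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()  # summation unit
    return max(scores, key=scores.get)             # output unit: argmax class

# toy training set: two well-separated clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
y_train = np.array([0, 0, 1, 1])
```

Note that, unlike an MLP, a PNN needs no iterative training: the training data themselves parameterize the density estimate, at the cost of evaluation time growing with the training-set size.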
A time delay neural network (TDNN) is a feedforward architecture for sequential data that recognizes features independently of sequence position. In order to achieve time-shift invariance, delays are added to the input so that multiple data points (points in time) are analyzed together.
It usually forms part of a larger pattern recognition system. It has been implemented using a perceptron network whose connection weights were trained with backpropagation (supervised learning).[16]
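The delay-line input layout is the essential ingredient; a minimal sketch (the window length of 3 is an arbitrary illustrative choice):

```python
import numpy as np

def time_delay_input(x, delays=3):
    """Stack each sample with its predecessors so that a feedforward
    net sees a short time window at every step (the TDNN input layout)."""
    windows = [x[i:i + delays] for i in range(len(x) - delays + 1)]
    return np.array(windows)

x = np.arange(6)             # a toy 1-D signal: 0, 1, 2, 3, 4, 5
W = time_delay_input(x)      # each row is one delayed window of the signal
```

Each row of `W` is then fed to an ordinary feedforward network, and sharing the same weights across all windows is what yields time-shift invariance.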
A convolutional neural network (CNN, or ConvNet, or shift-invariant or space-invariant network) is a class of deep network, composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top.[17][18] It uses tied weights and pooling layers, in particular max-pooling.[19] It is often structured via Fukushima's convolutional architecture.[20] CNNs are variations of multilayer perceptrons that use minimal preprocessing.[21] This architecture allows CNNs to take advantage of the 2D structure of input data.
Its unit connectivity pattern is inspired by the organization of the visual cortex. Units respond to stimuli in a restricted region of space known as the receptive field. Receptive fields partially overlap, over-covering the entire visual field. Unit response can be approximated mathematically by a convolution operation.[22]
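The two basic operations, convolution (in practice, cross-correlation) and max-pooling, can be sketched in plain NumPy. This is an illustrative, unoptimized sketch, not a usable CNN layer:

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    H = img.shape[0] - kh + 1
    W = img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max-pooling: keep the largest response
    in each block, discarding exact position within the block."""
    H, W = x.shape[0] // s, x.shape[1] // s
    return x[:H * s, :W * s].reshape(H, s, W, s).max(axis=(1, 3))
```

The tied weights mentioned above correspond to reusing the same `kernel` at every spatial position, which is what makes the response shift-equivariant; pooling then adds a degree of shift invariance.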
CNNs are suitable for processing visual and other two-dimensional data.[23][24]They have shown superior results in both image and speech applications. They can be trained with standard backpropagation. CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate.[25]
Capsule Neural Networks (CapsNet) add structures called capsules to a CNN and reuse output from several capsules to form more stable (with respect to various perturbations) representations.[26]
Examples of applications in computer vision include DeepDream[27] and robot navigation.[28] They have wide applications in image and video recognition, recommender systems[29] and natural language processing.[30]
A deep stacking network (DSN)[31] (deep convex network) is based on a hierarchy of blocks of simplified neural network modules. It was introduced in 2011 by Deng and Yu.[32] It formulates the learning as a convex optimization problem with a closed-form solution, emphasizing the mechanism's similarity to stacked generalization.[33] Each DSN block is a simple module that is easy to train by itself in a supervised fashion without backpropagation for the entire blocks.[8]
Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer. The hidden layer h has logistic sigmoidal units, and the output layer has linear units. Connections between these layers are represented by weight matrix U; input-to-hidden-layer connections have weight matrix W. Target vectors t form the columns of matrix T, and the input data vectors x form the columns of matrix X. The matrix of hidden units is H = σ(WᵀX). Modules are trained in order, so lower-layer weights W are known at each stage. The function performs the element-wise logistic sigmoid operation. Each block estimates the same final label class y, and its estimate is concatenated with the original input X to form the expanded input for the next block. Thus, the input to the first block contains the original data only, while downstream blocks' input adds the output of preceding blocks. Then learning the upper-layer weight matrix U given other weights in the network can be formulated as a convex optimization problem:
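The objective appears to have been dropped during extraction; consistent with the surrounding definitions (linear output units, hidden activations H, targets T), it can be reconstructed as the least-squares fit of the upper-layer weights:

```latex
\min_{U} \; f = \left\| U^{T} H - T \right\|_{F}^{2}
```

Being an ordinary linear least-squares problem in U, its minimizer is presumably the familiar normal-equations solution U = (HHᵀ)⁻¹HTᵀ.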
which has a closed-form solution.[31]
Unlike other deep architectures, such as DBNs, the goal is not to discover the transformed feature representation. The structure of the hierarchy of this kind of architecture makes parallel learning straightforward, as a batch-mode optimization problem. In purely discriminative tasks, DSNs outperform conventional DBNs.
This architecture is a DSN extension. It offers two important improvements: it uses higher-order information from covariance statistics, and it transforms the non-convex problem of a lower layer to a convex sub-problem of an upper layer.[34] TDSNs use covariance statistics in a bilinear mapping from each of two distinct sets of hidden units in the same layer to predictions, via a third-order tensor.
While parallelization and scalability are not considered seriously in conventional DNNs,[35][36][37] all learning for DSNs and TDSNs is done in batch mode, to allow parallelization.[32][31] Parallelization allows scaling the design to larger (deeper) architectures and data sets.
The basic architecture is suitable for diverse tasks such asclassificationandregression.
Such a neural network is designed for the numerical solution of mathematical equations, such as differential, integral, delay and fractional equations. As input parameters, a PINN[38] accepts variables (spatial, temporal and others) and transmits them through the network block. At the output, it produces an approximate solution and substitutes it into the mathematical model, taking the initial and boundary conditions into account. If the solution does not satisfy the required accuracy, backpropagation is used to refine the solution.
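As a hedged illustration of the idea (not any particular published PINN implementation), the sketch below fits a tiny network to the ODE u′ = −u, u(0) = 1, by minimizing the squared residual at collocation points. The ansatz (which builds the initial condition in by construction), the network size, and the finite-difference gradient are all arbitrary choices made for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 25)                 # collocation points

# Tiny "network": u_hat(t) = 1 + t * (a . tanh(w t + b)), which
# satisfies the initial condition u(0) = 1 by construction.
def u_hat(p, t):
    w, b, a = p[:4], p[4:8], p[8:]
    return 1.0 + t * (np.tanh(np.outer(t, w) + b) @ a)

def residual_loss(p):
    # Mean squared ODE residual for u' = -u, with u' approximated
    # by finite differences over the collocation grid.
    u = u_hat(p, t)
    du = np.gradient(u, t)
    return np.mean((du + u) ** 2)

p = 0.1 * rng.standard_normal(12)
loss0 = residual_loss(p)
for _ in range(3000):                          # crude gradient descent with
    g = np.zeros_like(p)                       # finite-difference gradients
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = 1e-5
        g[i] = (residual_loss(p + e) - residual_loss(p - e)) / 2e-5
    p -= 0.1 * g

u = u_hat(p, t)                                # approximate solution on the grid
```

A real PINN would use automatic differentiation both for u′ and for the parameter gradients, and would add boundary/initial-condition terms to the loss when the ansatz does not enforce them exactly.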
Besides PINNs, there exist the deep operator network (DeepONet)[39] and the Fourier neural operator (FNO).[40]
Regulatory feedback networks account for feedback found throughout brain recognition processing areas. Instead of recognition-inference being feedforward (inputs-to-output) as in neural networks, regulatory feedback assumes that inference iteratively compares inputs to outputs and that neurons inhibit their own inputs, collectively evaluating how important and unique each input is for the next iteration. This ultimately finds the neuron activations that minimize mutual input overlap, estimating distributions during recognition and avoiding the need for complex neural network training and rehearsal.[41]
Regulatory feedback processing suggests an important real-time recognition processing role for the ubiquitous feedback found between presynaptic and postsynaptic neurons in the brain, which is meticulously maintained by homeostatic plasticity: it is kept in balance through multiple, often redundant, mechanisms. RF also inherently shows neuroscience phenomena such as excitation-inhibition balance, network-wide bursting followed by quieting, and the human cognitive search phenomena of difficulty with similarity and pop-out when multiple inputs are present, without additional parameters.
A regulatory feedback network makes inferences using negative feedback.[42] The feedback is used to find the optimal activation of units. It is most similar to a non-parametric method but is different from K-nearest neighbor in that it mathematically emulates feedforward networks.
Radial basis functions are functions that have a distance criterion with respect to a center. Radial basis functions have been applied as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers: in the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is a linear combination of hidden layer values representing the mean predicted output. The interpretation of this output layer value is the same as that of a regression model in statistics. In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability. Performance in both cases is often improved by shrinkage techniques, known as ridge regression in classical statistics. This corresponds to a prior belief in small parameter values (and therefore smooth output functions) in a Bayesian framework.
RBF networks have the advantage of avoiding the local minima that afflict multi-layer perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single, easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iteratively re-weighted least squares.
RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the task. A common solution is to associate each data point with its own centre, although this can expand the linear system to be solved in the final layer and requires shrinkage techniques to avoidoverfitting.
Associating each input datum with an RBF leads naturally to kernel methods such as support vector machines (SVM) and Gaussian processes (the RBF is the kernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian processes, and unlike SVMs, RBF networks are typically trained in a maximum likelihood framework by maximizing the probability (minimizing the error). SVMs avoid overfitting by instead maximizing a margin. SVMs outperform RBF networks in most classification applications. In regression applications they can be competitive when the dimensionality of the input space is relatively small.
RBF neural networks are conceptually similar toK-nearest neighbor(k-NN) models. The basic idea is that similar inputs produce similar outputs.
Assume that each case in a training set has two predictor variables, x and y, and the target variable has two categories, positive and negative. Given a new case with predictor values x=6, y=5.1, how is the target variable computed?
The nearest neighbor classification performed for this example depends on how many neighboring points are considered. If 1-NN is used and the closest point is negative, then the new point should be classified as negative. Alternatively, if 9-NN classification is used and the 9 closest points are considered, then the surrounding 8 positive points may outweigh the single closest (negative) point.
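Since the article does not give the actual training points, the toy data below are invented to reproduce the situation described: one negative point closest to the query, ringed by eight positive points, so that 1-NN and 9-NN disagree:

```python
import math

# Hypothetical training set: (x, y, label). The single negative point
# sits closest to the query; eight positives surround it at radius 0.4.
train = [(6.1, 5.0, "neg")] + [
    (6 + 0.4 * math.cos(k), 5.1 + 0.4 * math.sin(k), "pos") for k in range(8)
]
query = (6.0, 5.1)

def knn(k):
    # Rank training points by Euclidean distance and majority-vote.
    ranked = sorted(train, key=lambda p: math.dist(query, p[:2]))
    votes = [label for _, _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

label_1nn = knn(1)   # decided by the single closest (negative) point
label_9nn = knn(9)   # the 8 positive neighbours outvote it
```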
An RBF network positions neurons in the space described by the predictor variables (x,y in this example). This space has as many dimensions as predictor variables. The Euclidean distance is computed from the new point to the center of each neuron, and a radial basis function (RBF, also called a kernel function) is applied to the distance to compute the weight (influence) for each neuron. The radial basis function is so named because the radius distance is the argument to the function.
The value for the new point is found by summing the output values of the RBF functions multiplied by weights computed for each neuron.
The radial basis function for a neuron has a center and a radius (also called a spread). The radius may be different for each neuron, and, in RBF networks generated by DTREG, the radius may be different in each dimension.
With larger spread, neurons at a distance from a point have a greater influence.
RBF networks have three layers: an input layer, a hidden layer of RBF neurons, and a summation (output) layer.
The training process determines the center and radius (spread) of each RBF neuron, together with the weights applied to the RBF outputs as they pass to the summation layer.
Various methods have been used to train RBF networks. One approach first usesK-means clusteringto find cluster centers which are then used as the centers for the RBF functions. However, K-means clustering is computationally intensive and it often does not generate the optimal number of centers. Another approach is to use a random subset of the training points as the centers.
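A minimal sketch of the second approach (a random subset of training points as centers, with a ridge-regression output layer), using invented toy data and hyper-parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem.
X = np.linspace(-3, 3, 80)[:, None]
y = np.sin(X[:, 0])

# Centers: a random subset of the training points.
centers = X[rng.choice(len(X), size=10, replace=False)]
spread = 1.0

def design(Z):
    # Gaussian RBF activations, one column per center.
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

# Output weights by ridge regression (closed form via normal equations).
Phi = design(X)
lam = 1e-6
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

mse = np.mean((Phi @ w - y) ** 2)
```

Only the linear output weights `w` are fitted; the centers and spreads are fixed beforehand, which is what makes the training step a single matrix solve.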
DTREG uses a training algorithm that uses an evolutionary approach to determine the optimal center points and spreads for each neuron. It determines when to stop adding neurons to the network by monitoring the estimated leave-one-out (LOO) error and terminating when the LOO error begins to increase because of overfitting.
The computation of the optimal weights between the neurons in the hidden layer and the summation layer is done using ridge regression. An iterative procedure computes the optimal regularization Lambda parameter that minimizes the generalized cross-validation (GCV) error.
A GRNN is an associative memory neural network that is similar to theprobabilistic neural networkbut it is used for regression and approximation rather than classification.
A deep belief network (DBN) is a probabilistic,generative modelmade up of multiple hidden layers. It can be considered acompositionof simple learning modules.[43]
A DBN can be used to generatively pre-train a deep neural network (DNN) by using the learned DBN weights as the initial DNN weights. Various discriminative algorithms can then tune these weights. This is particularly helpful when training data are limited, because poorly initialized weights can significantly hinder learning. These pre-trained weights end up in a region of the weight space that is closer to the optimal weights than random choices. This allows for both improved modeling and faster ultimate convergence.[44]
Recurrent neural networks(RNN) propagate data forward, but also backwards, from later processing stages to earlier stages. RNN can be used as general sequence processors.
This architecture was developed in the 1980s. Its network creates a directed connection between every pair of units. Each has a time-varying, real-valued (more than just zero or one) activation (output). Each connection has a modifiable real-valued weight. Some of the nodes are called labeled nodes, some output nodes, the rest hidden nodes.
Forsupervised learningin discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. At each time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections. The system can explicitly activate (independent of incoming signals) some output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit. For each sequence, its error is the sum of the deviations of all activations computed by the network from the corresponding target signals. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences.
To minimize total error, gradient descent can be used to change each weight in proportion to its derivative with respect to the error, provided the non-linear activation functions are differentiable. The standard method is called "backpropagation through time" or BPTT, a generalization of back-propagation for feedforward networks.[45][46] A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL.[47][48] Unlike BPTT this algorithm is local in time but not local in space.[49][50] An online hybrid between BPTT and RTRL with intermediate complexity exists,[51][52] with variants for continuous time.[53] A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[54][55] The long short-term memory architecture overcomes these problems.[56]
Inreinforcement learningsettings, no teacher provides target signals. Instead afitness functionorreward functionorutility functionis occasionally used to evaluate performance, which influences its input stream through output units connected to actuators that affect the environment. Variants ofevolutionary computationare often used to optimize the weight matrix.
The Hopfield network (like similar attractor-based networks) is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns. Instead it requires stationary inputs. It is an RNN in which all connections are symmetric, which guarantees that it converges. If the connections are trained using Hebbian learning, the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.
TheBoltzmann machinecan be thought of as a noisy Hopfield network. It is one of the first neural networks to demonstrate learning oflatent variables(hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm speeds up training for Boltzmann machines andProducts of Experts.
The self-organizing map (SOM) usesunsupervised learning. A set of neurons learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and SOM attempts to preserve these.
Learning vector quantization (LVQ) can be interpreted as a neural network architecture. Prototypical representatives of the classes, together with an appropriate distance measure, parameterize a distance-based classification scheme.
Simple recurrent networks have three layers, with the addition of a set of "context units" in the input layer. These units connect from the hidden layer or the output layer with a fixed weight of one.[57]At each time step, the input is propagated in a standard feedforward fashion, and then a backpropagation-like learning rule is applied (not performinggradient descent). The fixed back connections leave a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied).
Reservoir computing is a computation framework that may be viewed as an extension of neural networks.[58] Typically an input signal is fed into a fixed (random) dynamical system called a reservoir, whose dynamics map the input to a higher dimension. A readout mechanism is trained to map the reservoir to the desired output. Training is performed only at the readout stage. Liquid-state machines[59] are a type of reservoir computing.[60]
The echo state network (ESN) employs a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that are trained. ESN are good at reproducing certaintime series.[61]
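A minimal echo-state sketch (the reservoir size, sparsity, spectral-radius scaling and ridge constant are all illustrative choices): a fixed random reservoir is driven by a sine wave, and only the linear readout is trained, by ridge regression, to predict the next value of the series:

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, T = 100, 600

# Fixed, sparse random reservoir, rescaled to spectral radius 0.9.
W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

u = np.sin(0.2 * np.arange(T))        # input series; task: predict next value
x = np.zeros(n_res)
states = []
for t in range(T - 1):
    x = np.tanh(W @ x + W_in * u[t])  # reservoir update (never trained)
    states.append(x.copy())

S = np.array(states[100:])            # drop the initial transient (washout)
Y = u[101:T]                          # one-step-ahead targets

# Only the linear readout is trained, here by ridge regression.
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ Y)
mse = np.mean((S @ W_out - Y) ** 2)
```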
The long short-term memory (LSTM)[56] avoids the vanishing gradient problem. It works even with long delays between inputs and can handle signals that mix low- and high-frequency components. LSTM RNNs outperformed other RNNs and other sequence learning methods such as HMMs in applications such as language learning[62] and connected handwriting recognition.[63]
Bi-directional RNN, or BRNN, use a finite sequence to predict or label each element of a sequence based on both the past and future context of the element.[64]This is done by adding the outputs of two RNNs: one processing the sequence from left to right, the other one from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique proved to be especially useful when combined with LSTM.[65]
Hierarchical RNN connects elements in various ways to decompose hierarchical behavior into useful subprograms.[66][67]
Distinct from conventional neural networks, stochastic artificial neural networks are used as approximations to random functions.
An RNN (often an LSTM) in which a series is decomposed into a number of scales, where every scale informs the primary length between two consecutive points. A first-order scale consists of a normal RNN, a second-order scale consists of all points separated by two indices, and so on. The Nth-order RNN connects the first and last node. The outputs from all the various scales are treated as a committee of machines and the associated scores are used genetically for the next iteration.
Biological studies have shown that the human brain operates as a collection of small networks. This realization gave birth to the concept ofmodular neural networks, in which several small networks cooperate or compete to solve problems.
A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result than individual networks. Because neural networks suffer from local minima, starting with the same architecture and training but using randomly different initial weights often gives vastly different results.[citation needed]A CoM tends to stabilize the result.
The CoM is similar to the generalmachine learningbaggingmethod, except that the necessary variety of machines in the committee is obtained by training from different starting weights rather than training on different randomly selected subsets of the training data.
The associative neural network (ASNN) is an extension of committee of machines that combines multiple feedforward neural networks and the k-nearest neighbor technique. It uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the kNN. This corrects the bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learns) without retraining. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models.[68]
A physical neural network includes electrically adjustable resistance material to simulate artificial synapses. Examples include the ADALINE memristor-based neural network.[69] An optical neural network is a physical implementation of an artificial neural network with optical components.
Unlike static neural networks, dynamic neural networks adapt their structure and/or parameters to the input during inference[70]showing time-dependent behaviour, such as transient phenomena and delay effects.
Dynamic neural networks in which the parameters may change over time are related to thefast weightsarchitecture (1987),[71]where one neural network outputs the weights of another neural network.
Cascade correlation is an architecture andsupervised learningalgorithm. Instead of just adjusting the weights in a network of fixed topology,[72]Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages: It learns quickly, determines its own size and topology, retains the structures it has built even if the training set changes and requires nobackpropagation.
A neuro-fuzzy network is a fuzzy inference system (FIS) in the body of an artificial neural network. Depending on the FIS type, several layers simulate the processes involved in fuzzy inference, such as fuzzification, inference, aggregation and defuzzification. Embedding an FIS in a general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system.
Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks which differ in their set of activation functions and how they are applied. While typical artificial neural networks often contain only sigmoid functions (and sometimes Gaussian functions), CPPNs can include both types of functions and many others. Furthermore, unlike typical artificial neural networks, CPPNs are applied across the entire space of possible inputs so that they can represent a complete image. Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.
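The resolution-independence claim is easy to demonstrate with a hand-built (not evolved) composition of mixed activation functions; the particular functions and weights below are arbitrary:

```python
import math

# A hand-built CPPN: a composition of periodic, Gaussian and sigmoid
# units over (x, y) coordinates; any resolution can be sampled from
# the same network.
def cppn(x, y):
    r = math.sqrt(x * x + y * y)
    h1 = math.sin(4.0 * x)                                    # periodic unit
    h2 = math.exp(-r * r)                                     # Gaussian unit
    return 1.0 / (1.0 + math.exp(-(2.0 * h1 + 3.0 * h2 - 1.0)))  # sigmoid out

def render(n):
    # Sample the same function on an n x n grid over [-1, 1]^2.
    coords = [-1 + 2 * i / (n - 1) for i in range(n)]
    return [[cppn(x, y) for x in coords] for y in coords]

small = render(8)    # coarse sampling of the encoded image
large = render(64)   # the identical image at higher resolution
```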
Memory networks[73][74]incorporatelong-term memory. The long-term memory can be read and written to, with the goal of using it for prediction. These models have been applied in the context ofquestion answering(QA) where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response.[75]
Insparse distributed memoryorhierarchical temporal memory, the patterns encoded by neural networks are used as addresses forcontent-addressable memory, with "neurons" essentially serving as address encoders anddecoders. However, the early controllers of such memories were not differentiable.[76]
This type of network can add new patterns without re-training. It is done by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays.[77]The network offers real-time pattern recognition and high scalability; this requires parallel processing and is thus best suited for platforms such aswireless sensor networks,grid computing, andGPGPUs.
Hierarchical temporal memory (HTM) models some of the structural andalgorithmicproperties of theneocortex. HTM is abiomimeticmodel based onmemory-predictiontheory. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.
HTM combines existing ideas to mimic the neocortex with a simple design that provides many capabilities. HTM combines and extends approaches used inBayesian networks, spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common inneural networks.
Holographic Associative Memory (HAM) is an analog, correlation-based, associative, stimulus-response system. Information is mapped onto the phase orientation of complex numbers. The memory is effective forassociativememorytasks, generalization and pattern recognition with changeable attention. Dynamic search localization is central to biological memory. In visual perception, humans focus on specific objects in a pattern. Humans can change focus from object to object without learning. HAM can mimic this ability by creating explicit representations for focus. It uses a bi-modal representation of pattern and a hologram-like complex spherical weight state-space. HAMs are useful for optical realization because the underlying hyper-spherical computations can be implemented with optical computation.[78]
Apart fromlong short-term memory(LSTM), other approaches also added differentiable memory to recurrent functions. For example:
Neural Turing machines (NTM)[86]couple LSTM networks to external memory resources, with which they can interact by attentional processes. The combined system is analogous to aTuring machinebut is differentiable end-to-end, allowing it to be efficiently trained bygradient descent. Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.
Differentiable neural computers(DNC) are an NTM extension. They out-performed Neural turing machines,long short-term memorysystems and memory networks on sequence-processing tasks.[87][88][89][90][91]
Approaches that represent previous experiences directly and use a similar experience to form a local model are often called nearest neighbour or k-nearest neighbors methods.[92] Deep learning is useful in semantic hashing,[93] where a deep graphical model models the word-count vectors[94] obtained from a large set of documents. Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by accessing all the addresses that differ by only a few bits from the address of the query document. Unlike sparse distributed memory, which operates on 1000-bit addresses, semantic hashing works on 32- or 64-bit addresses found in a conventional computer architecture.
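The retrieval step can be sketched with hand-made 32-bit codes (in practice the codes would come from the trained graphical model, not be written by hand):

```python
# Hypothetical 32-bit codes standing in for learned document hashes;
# semantically similar documents receive codes a few bits apart.
codes = {
    "doc_a": 0b1011_0010_1100_0101_1011_0010_1100_0101,
    "doc_b": 0b1011_0010_1100_0101_1011_0010_1100_0111,  # 1 bit from doc_a
    "doc_c": 0b0100_1101_0011_1010_0100_1101_0011_1010,  # far from doc_a
}

def hamming(a, b):
    # Number of differing bits between two integer codes.
    return bin(a ^ b).count("1")

def near(query_code, radius=2):
    # Retrieve all documents within a small Hamming radius of the query.
    return [d for d, c in codes.items() if hamming(query_code, c) <= radius]

hits = near(codes["doc_a"])
```

On real hardware this scan is replaced by directly probing the memory addresses within the Hamming ball, which is what makes short (32/64-bit) codes attractive.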
Deep neural networks can be potentially improved by deepening and parameter reduction, while maintaining trainability. While training extremely deep (e.g., 1 million layers) neural networks might not be practical,CPU-like architectures such as pointer networks[95]and neural random-access machines[96]overcome this limitation by using externalrandom-access memoryand other components that typically belong to acomputer architecturesuch asregisters,ALUandpointers. Such systems operate onprobability distributionvectors stored in memory cells and registers. Thus, the model is fully differentiable and trains end-to-end. The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently.
Encoder–decoder frameworks are based on neural networks that map highlystructuredinput to highly structured output. The approach arose in the context ofmachine translation,[97][98][99]where the input and output are written sentences in two natural languages. In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNNlanguage modelto produce the translation.[100]These systems share building blocks: gated RNNs and CNNs and trained attention mechanisms.
Instantaneously trained neural networks(ITNN) were inspired by the phenomenon of short-term learning that seems to occur instantaneously. In these networks the weights of the hidden and the output layers are mapped directly from the training vector data. Ordinarily, they work on binary data, but versions for continuous data that require small additional processing exist.
Spiking neural networks(SNN) explicitly consider the timing of inputs. The network input and output are usually represented as a series of spikes (delta functionor more complex shapes). SNN can process information in thetime domain(signals that vary over time). They are often implemented as recurrent networks. SNN are also a form ofpulse computer.[101]
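A leaky integrate-and-fire unit, the simplest common spiking-neuron model, illustrates the spike-train input/output format; the constants here are arbitrary illustration values:

```python
# Minimal leaky integrate-and-fire neuron (an illustrative sketch, not a
# specific SNN from the literature): the membrane potential leaks toward
# rest, integrates input current, and emits a spike on crossing threshold.
def lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)  # leak + integrate
        if v >= v_thresh:
            spikes.append(t)                   # output is a list of spike times
            v = v_reset
    return spikes

spikes_strong = lif([1.5] * 100)   # suprathreshold drive -> regular spiking
spikes_weak = lif([0.5] * 100)     # subthreshold drive -> no spikes at all
```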
Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.[102]
SNNs, and the temporal correlations of neural assemblies in such networks, have been used to model figure/ground separation and region linking in the visual system.
Spatial neural networks (SNNs) constitute a supercategory of tailored neural networks (NNs) for representing and predicting geographic phenomena. They generally improve both the statistical accuracy and reliability of the a-spatial/classic NNs whenever they handle geo-spatial datasets, and also of the other spatial (statistical) models (e.g. spatial regression models) whenever the geo-spatial datasets' variables depict non-linear relations.[103][104][105] Examples of SNNs are the OSFA spatial neural networks, SVANNs and GWNNs.
Theneocognitronis a hierarchical, multilayered network that was modeled after thevisual cortex. It uses multiple types of units, (originally two, calledsimpleandcomplexcells), as a cascading model for use in pattern recognition tasks.[106][107][108]Local features are extracted by S-cells whose deformation is tolerated by C-cells. Local features in the input are integrated gradually and classified at higher layers.[109]Among the various kinds of neocognitron[110]are systems that can detect multiple patterns in the same input by using back propagation to achieveselective attention.[111]It has been used forpattern recognitiontasks and inspiredconvolutional neural networks.[112]
Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models. Features can be learned using deep architectures such as DBNs,[113] deep Boltzmann machines (DBM),[114] deep auto encoders,[115] convolutional variants,[116][117] ssRBMs,[118] deep coding networks,[119] DBNs with sparse feature learning,[120] RNNs,[121] conditional DBNs,[122] and denoising autoencoders.[123] This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a distributed representation) and must be adjusted together (a high degree of freedom). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. Hierarchical Bayesian (HB) models allow learning from few examples, for example[124][125][126][127][128] for computer vision, statistics and cognitive science.
Compound HD architectures aim to integrate characteristics of both HB and deep networks. The compound HDP-DBM architecture uses a hierarchical Dirichlet process (HDP) as a hierarchical model, incorporating the DBM architecture. It is a full generative model, generalized from abstract concepts flowing through the model layers, which is able to synthesize new examples in novel classes that look "reasonably" natural. All the levels are learned jointly by maximizing a joint log-probability score.[129]
In a DBM with three hidden layers, the probability of a visible input ν is:

{\displaystyle p({\boldsymbol {\nu }},\psi )={\frac {1}{Z}}\sum _{h}\exp \left(\sum _{ij}W_{ij}^{(1)}\nu _{i}h_{j}^{1}+\sum _{j\ell }W_{j\ell }^{(2)}h_{j}^{1}h_{\ell }^{2}+\sum _{\ell m}W_{\ell m}^{(3)}h_{\ell }^{2}h_{m}^{3}\right),}
whereh={h(1),h(2),h(3)}{\displaystyle {\boldsymbol {h}}=\{{\boldsymbol {h}}^{(1)},{\boldsymbol {h}}^{(2)},{\boldsymbol {h}}^{(3)}\}}is the set of hidden units, andψ={W(1),W(2),W(3)}{\displaystyle \psi =\{{\boldsymbol {W}}^{(1)},{\boldsymbol {W}}^{(2)},{\boldsymbol {W}}^{(3)}\}}are the model parameters, representing visible-hidden and hidden-hidden symmetric interaction terms.
A learned DBM model is an undirected model that defines thejoint distributionP(ν,h1,h2,h3){\displaystyle P(\nu ,h^{1},h^{2},h^{3})}. One way to express what has been learned is theconditional modelP(ν,h1,h2∣h3){\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})}and apriortermP(h3){\displaystyle P(h^{3})}.
Here {\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})} represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of {\displaystyle h^{3}}:

{\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})={\frac {1}{Z}}\exp \left(\sum _{ij}W_{ij}^{(1)}\nu _{i}h_{j}^{1}+\sum _{j\ell }W_{j\ell }^{(2)}h_{j}^{1}h_{\ell }^{2}+\sum _{\ell m}W_{\ell m}^{(3)}h_{\ell }^{2}h_{m}^{3}\right).}
A deep predictive coding network (DPCN) is apredictivecoding scheme that uses top-down information to empirically adjust the priors needed for a bottom-upinferenceprocedure by means of a deep, locally connected,generative model. This works by extracting sparsefeaturesfrom time-varying observations using a linear dynamical model. Then, a pooling strategy is used to learn invariant feature representations. These units compose to form a deep architecture and are trained bygreedylayer-wiseunsupervised learning. The layers constitute a kind ofMarkov chainsuch that the states at any layer depend only on the preceding and succeeding layers.
DPCNs predict the representation of the layer, by using a top-down approach using the information in upper layer and temporal dependencies from previous states.[130]
DPCNs can be extended to form aconvolutional network.[130]
Multilayer kernel machines (MKM) are a way of learning highly nonlinear functions by iterative application of weakly nonlinear kernels. They usekernel principal component analysis(KPCA),[131]as a method for theunsupervisedgreedy layer-wise pre-training step of deep learning.[132]
Layer {\displaystyle \ell +1} learns the representation of the previous layer {\displaystyle \ell }, extracting the {\displaystyle n_{\ell }} principal components (PC) of the layer-{\displaystyle \ell } output in the feature domain induced by the kernel. To reduce the dimensionality of the updated representation in each layer, a supervised strategy selects the best informative features among the features extracted by KPCA. The process is:
Some drawbacks accompany the KPCA method for MKMs.
A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding.[133]The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use adeep stacking networkto splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine. The number of levels in the deep convex network is ahyper-parameterof the overall system, to be determined bycross validation.
In mathematics, for example in the study of statistical properties of graphs, a null model is a type of random object that matches one specific object in some of its features, or more generally satisfies a collection of constraints, but which is otherwise taken to be an unbiasedly random structure. The null model is used as a term of comparison, to verify whether the object in question displays some non-trivial features (properties that wouldn't be expected on the basis of chance alone or as a consequence of the constraints), such as community structure in graphs. An appropriate null model behaves in accordance with a reasonable null hypothesis for the behavior of the system under investigation.
One null model of utility in the study of complex networks is that proposed by Newman and Girvan, consisting of a randomized version of an original graph G{\displaystyle G}, produced through edges being rewired at random, under the constraint that the expected degree of each vertex matches the degree of the vertex in the original graph.[1]
The null model is the basic concept behind the definition of modularity, a function which evaluates the goodness of partitions of a graph into clusters. In particular, given a graph G{\displaystyle G} and a specific community partition σ:V(G)→{1,...,b}{\displaystyle \sigma :V(G)\rightarrow \{1,...,b\}} (an assignment of a community index σ(v){\displaystyle \sigma (v)} (here taken as an integer from 1{\displaystyle 1} to b{\displaystyle b}) to each vertex v∈V(G){\displaystyle v\in V(G)} in the graph), the modularity measures the difference between the number of links from/to each pair of communities and that expected in a graph that is completely random in all respects other than the set of degrees of each of the vertices (the degree sequence). In other words, the modularity contrasts the exhibited community structure in G{\displaystyle G} with that of a null model, which in this case is the configuration model (the maximally random graph subject to a constraint on the degree of each vertex).
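As a concrete illustration, the modularity of a partition against the configuration-model null model can be computed directly from the adjacency matrix. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def modularity(A, sigma):
    """Newman-Girvan modularity of partition sigma for adjacency matrix A.

    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(sigma_i, sigma_j),
    where k_i k_j / 2m is the expected number of edges between i and j
    under the configuration-model null model.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)               # degree sequence
    two_m = k.sum()                 # 2m = total degree
    delta = np.equal.outer(sigma, sigma)   # same-community indicator
    return np.sum((A - np.outer(k, k) / two_m) * delta) / two_m
```

For two triangles joined by a single edge, the natural two-community partition gives Q = 5/14 ≈ 0.357, while placing all vertices in one community gives Q = 0, as expected.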
Thisgraph theory-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Null_model
|
In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition, since at a critical fraction of addition the network of small, disconnected clusters merges into significantly larger connected, so-called spanning clusters. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology).
A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of n × n × n vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 − p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path (meaning a path, each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large n is of primary interest. This problem, now called bond percolation, was introduced in the mathematics literature by Broadbent & Hammersley (1957),[1] and has been studied intensively by mathematicians and physicists since then.
In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 − p; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction 1 − p of failures the graph will become disconnected (no large component).
The same questions can be asked for any lattice dimension. As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero–one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p (proof via coupling argument), there must be a critical p (denoted by pc) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p.
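This sharp transition is easy to reproduce numerically. The sketch below is illustrative: it uses site percolation on an n × n grid and a union–find structure with virtual top and bottom nodes to estimate, by Monte Carlo, the probability of a top-to-bottom open path:

```python
import random

def percolates(n, p, rng):
    """Site percolation on an n x n grid: occupy each site with
    probability p and test for an open path from top to bottom."""
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]

    # Union-find over the n*n sites plus virtual top (n*n) and bottom (n*n+1)
    parent = list(range(n * n + 2))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(n):
        for c in range(n):
            if not open_site[r][c]:
                continue
            i = r * n + c
            if r == 0:
                union(i, n * n)          # connect to virtual top
            if r == n - 1:
                union(i, n * n + 1)      # connect to virtual bottom
            if r > 0 and open_site[r - 1][c]:
                union(i, i - n)          # join open neighbor above
            if c > 0 and open_site[r][c - 1]:
                union(i, i - 1)          # join open neighbor to the left
    return find(n * n) == find(n * n + 1)

def crossing_probability(n, p, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(percolates(n, p, rng) for _ in range(trials)) / trials
```

For a 50 × 50 grid the estimate is close to 0 at p = 0.45 and close to 1 at p = 0.7, bracketing the known site-percolation threshold pc ≈ 0.5927 for the square lattice.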
The Flory–Stockmayer theory was the first theory investigating percolation processes.[2]
The history of the percolation model as we know it has its roots in the coal industry. Since the industrial revolution, the economic importance of this source of energy fostered many scientific studies to understand its composition and optimize its use. During the 1930s and 1940s, qualitative analysis by organic chemistry increasingly gave way to more quantitative studies.[3]
In this context, the British Coal Utilisation Research Association (BCURA) was created in 1938. It was a research association funded by the coal mine owners. In 1942, Rosalind Franklin, who had recently graduated in chemistry from the University of Cambridge, joined the BCURA. She started research on the density and porosity of coal. During the Second World War, coal was an important strategic resource. It was used as a source of energy, but was also the main constituent of gas masks.
Coal is a porous medium. To measure its 'real' density, one has to sink it in a liquid or a gas whose molecules are small enough to fill its microscopic pores. While trying to measure the density of coal using several gases (helium, methanol, hexane, benzene), and finding different values depending on the gas used, Rosalind Franklin showed that the pores of coal are made of microstructures of various lengths that act as a microscopic sieve to discriminate the gases. She also discovered that the size of these structures depends on the temperature of carbonation during the coal production. With this research, she obtained a PhD degree and left the BCURA in 1946.[4]
In the mid-1950s, Simon Broadbent worked at the BCURA as a statistician. Among other interests, he studied the use of coal in gas masks. One question was to understand how a fluid can diffuse in the coal pores, modeled as a random maze of open or closed tunnels. In 1954, during a symposium on Monte Carlo methods, he asked John Hammersley questions about the use of numerical methods to analyze this model.[5]
In their 1957 article, Broadbent and Hammersley introduced a mathematical model of this phenomenon: percolation.
For most infinite lattice graphs, pc cannot be calculated exactly, though in some cases there is an exact value. For example:
pc=11−C1g1′(1).{\displaystyle p_{c}={\frac {1}{1-C}}{\frac {1}{g_{1}'(1)}}.}[11]
This indicates that for a given degree distribution, clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network at the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable.[12]
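As a numerical illustration of the formula above (assuming, for illustration, a Poisson-like degree distribution, for which the mean excess degree g1′(1) equals the mean degree; the function name is ours):

```python
def percolation_threshold(mean_excess_degree, clustering):
    """Approximate percolation threshold for a clustered random network,
    p_c = 1/(1 - C) * 1/g1'(1), where g1'(1) is the mean excess degree
    and C the clustering coefficient (valid for C < 1)."""
    return 1.0 / (1.0 - clustering) / mean_excess_degree

# With mean excess degree 4, clustering raises the threshold:
unclustered = percolation_threshold(4.0, 0.0)   # 0.25
clustered = percolation_threshold(4.0, 0.2)     # 0.3125
```

The example shows the effect described in the text: for a fixed degree distribution, increasing the clustering coefficient C pushes the critical point upward.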
The main fact in the subcritical phase is "exponential decay". That is, when p < pc, the probability that a specific point (for example, the origin) is contained in an open cluster (meaning a maximal connected set of "open" edges of the graph) of size r decays to zero exponentially in r. This was proved for percolation in three and more dimensions by Menshikov (1986) and independently by Aizenman & Barsky (1987). In two dimensions, it formed part of Kesten's proof that pc = 1/2.[13]
The dual graph of the square lattice ℤ2 is also the square lattice. It follows that, in two dimensions, the supercritical phase is dual to a subcritical percolation process. This provides essentially full information about the supercritical model with d = 2. The main result for the supercritical phase in three and more dimensions is that, for sufficiently large N, there is almost certainly an infinite open cluster in the two-dimensional slab ℤ2 × [0, N]^(d − 2). This was proved by Grimmett & Marstrand (1990).[14]
In two dimensions with p < 1/2, there is with probability one a unique infinite closed cluster (a closed cluster is a maximal connected set of "closed" edges of the graph). Thus the subcritical phase may be described as finite open islands in an infinite closed ocean. When p > 1/2 just the opposite occurs, with finite closed islands in an infinite open ocean. The picture is more complicated when d ≥ 3, since pc < 1/2, and there is coexistence of infinite open and closed clusters for p between pc and 1 − pc.
Percolation has a singularity at the critical point p = pc, and many properties behave as a power law in p−pc{\displaystyle p-p_{c}} near pc{\displaystyle p_{c}}. Scaling theory predicts the existence of critical exponents, depending on the number d of dimensions, that determine the class of the singularity. When d = 2 these predictions are backed up by arguments from conformal field theory and Schramm–Loewner evolution, and include predicted numerical values for the exponents. Most of these predictions are conjectural except when the number d of dimensions satisfies either d = 2 or d ≥ 6. They include:
See Grimmett (1999).[15] In 11 or more dimensions, these facts are largely proved using a technique known as the lace expansion. It is believed that a version of the lace expansion should be valid for 7 or more dimensions, perhaps with implications also for the threshold case of 6 dimensions. The connection of percolation to the lace expansion is found in Hara & Slade (1990).[16]
In two dimensions, the first fact ("no percolation in the critical phase") is proved for many lattices, using duality. Substantial progress has been made on two-dimensional percolation through the conjecture of Oded Schramm that the scaling limit of a large cluster may be described in terms of a Schramm–Loewner evolution. This conjecture was proved by Smirnov (2001)[17] in the special case of site percolation on the triangular lattice.
Percolation theory has been used to successfully predict the fragmentation of biological virus shells (capsids),[19][20] with the fragmentation threshold of the Hepatitis B virus capsid predicted and detected experimentally.[21] When a critical number of subunits has been randomly removed from the nanoscopic shell, it fragments, and this fragmentation may be detected using Charge Detection Mass Spectroscopy (CDMS) among other single-particle techniques. This is a molecular analog to the common board game Jenga, and has relevance to the broader study of virus disassembly. More stable viral particles (tilings with greater fragmentation thresholds) are found in greater abundance in nature.[19]
Percolation theory has been applied to studies of how environment fragmentation impacts animal habitats[22] and models of how the plague bacterium Yersinia pestis spreads.[23]
|
https://en.wikipedia.org/wiki/Percolation_theory
|
Actor–network theory (ANT) is a theoretical and methodological approach to social theory where everything in the social and natural worlds exists in constantly shifting networks of relationships. It posits that nothing exists outside those relationships. All the factors involved in a social situation are on the same level, and thus there are no external social forces beyond what and how the network participants interact at present. Thus, objects, ideas, processes, and any other relevant factors are seen as just as important in creating social situations as humans.
ANT holds that social forces do not exist in themselves, and therefore cannot be used to explain social phenomena. Instead, strictly empirical analysis should be undertaken to "describe" rather than "explain" social activity. Only after this can one introduce the concept of social forces, and only as an abstract theoretical concept, not something which genuinely exists in the world.[1]
Although it is best known for its controversial insistence on the capacity of nonhumans to act or participate in systems or networks or both, ANT is also associated with forceful critiques of conventional and critical sociology. Developed by science and technology studies (STS) scholars Michel Callon, Madeleine Akrich and Bruno Latour, the sociologist John Law, and others, it can more technically be described as a "material-semiotic" method. This means that it maps relations that are simultaneously material (between things) and semiotic (between concepts). It assumes that many relations are both material and semiotic. The term actor-network theory was coined by John Law in 1992 to describe the work being done across case studies in different areas at the Centre de Sociologie de l'Innovation at the time.[2]
The theory holds that everything in the social and natural worlds, human and nonhuman, interacts in shifting networks of relationships, with no elements existing outside those networks. ANT challenges many traditional approaches by defining nonhumans as actors equal to humans. This claim provides a new perspective when applying the theory in practice.
Broadly speaking, ANT is a constructivist approach in that it avoids essentialist explanations of events or innovations (i.e. ANT explains a successful theory by understanding the combinations and interactions of elements that make it successful, rather than saying it is true and the others are false).[3] Likewise, it is not a cohesive theory in itself. Rather, ANT functions as a strategy that assists people in being sensitive to terms and the often unexplored assumptions underlying them.[4] It is distinguished from many other STS and sociological network theories for its distinct material-semiotic approach.
ANT was first developed at the Centre de Sociologie de l'Innovation (CSI) of the École nationale supérieure des mines de Paris in the early 1980s by staff (Michel Callon, Madeleine Akrich, Bruno Latour) and visitors (including John Law).[3] The 1984 book co-authored by John Law and fellow sociologist Peter Lodge (Science for Social Scientists; London: Macmillan Press Ltd.) is a good example of early explorations of how the growth and structure of knowledge could be analyzed and interpreted through the interactions of actors and networks. Initially created in an attempt to understand processes of innovation and knowledge-creation in science and technology, the approach drew on existing work in STS, on studies of large technological systems, and on a range of French intellectual resources including the semiotics of Algirdas Julien Greimas, the writing of philosopher Michel Serres, and the Annales School of history.
ANT appears to reflect many of the preoccupations of French post-structuralism, and in particular a concern with non-foundational and multiple material-semiotic relations.[3] At the same time, it was much more firmly embedded in English-language academic traditions than most post-structuralist-influenced approaches. Its grounding in (predominantly English) science and technology studies was reflected in an intense commitment to the development of theory through qualitative empirical case-studies. Its links with largely US-originated work on large technical systems were reflected in its willingness to analyse large scale technological developments in an even-handed manner to include political, organizational, legal, technical and scientific factors.
Many of the characteristic ANT tools (including the notions of translation, generalized symmetry and the "heterogeneous network"), together with a scientometric tool for mapping innovations in science and technology ("co-word analysis"), were initially developed during the 1980s, predominantly in and around the CSI. The "state of the art" of ANT in the late 1980s is well described in Latour's 1987 text, Science in Action.[5]
From about 1990 onwards, ANT started to become popular as a tool for analysis in a range of fields beyond STS. It was picked up and developed by authors in parts of organizational analysis, informatics, health studies, geography, sociology, anthropology, archaeology, feminist studies, technical communication, and economics.
As of 2008[update], ANT is a widespread, if controversial, range of material-semiotic approaches for the analysis of heterogeneous relations. In part because of its popularity, it is interpreted and used in a wide range of alternative and sometimes incompatible ways. There is no orthodoxy in current ANT, and different authors use the approach in substantially different ways. Some authors talk of "after-ANT" to refer to "successor projects" blending together different problem-focuses with those of ANT.[6]
An actor (actant) is something that acts or to which activity is granted by others. It implies no motivation of human individual actors nor of humans in general. An actant can literally be anything provided it is granted to be the source of action.[7] In other words, an actor, in this circumstance, is considered as any entity that does things. For example, in the "Pasteur Network", microorganisms are not inert: they cause unsterilized materials to ferment while leaving sterilized materials unaffected. If they took other actions, that is, if they did not cooperate with Pasteur – if they did not take action (at least according to Pasteur's intentions) – then Pasteur's story might be a bit different. It is in this sense that Latour can refer to microorganisms as actors.[7]
Under the framework of ANT, the principle of generalized symmetry[8] requires that all entities be described in the same terms before a network is considered. Any differences between entities are generated in the network of relations, and do not exist before any network is applied.
Human normally refers to human beings and their human behaviors.
Traditionally, nonhuman entities are creatures including plants, animals, geology, and natural forces, as well as collective human creations such as arts and languages.[9] In ANT, nonhuman covers multiple entities including things, objects, animals, natural phenomena, material structures, transportation devices, texts, and economic goods. But nonhuman actors do not cover entities such as humans, supernatural beings, and other symbolic objects in nature.[10]
As the term implies, the actor-network is the central concept in ANT. The term "network" is somewhat problematic in that it, as Latour[1][11][12] notes, has a number of unwanted connotations. Firstly, it implies that what is described takes the shape of a network, which is not necessarily the case. Secondly, it implies "transportation without deformation," which, in ANT, is not possible since any actor-network involves a vast number of translations. Latour,[12] however, still contends that network is a fitting term to use, because "it has no a priori order relation; it is not tied to the axiological myth of a top and of a bottom of society; it makes absolutely no assumption whether a specific locus is macro- or micro- and does not modify the tools to study the element 'a' or the element 'b'." This use of the term "network" is very similar to Deleuze and Guattari's rhizomes; Latour[11] even remarks tongue-in-cheek that he would have no objection to renaming ANT "actant-rhizome ontology" if it only had sounded better, which hints at Latour's uneasiness with the word "theory".
Actor–network theory tries to explain how material–semiotic networks come together to act as a whole; the clusters of actors involved in creating meaning are both material and semiotic. As a part of this it may look at explicit strategies for relating different elements together into a network so that they form an apparently coherent whole. These networks are potentially transient, existing in a constant making and re-making.[1] This means that relations need to be repeatedly "performed" or the network will dissolve. They also assume that networks of relations are not intrinsically coherent, and may indeed contain conflicts. Social relations, in other words, are only ever in process, and must be performed continuously.
The Pasteur story mentioned above introduced the patterned network of diverse materials, which is called the idea of 'heterogeneous networks'.[7] The basic idea of the patterned network is that humans are not the only factor or contributor in society, or in any social activities and networks. Thus, the network comprises machines, animals, things, and any other objects.[13] For those nonhuman actors, it might be hard for people to imagine their roles in the network. For example, say two people, Jacob and Mike, are speaking through texts. With current technology, they are able to communicate with each other without seeing each other in person. Therefore, when typing or writing, the communication is mediated not directly by either of them, but by a network of objects, like their computers or cell phones.[13]
If taken to its logical conclusion, then, nearly any actor can be considered merely a sum of other, smaller actors. A car is an example of a complicated system. It contains many electronic and mechanical components, all of which are essentially hidden from view to the driver, who simply deals with the car as a single object. This effect is known as punctualisation,[13] and is similar to the idea of encapsulation in object-oriented programming.
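The analogy with encapsulation can be made concrete in a toy sketch (the class names are purely illustrative): the driver's code touches only the `Car` object, and the `Engine` inside becomes visible only on breakdown, mirroring depunctualisation:

```python
class Engine:
    """A component actor, normally hidden inside the punctualised 'car'."""
    def __init__(self):
        self.working = True

    def start(self):
        if not self.working:
            raise RuntimeError("engine failure")
        return "running"

class Car:
    """The punctualised network: the driver deals with one object,
    not with the many actors (engine, electronics, ...) inside it."""
    def __init__(self):
        self._engine = Engine()   # hidden component (encapsulation)

    def drive(self):
        return self._engine.start()

car = Car()
status = car.drive()        # the network acts as a single unit
car._engine.working = False # a component actor fails...
try:
    car.drive()             # ...and the driver now confronts the parts
except RuntimeError:
    depunctualised = True
```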
When an actor network breaks down, the punctualisation effect tends to cease as well.[13] In the automobile example above, a non-working engine would cause the driver to become aware of the car as a collection of parts rather than just a vehicle capable of transporting him or her from place to place. This can also occur when elements of a network act contrarily to the network as a whole. In his book Pandora's Hope,[14] Latour likens depunctualization to the opening of a black box. When closed, the box is perceived simply as a box, although when it is opened all elements inside it become visible.
Central to ANT is the concept of translation, which is sometimes referred to as the sociology of translation, in which innovators attempt to create a forum, a central network in which all the actors agree that the network is worth building and defending. In his widely debated 1986 study of how marine biologists tried to restock the St Brieuc Bay in order to produce more scallops, Michel Callon defined 4 moments of translation:[8]
Also important to the notion is the role of network objects in helping to smooth out the translation process by creating equivalencies between what would otherwise be very challenging people, organizations or conditions to mesh together. Bruno Latour spoke about this particular task of objects in his work Reassembling the Social.[1]
For the rethinking of social relations as networks, Latour mobilizes a concept from Michel Serres[15] and expands on it in order “to locate the position of these strange new hybrids”.[16] Quasi-objects are simultaneously quasi-subjects – the prefix quasi denotes that neither ontological status as subject or object is pure or permanent, but that these are dynamic entities whose status shifts, depending on their respective momentous activity and their according position in a collective or network.[17] What is decisive is circulation and participation, from which networks emerge. Examples of quasi-objects are language, money, bread, love, or the ball in a soccer game: all of these human or non-human, material or immaterial actants have no agency (and thus, subject-status) in themselves; however, they can be seen as the connective tissue underlying – or even activating – the interactions in which they are enmeshed.[18] In Reassembling the Social, Latour refers to these in-between actants as “the mediators whose proliferation generates, among many other entities, what could be called quasi-objects and quasi-subjects.”[1]
Actor–network theory refers to these creations as tokens or quasi-objects which are passed between actors within the network. As the token is increasingly transmitted or passed through the network, it becomes increasingly punctualized and also increasingly reified. When the token is decreasingly transmitted, or when an actor fails to transmit the token (e.g., the oil pump breaks), punctualization and reification are decreased as well.
Although it is called a "theory", ANT does not usually explain "why" a network takes the form that it does.[1] Rather, ANT is a way of thoroughly exploring the relational ties within a network (which can be a multitude of different things). As Latour notes,[11] "explanation does not follow from description; it is description taken that much further." It is not, in other words, a theory "of" anything, but rather a method, or a "how-to book" as Latour[1] puts it.
The approach is related to other versions of material-semiotics (notably the work of philosophers Gilles Deleuze and Michel Foucault, and feminist scholar Donna Haraway). It can also be seen as a way of being faithful to the insights of ethnomethodology and its detailed descriptions of how common activities, habits and procedures sustain themselves. Similarities between ANT and symbolic interactionist approaches, such as the newer forms of grounded theory like situational analysis, exist,[19] although Latour[20] objects to such a comparison.
Although ANT is mostly associated with studies of science and technology and with the sociology of science, it has been making steady progress in other fields of sociology as well. ANT is adamantly empirical, and as such yields useful insights and tools for sociological inquiry in general. ANT has been deployed in studies of identity and subjectivity, urban transportation systems, and passion and addiction.[21] It also makes steady progress in political and historical sociology.[22]
The distinction between intermediaries and mediators is key to ANT sociology. Intermediaries are entities which make no difference (to some interesting state of affairs which we are studying) and so can be ignored. They transport the force of some other entity more or less without transformation and so are fairly uninteresting. Mediators are entities which multiply difference and so should be the object of study. Their outputs cannot be predicted by their inputs. From an ANT point of view sociology has tended to treat too much of the world as intermediaries.
For instance, a sociologist might take silk and nylon as intermediaries, holding that the former "means", "reflects", or "symbolises" the upper classes and the latter the lower classes. In such a view the real-world silk–nylon difference is irrelevant – presumably many other material differences could also, and do also, transport this class distinction. But taken as mediators these fabrics would have to be engaged with by the analyst in their specificity: the internal real-world complexities of silk and nylon suddenly appear relevant, and are seen as actively constructing the ideological class distinction which they once merely reflected.
For the committed ANT analyst, social things—like class distinctions in taste in the silk and nylon example, but also groups and power—must constantly be constructed or performed anew through complex engagements with complex mediators. There is no stand-alone social repertoire lying in the background to be reflected off, expressed through, or substantiated in, interactions (as in an intermediary conception).[1]
Bruno Latour's articulation of reflexivity in Actor-Network Theory (ANT) reframes it as an opportunity rather than a problem.[12]His argument addresses the limitations of reflexivity as traditionally conceived in relativist epistemologies and replaces it with a pragmatic, relational approach tied to ANT's broader principles. Latour argues that the observer is merely one actor among many within the network, eliminating the problem of reflexivity as a paradox of status. Reflexivity instead emerges through the tangible work of navigating and translating between networks, requiring the observer to engage actively, like any other actor, in the labour of connection and translation. This grounded form of reflexivity enhances the observer's role as a "world builder" and reinforces ANT's emphasis on the relational and dynamic nature of knowledge creation.
The belief that neither a human nor a nonhuman is pure, in the sense that neither is human or nonhuman in an absolute sense, but rather beings created via interactions between the two. Humans are thus regarded as quasi-subjects, while nonhumans are regarded as quasi-objects.[7]
Recently, there has been a movement to introduce actor network theory as an analytical tool to a range of applied disciplines outside of sociology, including nursing, public health, urban studies (Farias and Bender, 2010), and community, urban, and regional planning (Beauregard, 2012;[23]Beauregard and Lieto, 2015; Rydin, 2012;[24]Rydin and Tate, 2016, Tate, 2013).[25]
Actor–network theory has become increasingly prominent within the disciplines of international relations and political science.
Theoretically, scholars within IR have employed ANT in order to disrupt traditional world political binaries (civilised/barbarian, democratic/autocratic, etc.),[26] consider the implications of a posthuman understanding of IR,[27] explore the infrastructures of world politics,[28] and consider the effects of technological agency.[29]
Empirically, IR scholars have drawn on insights from ANT in order to study phenomena including political violences like the use of torture and drones,[26] piracy and maritime governance,[30] and garbage.[31]
The actor–network theory can also be applied to design, using a perspective that is not simply limited to an analysis of an object's structure. From the ANT viewpoint, design is seen as a series of features that account for a social, psychological, and economical world. ANT argues that objects are designed to shape human action and mold or influence decisions. In this way, the objects' design serves to mediate human relationships and can even impact our morality, ethics, and politics.[32]
The literary critic Rita Felski has argued that ANT offers the fields of literary criticism and cultural studies vital new modes of interpreting and engaging with literary texts. She claims that Latour's model has the capacity to allow "us to wiggle out of the straitjacket of suspicion," and to offer meaningful solutions to the problems associated with critique.[33] The theory has been crucial to her formulation of postcritique. Felski suggests that the purpose of applying ANT to literary studies "is no longer to diminish or subtract from the reality of the texts we study but to amplify their reality, as energetic coactors and vital partners."[34]
In the study of Christianity by anthropologists, ANT has been employed in a variety of ways of understanding how humans interact with nonhuman actors. Some have been critical of the field of Anthropology of Religion for its tendency to presume that God is not a social actor. ANT is used to problematize the role of God, as a nonhuman actor, and speak of how They affect religious practice.[35] Others have used ANT to speak of the structures and placements of religious buildings, especially in cross-cultural contexts, which can see architecture as agents making God's presence tangible.[36]
ANT has been considered more than just a theory: it is also a methodology that can be applied in different kinds of studies. With the development of digital communication, ANT is now commonly applied in scientific fields such as IS research, and it has widened the horizons of researchers in the arts as well.
ANT has been a major influence on the development of design. In the past, researchers and scholars in the design field viewed the world mainly as a setting for human interaction: whatever was designed was designed for human action. Under ANT, however, design comes to be viewed as a connector, and as this view has changed, design itself has come to be considered more important in daily life. Scholars analyze how design shapes, connects, reflects, and interacts with our daily activities.[37]
ANT has also been widely applied in museums. ANT proposes that it is difficult to discern the 'hard' from the 'soft' components of the apparatus in curatorial practice; that the object 'in progress' of being curated is slick and difficult to separate from the setting of the experiment or the experimenter's identity.[38]
In recent years, actor-network theory has gained considerable traction, and a growing number of IS academics are using it explicitly in their research. Although these applications vary greatly, all of the scholars cited below agree that the theory provides new notions and ideas for understanding the socio-technical character of information systems.[39] Bloomfield and colleagues present an intriguing case study of the development of a specific set of resource management information systems in the UK National Health Service, and they evaluate their findings using concepts from actor-network theory. The actor-network approach does not prioritize social or technological aspects, which mirrors the situation in the case study, where arguments about social structures and technology are intertwined within actors' discourse as they try to persuade others to align with their own goals. The research emphasizes the interpretative flexibility of information technology and systems, in the sense that seemingly similar systems produce drastically different outcomes in different locales as a result of the specific translation and network-building processes that occurred. They show how the boundary between the technological and the social, as well as the link between them, is the subject of constant battles and trials of strength in the creation of facts, rather than taking technology for granted.[39]
Nonhumans make at least four kinds of contributions as actors within ANT.[10]
Some critics[48] have argued that research based on ANT perspectives remains entirely descriptive and fails to provide explanations for social processes. ANT, like comparable social scientific methods, requires judgement calls from the researcher as to which actors are important within a network and which are not. Critics argue that the importance of particular actors cannot be determined in the absence of "out-of-network" criteria, a limitation they liken to what Gödel's incompleteness theorems establish about deceptively coherent formal systems. Similarly, others argue that actor-networks risk degenerating into endless chains of association (six degrees of separation: we are all networked to one another). Other research perspectives such as social constructionism, the social shaping of technology, social network theory, normalization process theory, and diffusion of innovations theory are held to be important alternatives to ANT approaches.
Key early criticism came from other members of the STS community, in particular the "Epistemological Chicken" debate between Collins and Yearley, with responses from Latour and Callon as well as Woolgar. In an article in Science as Practice and Culture, sociologist Harry Collins and his co-writer Steven Yearley argue that the ANT approach is a step backwards towards the positivist and realist positions held by early theory of science.[49] Collins and Yearley accused ANT's approach of collapsing into an endless relativist regress.[50]
Whittle and organization studies professor André Spicer note that "ANT has also sought to move beyond deterministic models that trace organizational phenomena back to powerful individuals, social structures, hegemonic discourses or technological effects. Rather, ANT prefers to seek out complex patterns of causality rooted in connections between actors." They argue that ANT's ontological realism makes it "less well equipped for pursuing a critical account of organizations—that is, one which recognises the unfolding nature of reality, considers the limits of knowledge and seeks to challenge structures of domination."[51] This implies that ANT does not account for pre-existing structures, such as power, but rather sees these structures as emerging from the actions of actors within the network and their ability to align in pursuit of their interests. Accordingly, ANT can be seen as an attempt to re-introduce Whig history into science and technology studies; like the myth of the heroic inventor, ANT can be seen as an attempt to explain successful innovators by saying only that they were successful. Likewise, for organization studies, Whittle and Spicer assert that ANT is "ill-suited to the task of developing political alternatives to the imaginaries of market managerialism."
Actor–network theory insists on the capacity of nonhumans to be actors or participants in networks and systems. Critics, including figures such as Langdon Winner, maintain that properties such as intentionality fundamentally distinguish humans from animals or from "things" (see activity theory). ANT scholars respond with the following arguments:
ANT has been criticized as amoral. Wiebe Bijker has responded to this criticism by stating that the amorality of ANT is not a necessity. Moral and political positions are possible, but one must first describe the network before taking up such positions. This position has been further explored by Stuart Shapiro, who contrasts ANT with the history of ecology and argues that research decisions are moral rather than methodological, but that this moral dimension has been sidelined.[52]
In a workshop called "On Recalling ANT", Latour himself stated that there are four things wrong with actor-network theory: "actor", "network", "theory" and the hyphen.[53] In a later book, however, Latour reversed himself, accepting the wide use of the term, "including the hyphen."[1]: 9 He further remarked how he had been helpfully reminded that the ANT acronym "was perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler"—qualitative hallmarks of actor-network epistemology.[1]
https://en.wikipedia.org/wiki/Actor-network_theory
Blockmodeling is a set of methods within a coherent framework that is used for analyzing social structure and for setting out procedures for partitioning (clustering) a social network's units (nodes, vertices, actors), based on specific patterns which form a distinctive structure through interconnectivity.[1][2] It is primarily used in statistics, machine learning and network science.
As an empirical procedure, blockmodeling assumes that the units in a specific network can be grouped according to the extent to which they are equivalent. Equivalence can be structural, regular or generalized.[3] Using blockmodeling, a network can be analyzed through newly created blockmodels, which transform a large and complex network into a smaller and more comprehensible one. At the same time, blockmodeling is used to operationalize social roles.
While some contend that blockmodeling is simply a set of clustering methods, Bonacich and McConaghy state that "it is a theoretically grounded and algebraic approach to the analysis of the structure of relations". Blockmodeling's unique ability lies in the fact that it considers the structure not just as a set of direct relations, but also takes into account all other possible compound relations that are based on the direct ones.[4]
The principles of blockmodeling were first introduced by François Lorrain and Harrison C. White in 1971.[2] Blockmodeling is considered "an important set of network analytic tools" as it deals with the delineation of role structures (the well-defined places in social structures, also known as positions) and with discerning the fundamental structure of social networks.[5]: 2, 3 According to Batagelj, the primary "goal of blockmodeling is to reduce a large, potentially incoherent network to a smaller comprehensible structure that can be interpreted more readily".[6] Blockmodeling was at first used for analysis in sociometry and psychometrics, but has since spread to other sciences.[7]
A network as a system is composed of (or defined by) two different sets: one set of units (nodes, vertices, actors) and one set of links between the units. Using both sets, it is possible to create a graph describing the structure of the network.[8]
During blockmodeling, the researcher is faced with two problems: how to partition the units (e.g., how to determine the clusters (or classes) that then form the vertices in a blockmodel) and how to determine the links in the blockmodel (together with the values of these links).[9]
In the social sciences, the networks are usually social networks, composed of several individuals (units) and selected social relationships among them (links). Real-world networks can be large and complex; blockmodeling is used to simplify them into smaller structures that are easier to interpret. Specifically, blockmodeling partitions the units into clusters and then determines the ties among the clusters. At the same time, blockmodeling can be used to explain the social roles existing in the network, as it is assumed that the created cluster of units mimics (or is closely associated with) the units' social roles.[8]
Blockmodeling can thus be defined as a set of approaches for partitioning units into clusters (also known as positions) and links into blocks, which are further defined by the newly obtained clusters. A block (also blockmodel) is defined as a submatrix that shows the interconnectivity (links) between nodes present in the same or different clusters.[8] Each of these positions in the cluster is defined by a set of (in)direct ties to and from other social positions.[10] These links (connections) can be directed or undirected; there can be multiple links between the same pair of objects, and links can carry weights. If there are no multiple links in a network, it is called a simple network.[11]: 8
A matrix representation of a graph is composed of ordered units, in rows and columns, based on their names. Units with similar patterns of links are partitioned into the same clusters. Clusters are then arranged so that units from the same cluster are placed next to each other, thus preserving interconnectivity. In the next step, the units (from the same clusters) are transformed into a blockmodel. With this, several blockmodels are usually formed, one being a core cluster and the others cohesive ones; the core cluster is always connected to the cohesive ones, while cohesive clusters cannot be linked together. Clustering of nodes is based on equivalence, such as structural or regular equivalence.[8] The primary objective of the matrix form is to visually present relations between the persons included in the cluster. These ties are coded dichotomously (as present or absent); the rows of the matrix indicate the sources of the ties, while the columns represent their destinations.[10]
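The matrix manipulation described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the blockmodeling literature: the helper name `block_densities` and the toy core–periphery network are invented for the example. It reorders an adjacency matrix so that cluster members sit next to each other and then computes the density of each block.

```python
import numpy as np

def block_densities(A, clusters):
    """Reorder an adjacency matrix by cluster membership and compute
    the density (mean tie value) of each block between two clusters."""
    A = np.asarray(A)
    labels = sorted(set(clusters))
    # Permutation placing members of the same cluster next to each other.
    order = [i for lab in labels for i, c in enumerate(clusters) if c == lab]
    reordered = A[np.ix_(order, order)]
    # Density of ties from cluster r to cluster s.
    dens = {}
    for r in labels:
        rows = [i for i, c in enumerate(clusters) if c == r]
        for s in labels:
            cols = [j for j, c in enumerate(clusters) if c == s]
            dens[(r, s)] = A[np.ix_(rows, cols)].mean()
    return reordered, dens

# Toy network: a dense "core" (units 0, 1) and a periphery (units 2, 3)
# whose members link only to the core, not to each other.
A = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 0],
     [1, 1, 0, 0]]
reordered, dens = block_densities(A, [0, 0, 1, 1])
print(dens[(0, 1)], dens[(1, 1)])  # core-to-periphery block is full, periphery block is empty
```

The resulting density table is one simple way to read off the "core connected to cohesive clusters, cohesive clusters not linked together" pattern mentioned above.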
Equivalence can follow two basic approaches: equivalent units have the same connection pattern to the same neighbors, or they have the same or a similar connection pattern to different neighbors. If units are connected to the rest of the network in identical ways, they are structurally equivalent.[3] Units are regularly equivalent when they are equivalently connected to equivalent others.[2]
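Structural equivalence, the stricter of the two notions, is directly checkable from the adjacency matrix. The sketch below is illustrative only (the function name and the star-graph example are my own); it treats two units as structurally equivalent when their ties to every third unit are identical, ignoring the entries between the pair itself.

```python
def structurally_equivalent(A, i, j):
    """Check whether units i and j are structurally equivalent:
    identical incoming and outgoing ties to every other unit
    (entries involving only i and j themselves are ignored)."""
    n = len(A)
    for k in range(n):
        if k in (i, j):
            continue
        if A[i][k] != A[j][k] or A[k][i] != A[k][j]:
            return False
    return True

# In a star graph all leaves are structurally equivalent:
# each links only to the hub (unit 0).
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(structurally_equivalent(star, 1, 2))  # True: two leaves
print(structurally_equivalent(star, 0, 1))  # False: hub vs. leaf
```

Regular equivalence cannot be checked this locally, since it refers to equivalence classes of neighbors rather than to identical neighbors.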
When blockmodeling, it is necessary to consider that the results may be affected by measurement errors in the initial stage of acquiring the data.[12]
The kind of network undergoing blockmodeling dictates the approach. Networks can be one-mode or two-mode: in the former, all units are of the same type and any unit can be connected to any other, while in the latter units are connected only to units of a different type.[5]: 6–10 Regarding relationships between units, networks can be single-relational or multi-relational. Furthermore, networks can be temporal or multilevel, and also binary (only 0 and 1), signed (allowing negative ties) or valued (other values are possible).
Different approaches to blockmodeling can be grouped into two main classes: deterministic blockmodeling and stochastic blockmodeling approaches. Deterministic blockmodeling is further divided into direct and indirect blockmodeling approaches.[8]
Among direct blockmodeling approaches are structural equivalence and regular equivalence.[2] Structural equivalence is a state in which units are connected to the rest of the network in identical ways, while regular equivalence occurs when units are equally related to equivalent others (the units do not necessarily share neighbors, but have neighbors that are themselves similar).[3][5]: 24
Indirect blockmodeling approaches, where partitioning is handled as a traditional cluster analysis problem (measuring (dis)similarity results in a (dis)similarity matrix), include:[8][2]
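The indirect route described above starts from a pairwise dissimilarity matrix computed from the units' tie profiles. The sketch below is a minimal illustration under my own assumptions (a Hamming-style count of differing ties, with the entries between the pair itself excluded); real analyses often use a corrected Euclidean measure instead.

```python
def dissimilarity_matrix(A):
    """Pairwise dissimilarity between units' tie profiles: the number
    of other units k at which the rows or columns of i and j differ
    (a simple Hamming-style measure)."""
    n = len(A)
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = sum(1 for k in range(n)
                    if k not in (i, j)
                    and (A[i][k] != A[j][k] or A[k][i] != A[k][j]))
            D[i][j] = D[j][i] = d
    return D

# Star graph again: unit 0 is the hub, units 1-3 are leaves.
A = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
D = dissimilarity_matrix(A)
print(D[1][2])  # 0: leaves have identical tie profiles
print(D[0][1])  # 2: the hub differs from a leaf at the two other leaves
```

The resulting matrix `D` can then be fed into any standard hierarchical clustering routine, and the clusters it produces become the candidate positions of the blockmodel.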
According to Brusco and Steinley (2011),[14] blockmodeling can be categorized along a number of dimensions:[15]
Blockmodels (sometimes also block models) are structures in which:
Computer programs can partition the social network according to pre-set conditions.[17]: 333 When empirical blocks can be reasonably approximated in terms of ideal blocks, such blockmodels can be reduced to a blockimage, which is a representation of the original network, capturing its underlying 'functional anatomy'.[18] Thus, blockmodels can "permit the data to characterize their own structure", while not seeking to manifest a preconceived structure imposed by the researcher.[19]
Blockmodels can be created indirectly or directly, based on the construction of the criterion function. Indirect construction refers to a function based on a "compatible (dis)similarity measure between pairs of units", while direct construction is "a function measuring the fit of real blocks induced by a given clustering to the corresponding ideal blocks with perfect relations within each cluster and between clusters according to the considered types of connections (equivalence)".[20]
Blockmodels can be specified according to intuition, substance or insight into the nature of the studied network; this can result in models such as the following:[5]: 16–24
Blockmodeling is done with specialized computer programs, dedicated to the analysis of networks or to blockmodeling in particular, such as:
https://en.wikipedia.org/wiki/Blockmodeling
Digital humanities (DH) is an area of scholarly activity at the intersection of computing or digital technologies and the disciplines of the humanities. It includes the systematic use of digital resources in the humanities, as well as the analysis of their application.[1][2] DH can be defined as new ways of doing scholarship that involve collaborative, transdisciplinary, and computationally engaged research, teaching, and publishing.[3] It brings digital tools and methods to the study of the humanities with the recognition that the printed word is no longer the main medium for knowledge production and distribution.[3]
By producing and using new applications and techniques, DH makes new kinds of teaching possible, while at the same time studying and critiquing how these impact cultural heritage and digital culture.[2]A distinctive feature of DH is its cultivation of a two-way relationship between the humanities and the digital: the field both employs technology in the pursuit of humanities research and subjects technology to humanistic questioning and interrogation.
The definition of the digital humanities is being continually formulated by scholars and practitioners. Since the field is constantly growing and changing, specific definitions can quickly become outdated or unnecessarily limit future potential.[4] The second volume of Debates in the Digital Humanities (2016) acknowledges the difficulty in defining the field: "Along with the digital archives, quantitative analyses, and tool-building projects that once characterized the field, DH now encompasses a wide range of methods and practices: visualizations of large image sets, 3D modeling of historical artifacts, 'born digital' dissertations, hashtag activism and the analysis thereof, alternate reality games, mobile makerspaces, and more. In what has been called 'big tent' DH, it can at times be difficult to determine with any specificity what, precisely, digital humanities work entails."[5]
Historically, the digital humanities developed out of humanities computing and has become associated with other fields, such as humanistic computing, social computing, and media studies. In concrete terms, the digital humanities embraces a variety of topics, from curating online collections of primary sources (primarily textual) to the data mining of large cultural data sets to topic modeling. Digital humanities incorporates both digitized (remediated) and born-digital materials and combines the methodologies from traditional humanities disciplines (such as rhetoric, history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) and the social sciences,[6] with tools provided by computing (such as hypertext, hypermedia, data visualisation, information retrieval, data mining, statistics, text mining, digital mapping) and digital publishing. Related subfields of digital humanities have emerged, like software studies, platform studies, and critical code studies. Fields that parallel the digital humanities include new media studies and information science, as well as media theory of composition, game studies (particularly in areas related to digital humanities project design and production), and cultural analytics. Each disciplinary field and each country has its own unique history of digital humanities.[7]
Berry and Fagerjord have suggested that a way to reconceptualise digital humanities could be through a "digital humanities stack". They argue that "this type of diagram is common in computation and computer science to show how technologies are 'stacked' on top of each other in increasing levels of abstraction. Here, [they] use the method in a more illustrative and creative sense of showing the range of activities, practices, skills, technologies and structures that could be said to make up the digital humanities, with the aim of providing a high-level map."[8]Indeed, the "diagram can be read as the bottom levels indicating some of the fundamental elements of the digital humanities stack, such as computational thinking and knowledge representation, and then other elements that later build on these."[9]
In practical terms, a major distinction within digital humanities is the focus on the data being processed. For processing textual data, digital humanities builds on a long and extensive history of digital editions, computational linguistics and natural language processing, and has developed an independent and highly specialized technology stack (largely culminating in the specifications of the Text Encoding Initiative). This part of the field is thus sometimes set apart from digital humanities in general as 'digital philology' or 'computational philology'. For the creation and analysis of digital editions of objects or artifacts, digital philologists have access to digital practices, methods, and technologies such as optical character recognition that provide opportunities to adapt the field to the digital age.[10]
Digital humanities descends from the field of humanities computing, whose origins reach back to the 1940s and 1950s, in the pioneering work of the Jesuit scholar Roberto Busa, which began in 1946,[11] and of the English professor Josephine Miles, beginning in the early 1950s.[12][13][14][15] In collaboration with IBM, Busa and his team created a computer-generated concordance to Thomas Aquinas' writings known as the Index Thomisticus.[3] Busa's works have been collected and translated by Julianne Nyhan and Marco Passarotti.[16] Other scholars began using mainframe computers to automate tasks like word-searching, sorting, and counting, which was much faster than processing information from texts with handwritten or typed index cards.[3] Similar early advances were made by Gerhard Sperl in Austria, using Zuse computers for digital Assyriology.[17] In the decades that followed, archaeologists, classicists, historians, literary scholars, and a broad array of humanities researchers in other disciplines applied emerging computational methods to transform humanities scholarship.[18][19]
As Tara McPherson has pointed out, the digital humanities also inherit practices and perspectives developed through many artistic and theoretical engagements with electronic screen culture beginning in the late 1960s and 1970s. These range from research developed by organizations such as SIGGRAPH to creations by artists such as Charles and Ray Eames and the members of E.A.T. (Experiments in Art and Technology). The Eameses and E.A.T. explored nascent computer culture and intermediality in creative works that dovetailed technological innovation with art.[20]
The first specialized journal in the digital humanities was Computers and the Humanities, which debuted in 1966. The Computer Applications and Quantitative Methods in Archaeology (CAA) association was founded in 1973. The Association for Literary and Linguistic Computing (ALLC) and the Association for Computers and the Humanities (ACH) were then founded in 1977 and 1978, respectively.[3]
Soon there was a need for a standardized protocol for tagging digital texts, and the Text Encoding Initiative (TEI) was developed.[3] The TEI project was launched in 1987 and published the first full version of the TEI Guidelines in May 1994.[14] TEI helped shape the field of electronic textual scholarship and led to Extensible Markup Language (XML), which is a tag scheme for digital editing. Researchers also began experimenting with databases and hypertextual editing, which are structured around links and nodes, as opposed to the standard linear convention of print.[3] In the nineties, major digital text and image archives emerged at centers of humanities computing in the U.S. (e.g. the Women Writers Project, the Rossetti Archive,[21] and The William Blake Archive[22]), which demonstrated the sophistication and robustness of text-encoding for literature.[23] The advent of personal computing and the World Wide Web meant that Digital Humanities work could become less centered on text and more on design. The multimedia nature of the internet has allowed Digital Humanities work to incorporate audio, video, and other components in addition to text.[3]
The terminological change from "humanities computing" to "digital humanities" has been attributed to John Unsworth, Susan Schreibman, and Ray Siemens who, as editors of the anthology A Companion to Digital Humanities (2004), tried to prevent the field from being viewed as "mere digitization".[24] Consequently, the hybrid term has created an overlap between fields like rhetoric and composition, which use "the methods of contemporary humanities in studying digital objects",[24] and digital humanities, which uses "digital technology in studying traditional humanities objects".[24] The use of computational systems and the study of computational media within the humanities, arts and social sciences more generally has been termed the 'computational turn'.[25]
In 2006 the National Endowment for the Humanities (NEH) launched the Digital Humanities Initiative (renamed the Office of Digital Humanities in 2008), which led to widespread adoption of the term "digital humanities" in the United States.[26]
Digital humanities emerged from its former niche status and became "big news"[26] at the 2009 MLA convention in Philadelphia, where digital humanists made "some of the liveliest and most visible contributions"[27] and had their field hailed as "the first 'next big thing' in a long time."[28]
Although digital humanities projects and initiatives are diverse, they often reflect common values and methods.[29]These can help in understanding this hard-to-define field.
Values[29]
Methods[29]
In keeping with the value of being open and accessible, many digital humanities projects and journals are open access and/or under Creative Commons licensing, showing the field's "commitment to open standards and open source."[30] Open access is designed to enable anyone with an internet-enabled device and internet connection to view a website or read an article without having to pay, as well as to share content with the appropriate permissions.
Digital humanities scholars use computational methods either to answer existing research questions or to challenge existing theoretical paradigms, generating new questions and pioneering new approaches. One goal is to systematically integrate computer technology into the activities of humanities scholars,[31] as is done in contemporary empirical social sciences. Yet despite the significant trend in digital humanities towards networked and multimodal forms of knowledge, a substantial amount of digital humanities focuses on documents and text in ways that differentiate the field's work from digital research in media studies, information studies, communication studies, and sociology. Another goal of digital humanities is to create scholarship that transcends textual sources. This includes the integration of multimedia, metadata, and dynamic environments (see The Valley of the Shadow project at the University of Virginia, the Vectors Journal of Culture and Technology in a Dynamic Vernacular at the University of Southern California, or the Digital Pioneers projects at Harvard[32]). A growing number of researchers in digital humanities are using computational methods for the analysis of large cultural data sets such as the Google Books corpus.[33] Examples of such projects were highlighted by the Humanities High Performance Computing competition sponsored by the Office of Digital Humanities in 2008,[34] and also by the Digging Into Data challenge organized in 2009[35] and 2011[36] by NEH in collaboration with NSF,[37] and in partnership with JISC in the UK and SSHRC in Canada.[38] In addition to books, historical newspapers can also be analyzed with big data methods.
The analysis of vast quantities of historical newspaper content has shown how periodic structures can be automatically discovered, and similar analyses have been performed on social media.[39][40] As part of the big data revolution, gender bias, readability, content similarity, reader preferences, and even mood have been analyzed using text mining methods over millions of documents[41][42][43][44][45] and over historical documents written in literary Chinese.[46]
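The automatic discovery of periodic structures mentioned above can be illustrated with a small sketch. This is not the method used in the cited studies; it is a generic, hypothetical example that estimates the dominant period of a term-frequency time series with a discrete Fourier transform, using a synthetic monthly series in place of real newspaper data.

```python
import numpy as np

def dominant_period(series):
    """Estimate the dominant period of a frequency time series by
    locating the strongest non-zero bin of its Fourier spectrum."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                    # remove the constant baseline
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))     # cycles per sample
    k = spectrum[1:].argmax() + 1       # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic stand-in for a word's monthly frequency over ten years:
# a yearly cycle (period 12 months) around a constant level.
months = np.arange(120)
series = 10 + 3 * np.sin(2 * np.pi * months / 12)
print(dominant_period(series))  # ~12: the yearly cycle is recovered
```

Real corpora are far noisier, so published analyses typically combine such spectral estimates with smoothing and significance testing, but the core idea of reading periodicity off a frequency spectrum is the same.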
Digital humanities is also involved in the creation of software, providing "environments and tools for producing, curating, and interacting with knowledge that is 'born digital' and lives in various digital contexts."[47]In this context, the field is sometimes known as computational humanities.
Digital humanities scholars use a variety of digital tools for their research, which may take place in an environment as small as a mobile device or as large as a virtual reality lab. Environments for "creating, publishing and working with digital scholarship include everything from personal equipment to institutes and software to cyberspace."[49] Some scholars use advanced programming languages and databases, while others use less complex tools, depending on their needs. DiRT (Digital Research Tools Directory[50]) offers a registry of digital research tools for scholars. TAPoR (Text Analysis Portal for Research[51]) is a gateway to text analysis and retrieval tools. An accessible, free example of an online textual analysis program is Voyant Tools,[52] which only requires the user to copy and paste either a body of text or a URL and then click the 'reveal' button to run the program. There is also an online list[53] of online or downloadable digital humanities tools that are largely free, aimed at helping students and others who lack access to funding or institutional servers. Free, open source web publishing platforms like WordPress and Omeka are also popular tools.
Digital humanities projects are more likely than traditional humanities work to involve a team or a lab, which may be composed of faculty, staff, graduate or undergraduate students, information technology specialists, and partners in galleries, libraries, archives, and museums. Credit and authorship are often given to multiple people to reflect this collaborative nature, which is different from the sole authorship model in the traditional humanities (and more like the natural sciences).[3]
There are thousands of digital humanities projects, ranging from small-scale ones with limited or no funding to large-scale ones with multi-year financial support. Some are continually updated while others may not be, owing to loss of support or interest, though they may still remain online in either a beta version or a finished form. The following are a few examples of the variety of projects in the field:[54]
The Women Writers Project (begun in 1988) is a long-term research project to make pre-Victorian women writers more accessible through an electronic collection of rare texts. The Walt Whitman Archive[55] (begun in the 1990s) sought to create a hypertext and scholarly edition of Whitman's works and now includes photographs, sounds, and the only comprehensive current bibliography of Whitman criticism. The Emily Dickinson Archive (begun in 2013)[56] is a collection of high-resolution images of Dickinson's poetry manuscripts as well as a searchable lexicon of over 9,000 words that appear in the poems.
The Slave Societies Digital Archive[58] (formerly Ecclesiastical and Secular Sources for Slave Societies), directed by Jane Landers[59] and hosted at Vanderbilt University, preserves endangered ecclesiastical and secular documents related to Africans and African-descended peoples in slave societies. This digital archive currently holds 500,000 unique images, dating from the 16th to the 20th centuries, and documents the history of between 6 and 8 million individuals. They are the most extensive serial records for the history of Africans in the Atlantic world and also include valuable information on the indigenous, European, and Asian populations who lived alongside them. Another example of a digital humanities project focused on the Americas is at the National Autonomous University of Mexico, which includes the digitization of 17th-century manuscripts, an electronic corpus of Mexican history from the 16th to the 19th century, and the visualization of pre-Hispanic archaeological sites in 3-D.[60] A rare example of a digital humanities project focused on the cultural heritage of Africa is the Princeton Ethiopian, Eritrean, and Egyptian Miracles of Mary project, which documents African medieval stories, paintings, and manuscripts about the Virgin Mary from the 1300s into the 1900s.[61][62]
Librarians and archivists play an important part in digital humanities projects because of the recent expansion of their role to cover digital curation, which is critical to the preservation of, promotion of, and access to digital collections, as well as to the application of scholarly orientation to digital humanities projects.[63] A specific example involves initiatives where archivists help scholars and academics build their projects through their experience in evaluating, implementing, and customizing metadata schemas for library collections.[64]
"Cultural analytics" refers to the use of computational methods for the exploration and analysis of large visual collections and contemporary digital media. The concept was developed in 2005 by Lev Manovich, who then established the Cultural Analytics Lab in 2007 at the Qualcomm Institute at the California Institute for Telecommunications and Information Technology (Calit2). The lab has been using methods from the field of computer science called computer vision to analyze many types of both historical and contemporary visual media—for example, all covers of Time magazine published between 1923 and 2009,[65] 20,000 historical art photographs from the collection of the Museum of Modern Art (MoMA) in New York,[66] one million pages from manga books,[67] and 16 million images shared on Instagram in 17 global cities.[68] Cultural analytics also includes using methods from media design and data visualization to create interactive visual interfaces for the exploration of large visual collections, e.g., Selfiecity and On Broadway.
Cultural analytics research is also addressing a number of theoretical questions. How can we "observe" giant cultural universes of both user-generated and professional media content created today, without reducing them to averages, outliers, or pre-existing categories? How can work with large cultural data help us question our stereotypes and assumptions about cultures? What new theoretical cultural concepts and models are required for studying global digital culture with its new mega-scale, speed, and connectivity?[69]
The term "cultural analytics" (or "culture analytics") is now used by many other researchers, as exemplified by two academic symposiums,[70]a four-month long research program at UCLA that brought together 120 leading researchers from university and industry labs,[71]an academic peer-reviewed journal, Journal of Cultural Analytics: CA, established in 2016,[72]and academic job listings.
WordHoard (begun in 2004) is a free application that enables scholarly but non-technical users to read and analyze, in new ways, deeply-tagged texts, including the canon of Early Greek epic, Chaucer, Shakespeare, and Spenser. The Republic of Letters (begun in 2008)[73]seeks to visualize the social network of Enlightenment writers through an interactive map and visualization tools. Network analysis and data visualization are also used for reflections on the field itself – researchers may produce network maps of social media interactions or infographics from data on digital humanities scholars and projects.
Document in Context of its Time (DICT) analysis style[75]and an online demo tool allow users to discover whether the vocabulary used by the author of an input text was frequent at the time of the text's creation, whether the author used anachronisms or neologisms, and to detect terms in the text that underwent considerable semantic change.
Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts.[76][77]Researchers data mine large digital archives to investigate cultural phenomena reflected in language and word usage.[78]The term is an American neologism first described in a 2010 Science article called Quantitative Analysis of Culture Using Millions of Digitized Books, co-authored by Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden.[79]
A 2017 study[45]published in the Proceedings of the National Academy of Sciences of the United States of America compared the trajectories of n-grams over time in the digitised books of the 2010 Science article[79]with those found in a large corpus of regional newspapers from the United Kingdom spanning 150 years. The study went on to use more advanced natural language processing techniques to discover macroscopic trends in history and culture, including gender bias, geographical focus, technology, and politics, along with accurate dates for specific events.
The applications of digital humanities may also be combined with non-humanities subject areas, such as the pure sciences, agriculture, and management, to produce practical solutions to issues in industry and society.[80]
The Stanford Encyclopedia of Philosophy (begun in 1995) is a dynamic reference work of terms, concepts, and people from philosophy maintained by scholars in the field. MLA Commons[81]offers an open peer-review site (where anyone can comment) for their ongoing curated collection of teaching artifacts in Digital Pedagogy in the Humanities: Concepts, Models, and Experiments (2016).[82]The Debates in the Digital Humanities platform contains volumes of the open-access book of the same title (2012 and 2016 editions) and allows readers to interact with material by marking sentences as interesting or adding terms to a crowdsourced index.
Some research institutions work with the Wikimedia Foundation or volunteers of the community, for example, to make freely licensed media files available via Wikimedia Commons or to link or load data sets with Wikidata. Text analysis has been performed on the contribution history of articles on Wikipedia or its sister projects.[83]
The South African Centre for Digital Language Resources (SADiLaR) was set up at a time when a global definition of Open Education Resources (OER) was being drafted and accepted by UNESCO.[84]SADiLaR saw this as an opportunity to stimulate activism and research around the use and creation of OERs for digital humanities. It initiated and launched the Digital Humanities OER (DH-OER) project[85]to raise consciousness about the costs of materials, foster the adoption of open principles and practices, and support the growth of open education resources and digital humanities in South African higher education institutions. DH-OER began with 26 projects and an introduction to openness in April 2022.[86]It concluded in November 2023, when 16 projects showcased their efforts in a public event.[87]
In 2012, Matthew K. Gold identified a range of perceived criticisms of the field of digital humanities: "a lack of attention to issues of race, class, gender, and sexuality; a preference for research-driven projects over pedagogical ones; an absence of political commitment; an inadequate level of diversity among its practitioners; an inability to address texts under copyright; and an institutional concentration in well-funded research universities".[88]Similarly, Berry and Fagerjord have argued that the digital humanities should "focus on the need to think critically about the implications of computational imaginaries, and raise some questions in this regard. This is also to foreground the importance of the politics and norms that are embedded in digital technology, algorithms and software. We need to explore how to negotiate between close and distant readings of texts and how micro-analysis and macro-analysis can be usefully reconciled in humanist work."[89]Alan Liu has argued, "while digital humanists develop tools, data, and metadata critically, therefore (e.g., debating the 'ordered hierarchy of content objects' principle; disputing whether computation is best used for truth finding or, as Lisa Samuels and Jerome McGann put it, 'deformance'; and so on) rarely do they extend their critique to the full register of society, economics, politics, or culture."[90]Some of these concerns have given rise to the emergent subfield of Critical Digital Humanities (CDH):
Some key questions include: how do we make the invisible become visible in the study of software? How is knowledge transformed when mediated through code and software? What are the critical approaches to Big Data, visualization, digital methods, etc.? How does computation create new disciplinary boundaries and gate-keeping functions? What are the new hegemonic representations of the digital – 'geons', 'pixels', 'waves', visualization, visual rhetorics, etc.? How do media changes create epistemic changes, and how can we look behind the 'screen essentialism' of computational interfaces? Here we might also reflect on the way in which the practice of making-visible also entails the making-invisible – computation involves making choices about what is to be captured.[89]
Lauren F. Klein and Gold note that many appearances of the digital humanities in public media are often in a critical fashion. Armand Leroi, writing in The New York Times, discusses the contrast between the algorithmic analysis of themes in literary texts and the work of Harold Bloom, who qualitatively and phenomenologically analyzes the themes of literature over time. Leroi questions whether or not the digital humanities can provide a truly robust analysis of literature and social phenomena or offer a novel alternative perspective on them. The literary theorist Stanley Fish claims that the digital humanities pursue a revolutionary agenda and thereby undermine the conventional standards of "pre-eminence, authority and disciplinary power".[91]However, digital humanities scholars note that "Digital Humanities is an extension of traditional knowledge skills and methods, not a replacement for them. Its distinctive contributions do not obliterate the insights of the past, but add and supplement the humanities' long-standing commitment to scholarly interpretation, informed research, structured argument, and dialogue within communities of practice".[3]
Some have hailed the digital humanities as a solution to the apparent problems within the humanities, namely a decline in funding, a repeat of debates, and a fading set of theoretical claims and methodological arguments.[92]Adam Kirsch, writing in the New Republic, calls this the "False Promise" of the digital humanities.[93]While the rest of the humanities and many social science departments are seeing a decline in funding or prestige, the digital humanities has been seeing increasing funding and prestige. Burdened with the problems of novelty, the digital humanities is discussed as either a revolutionary alternative to the humanities as it is usually conceived or as simply new wine in old bottles. Kirsch believes that digital humanities practitioners suffer from being marketers rather than scholars, attesting to the grand capacity of their research more than actually performing new analysis, and, when they do, performing only trivial parlor tricks of research. This form of criticism has been repeated by others, such as Carl Straumsheim, writing in Inside Higher Education, who calls it a "Digital Humanities Bubble".[94]Later in the same publication, Straumsheim alleges that the digital humanities is a 'Corporatist Restructuring' of the Humanities.[95]Some see the alliance of the digital humanities with business as a positive turn that causes the business world to pay more attention, thus bringing needed funding and attention to the humanities.[96]If it were not burdened by the title of digital humanities, it could escape the allegations that it is elitist and unfairly funded.[97]
There has also been critique of the use of digital humanities tools by scholars who do not fully understand what happens to the data they input and who place too much trust in the "black box" of software that cannot be sufficiently examined for errors.[98]Johanna Drucker, a professor in the UCLA Department of Information Studies, has criticized the "epistemological fallacies" prevalent in popular visualization tools and technologies (such as Google's n-gram graph) used by digital humanities scholars and the general public, calling some network diagramming and topic modeling tools "just too crude for humanistic work."[99]The lack of transparency in these programs obscures the subjective nature of the data and its processing, she argues, as these programs "generate standard diagrams based on conventional algorithms for screen display ... mak[ing] it very difficult for the semantics of the data processing to be made evident."[99]Similar problems can be seen at a lower level, with databases used for digital humanities analysis replicating the biases of the analogue systems of data.[100]As, essentially, "every database is a narrative",[100]visualisations or diagrams often obscure the underlying structures or omissions of data without acknowledging that they are incomplete or present only a particular angle.
There has also been some recent controversy among practitioners of digital humanities around the role that race and/or identity politics plays. Tara McPherson attributes some of the lack of racial diversity in digital humanities to the modality of UNIX and computers themselves.[101]An open thread on DHpoco.org recently garnered well over 100 comments on the issue of race in digital humanities, with scholars arguing about the amount that racial (and other) biases affect the tools and texts available for digital humanities research.[102]McPherson posits that there needs to be an understanding and theorizing of the implications of digital technology and race, even when the subject for analysis appears not to be about race.
Amy E. Earhart criticizes what has become the new digital humanities "canon" in the shift from websites using simple HTML to the usage of the TEI and visuals in textual recovery projects.[103]Works that have been previously lost or excluded were afforded a new home on the internet, but much of the same marginalizing practices found in traditional humanities also took place digitally. According to Earhart, there is a "need to examine the canon that we, as digital humanists, are constructing, a canon that skews toward traditional texts and excludes crucial work by women, people of color, and the LGBTQ community."[103]
Practitioners in digital humanities are also failing to meet the needs of users with disabilities. George H. Williams argues that universal design is imperative for practitioners to increase usability because "many of the otherwise most valuable digital resources are useless for people who are—for example—deaf or hard of hearing, as well as for people who are blind, have low vision, or have difficulty distinguishing particular colors."[104]In order to provide accessibility and productive universal design successfully, it is important to understand why and how users with disabilities are using the digital resources, while remembering that all users approach their informational needs differently.[104]
Digital humanities have been criticized not only for ignoring traditional questions of lineage and history in the humanities, but for lacking the fundamental cultural criticism that defines the humanities. However, it remains to be seen whether or not the humanities have to be tied to cultural criticism, per se, in order to be the humanities.[90][19]The sciences might imagine the digital humanities as a welcome improvement over the non-quantitative methods of the humanities and social sciences.[105][106]
As the field matures, there has been a recognition that the standard model of academic peer review of work may not be adequate for digital humanities projects, which often involve website components, databases, and other non-print objects. Evaluation of quality and impact thus requires a combination of old and new methods of peer review.[3]One response has been the creation of the DHCommons Journal, which accepts non-traditional submissions, especially mid-stage digital projects, and provides an innovative model of peer review more suited to the multimedia, transdisciplinary, and milestone-driven nature of digital humanities projects. Other professional humanities organizations, such as the American Historical Association and the Modern Language Association, have developed guidelines for evaluating academic digital scholarship.[107][108]
The 2012 edition of Debates in the Digital Humanities recognized the fact that pedagogy was the "neglected 'stepchild' of DH" and included an entire section on teaching the digital humanities.[5]Part of the reason is that grants in the humanities are geared more toward research with quantifiable results rather than teaching innovations, which are harder to measure.[5]In recognition of a need for more scholarship in the area of teaching, the edited volume Digital Humanities Pedagogy was published and offered case studies and strategies addressing how to teach digital humanities methods in various disciplines.
|
https://en.wikipedia.org/wiki/Digital_humanities
|
Dynamic network analysis (DNA) is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory. Dynamic networks are a function from time (modeled as a subset of the real numbers) to a set of graphs; for each time point there is a graph. This is akin to the definition of dynamical systems, in which the function maps time to an ambient space; here, in place of an ambient space, each time point is mapped to the relationships between pairs of vertices.[1]
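The definition above—a function from time points to graphs—can be sketched directly in code. The following minimal Python fragment (all names and data are illustrative, not from any DNA toolkit) stores one snapshot graph per time point and answers per-snapshot queries:

```python
# A dynamic network as a function from time points to graphs.
# Each graph is represented simply as a set of undirected edges
# (frozensets of two vertices).

dynamic_network = {
    0: {frozenset({"a", "b"})},                         # at t=0: edge a-b
    1: {frozenset({"a", "b"}), frozenset({"b", "c"})},  # at t=1: b-c appears
    2: {frozenset({"b", "c"})},                         # at t=2: a-b dissolves
}

def graph_at(t):
    """Return the snapshot graph (edge set) at time t."""
    return dynamic_network[t]

def degree(t, v):
    """Degree of vertex v in the snapshot at time t."""
    return sum(1 for e in graph_at(t) if v in e)

print(degree(1, "b"))  # b touches two edges at t=1
```

Real DNA tools add node and link types and uncertainty on top of this basic time-indexed structure, but the underlying object is the same mapping from time to graphs.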
There are two aspects of this field. The first is the statistical analysis of DNA data. The second is the utilization of simulation to address issues of network dynamics. DNA networks vary from traditional social networks in that they are larger, dynamic, multi-mode, multi-plex networks, and may contain varying levels of uncertainty. The main difference between DNA and SNA is that DNA takes into account how interactions of social features condition the structure and behavior of networks. DNA is tied to temporal analysis, but temporal analysis is not necessarily tied to DNA, as changes in networks sometimes result from external factors which are independent of the social features found in networks. One of the earliest and most notable cases of the use of DNA is Sampson's monastery study, in which he took snapshots of the same network at different intervals and observed and analyzed the evolution of the network.[2]
DNA statistical tools are generally optimized for large-scale networks and admit the analysis of multiple networks simultaneously in which there are multiple types of nodes (multi-node) and multiple types of links (multi-plex). Multi-node, multi-plex networks are generally referred to as meta-networks or high-dimensional networks. In contrast, SNA statistical tools focus on single or, at most, two-mode data and facilitate the analysis of only one type of link at a time.
DNA statistical tools tend to provide more measures to the user, because they have measures that use data drawn from multiple networks simultaneously. Latent space models (Sarkar and Moore, 2005)[3]and agent-based simulation are often used to examine dynamic social networks (Carley et al., 2009).[4]From a computer simulation perspective, nodes in DNA are like atoms in quantum theory: nodes can be, though need not be, treated as probabilistic. Whereas nodes in a traditional SNA model are static, nodes in a DNA model have the ability to learn. Properties change over time; nodes can adapt: a company's employees can learn new skills and increase their value to the network; or, capture one terrorist and three more are forced to improvise. Change propagates from one node to the next, and so on. DNA adds the element of a network's evolution and considers the circumstances under which change is likely to occur.
There are three main features to dynamic network analysis that distinguish it from standard social network analysis. First, rather than just using social networks, DNA looks at meta-networks. Second, agent-based modeling and other forms of simulations are often used to explore how networks evolve and adapt as well as the impact of interventions on those networks. Third, the links in the network are not binary; in fact, in many cases they represent the probability that there is a link.
Complex information about object relationships can be effectively condensed into low-dimensional embeddings in a latent space.[5]Dynamic systems, unlike static ones, involve temporal changes. Differences in learned representations over time in a dynamic system can arise either from actual changes or from arbitrary alterations that do not affect the metrics in the latent space, with the former reflecting the system's stability and the latter linked to the alignment of embeddings.[6]
In essence, the stability of the system defines its dynamics, while misalignment signifies irrelevant changes in the latent space. Dynamic embeddings are considered aligned when variations between embeddings at different times accurately represent the system's actual changes, not meaningless alterations in the latent space. The matter of stability and alignment of dynamic embeddings holds significant importance in various tasks reliant on temporal changes within the latent space. These tasks encompass future metadata prediction, temporal evolution, dynamic visualization, and obtaining average embeddings, among others.
A meta-network is a multi-mode, multi-link, multi-level network. Multi-mode means that there are many types of nodes, e.g., people and locations. Multi-link means that there are many types of links, e.g., friendship and advice. Multi-level means that some nodes may be members of other nodes, as in a network composed of people and organizations where one of the link types records who is a member of which organization.
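The three properties can be made concrete in a small sketch. The following Python fragment (all node names, types, and link types are illustrative; this is not the ORA data model) encodes a meta-network with typed nodes, typed links, and one membership level:

```python
# A toy meta-network: multi-mode (node types), multi-link (link types),
# multi-level (people can be members of organizations).
nodes = {
    "alice": "person", "bob": "person",
    "acme": "organization", "hq": "location",
}
links = [
    ("alice", "bob",  "friendship"),
    ("alice", "bob",  "advice"),      # a second link type between the same pair
    ("alice", "acme", "membership"),  # cross-mode, multi-level link
    ("acme",  "hq",   "located-at"),
]

def neighbors(node, link_type):
    """All nodes tied to `node` by links of the given type."""
    out = set()
    for a, b, t in links:
        if t == link_type and node in (a, b):
            out.add(b if a == node else a)
    return out

print(neighbors("alice", "membership"))  # {'acme'}
```

Querying by link type is what distinguishes this from a plain graph: the same pair of nodes can be connected by several links of different types, and each type can be analyzed separately or jointly.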
While different researchers use different modes, common modes reflect who, what, when, where, why and how. A simple example of a meta-network is the PCANS formulation with people, tasks, and resources.[7]A more detailed formulation considers people, tasks, resources, knowledge, and organizations.[8]The ORA tool was developed to support meta-network analysis.[9]
|
https://en.wikipedia.org/wiki/Dynamic_network_analysis
|
The friendship paradox is the phenomenon first observed by the sociologist Scott L. Feld in 1991 that, on average, an individual's friends have more friends than that individual does.[1]It can be explained as a form of sampling bias in which people with more friends are more likely to be in one's own friend group. In other words, one is less likely to be friends with someone who has very few friends. In contradiction to this, most people believe that they have more friends than their friends have.[2][3][4][5]
The same observation can be applied more generally to social networks defined by relations other than friendship: for instance, most people's sexual partners have had (on average) a greater number of sexual partners than they have.[6][7]
The friendship paradox is an example of how network structure can significantly distort an individual's local observations.[8][9]
In spite of its apparently paradoxical nature, the phenomenon is real, and can be explained as a consequence of the general mathematical properties of social networks. The mathematics behind this are directly related to the arithmetic-geometric mean inequality and the Cauchy–Schwarz inequality.[10]
Formally, Feld assumes that a social network is represented by an undirected graph G = (V, E), where the set V of vertices corresponds to the people in the social network, and the set E of edges corresponds to the friendship relation between pairs of people. That is, he assumes that friendship is a symmetric relation: if x is a friend of y, then y is a friend of x. The friendship between x and y is therefore modeled by the edge {x, y}, and the number of friends an individual has corresponds to a vertex's degree. The average number of friends of a person in the social network is therefore given by the average of the degrees of the vertices in the graph. That is, if vertex v has d(v) edges touching it (representing a person who has d(v) friends), then the average number μ of friends of a random person in the graph is

μ = (1/|V|) Σ_{v∈V} d(v) = 2|E|/|V|.
The average number of friends that a typical friend has can be modeled by choosing a random person (who has at least one friend), and then calculating how many friends their friends have on average. This amounts to choosing, uniformly at random, an edge of the graph (representing a pair of friends) and an endpoint of that edge (one of the friends), and again calculating the degree of the selected endpoint. The probability that a certain vertex v is chosen is

(d(v)/|E|) · (1/2) = d(v)/(2|E|).
The first factor corresponds to how likely it is that the chosen edge contains the vertex, which increases when the vertex has more friends. The halving factor simply comes from the fact that each edge has two vertices. So the expected value of the number of friends of a (randomly chosen) friend is

Σ_{v} (d(v)/(2|E|)) · d(v) = Σ_{v} d(v)²/(2|E|).
We know from the definition of variance that

σ² = (1/|V|) Σ_{v} d(v)² − μ², i.e., (1/|V|) Σ_{v} d(v)² = μ² + σ²,
where σ² is the variance of the degrees in the graph. This allows us to compute the desired expected value as

Σ_{v} d(v)²/(2|E|) = (μ² + σ²)/μ = μ + σ²/μ,

using the fact that 2|E|/|V| = μ.
For a graph that has vertices of varying degrees (as is typical for social networks), σ² is strictly positive, which implies that the average degree of a friend is strictly greater than the average degree of a random node.
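This conclusion is easy to check numerically. The following sketch (an illustrative example, not code from Feld's paper) computes, on a small graph, the plain average degree μ and the expected degree of a randomly chosen friend, Σ d(v)²/(2|E|), and confirms the latter equals μ + σ²/μ:

```python
from collections import Counter

# Small undirected graph: vertex 0 is a hub.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
n = 4

deg = Counter()  # Counter returns 0 for vertices not seen
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

mu = sum(deg[v] for v in range(n)) / n                  # average degree
var = sum(deg[v] ** 2 for v in range(n)) / n - mu ** 2  # degree variance

# A "random friend" is a uniformly random endpoint of a uniformly random
# edge, so its expected degree is sum(d(v)^2) / (2|E|).
friend_avg = sum(deg[v] ** 2 for v in range(n)) / (2 * len(edges))

assert abs(friend_avg - (mu + var / mu)) < 1e-12
print(mu, friend_avg)  # prints 2.0 2.25 — the friend average exceeds mu
```

Here the degrees are 3, 2, 2, 1, so μ = 2 and σ² = 0.5, giving a friend average of 2 + 0.5/2 = 2.25.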
Another way of understanding how the first term arises is as follows. For each friendship (u, v), a node u mentions that v is a friend, and v has d(v) friends. There are d(v) such friends who mention this; hence the d(v)² term. We add this over all friendships in the network, from both u's and v's perspectives, which gives the numerator. The denominator is the total number of such friendship mentions, which is twice the total number of edges in the network (one from u's perspective and the other from v's).
After this analysis, Feld goes on to make some more qualitative assumptions about the statistical correlation between the number of friends that two friends have, based on theories of social networks such as assortative mixing, and he analyzes what these assumptions imply about the number of people whose friends have more friends than they do. Based on this analysis, he concludes that in real social networks, most people are likely to have fewer friends than the average of their friends' numbers of friends. However, this conclusion is not a mathematical certainty; there exist undirected graphs (such as the graph formed by removing a single edge from a large complete graph) that are unlikely to arise as social networks but in which most vertices have higher degree than the average of their neighbors' degrees.
The friendship paradox may be restated in graph theory terms as "the average degree of a randomly selected node in a network is less than the average degree of neighbors of a randomly selected node", but this leaves unspecified the exact mechanism of averaging (i.e., macro vs. micro averaging). Let G = (V, E) be an undirected graph with |V| = N and |E| = M, having no isolated nodes. Let the set of neighbors of node u be denoted nbr(u). The average degree is then μ = (1/N) Σ_{u∈V} |nbr(u)| = 2M/N ≥ 1. Let the number of "friends of friends" of node u be denoted FF(u) = Σ_{v∈nbr(u)} |nbr(v)|. Note that this can count 2-hop neighbors multiple times, but so does Feld's analysis. We have FF(u) ≥ |nbr(u)| ≥ 1. Feld considered the following "micro average" quantity:

MicroAvg = (1/(2M)) Σ_{u∈V} FF(u).
However, there is also the (equally legitimate) "macro average" quantity, given by

MacroAvg = (1/N) Σ_{u∈V} FF(u)/|nbr(u)|.
The computation of MacroAvg can be expressed as the following pseudocode.
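A plain-Python version of that computation (a sketch; the adjacency-list input format is an illustrative assumption) might look like:

```python
def macro_avg(adj):
    """MacroAvg: for each node, average its friends' friend-counts,
    then average those per-node means over all nodes.
    `adj` maps each node to the set of its neighbors (no isolated nodes)."""
    total = 0.0
    for u, neighbors in adj.items():
        ff = sum(len(adj[v]) for v in neighbors)  # FF(u): friends of friends
        total += ff / len(neighbors)              # mean friend-degree of u
    return total / len(adj)

# Path graph 0-1-2: degrees are 1, 2, 1, so mu = 4/3.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
mu = sum(len(nb) for nb in adj.values()) / len(adj)
print(macro_avg(adj) >= mu)  # prints True, as the inequality guarantees
```

On the path graph, the end vertices each see one friend of degree 2 and the middle vertex sees two friends of degree 1, so MacroAvg = (2 + 1 + 2)/3 = 5/3 ≥ 4/3 = μ.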
Each edge {u, v} contributes to the sum defining MacroAvg the quantity |nbr(v)|/|nbr(u)| + |nbr(u)|/|nbr(v)| ≥ 2, because min_{a,b>0} (a/b + b/a) = 2. We thus get

MacroAvg = (1/N) Σ_{u∈V} FF(u)/|nbr(u)| ≥ (1/N) · 2M = μ.
Thus, we have both MicroAvg ≥ μ and MacroAvg ≥ μ, but no inequality holds between them.[11]
In a 2023 paper, a parallel paradox for negative, antagonistic, or animosity ties, termed the "enmity paradox", was defined and demonstrated by Ghasemian and Christakis.[12]In brief, one's enemies have more enemies than one does, too. This paper also documented diverse phenomena in "mixed worlds" of both hostile and friendly ties.
The analysis of the friendship paradox implies that the friends of randomly selected individuals are likely to have higher than average centrality. This observation has been used as a way to forecast and slow the course of epidemics, by using this random selection process to choose individuals to immunize or monitor for infection while avoiding the need for a complex computation of the centrality of all nodes in the network.[13][14][15]In a similar manner, in polling and election forecasting, the friendship paradox has been exploited in order to reach and query well-connected individuals who may have knowledge about how numerous other individuals are going to vote.[16]However, when utilized in such contexts, the friendship paradox inevitably introduces bias by over-representing individuals with many friends, potentially skewing resulting estimates.[17][18]
A study in 2010 by Christakis and Fowler showed that flu outbreaks can be detected almost two weeks before traditional surveillance measures would do so by using the friendship paradox in monitoring the infection in a social network.[19]They found that using the friendship paradox to analyze the health of central friends is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly."[20]This extends to the spread of ideas as well, with evidence that the friendship paradox can be used to track and predict the spread of ideas and misinformation through networks.[21][13][22]This observation has been explained with the argument that individuals with more social connections may be the driving forces behind the spread of these ideas and beliefs, and as such can be used as early-warning signals.[18]
Friendship paradox based sampling (i.e., sampling random friends) has been theoretically and empirically shown to outperform classical uniform sampling for the purpose of estimating the power-law degree distributions of scale-free networks.[23][24]The reason is that sampling the network uniformly will not collect enough samples from the characteristic heavy tail part of the power-law degree distribution to properly estimate it. However, sampling random friends incorporates more nodes from the tail of the degree distribution (i.e., more high degree nodes) into the sample. Hence, friendship paradox based sampling captures the characteristic heavy tail of a power-law degree distribution more accurately and reduces the bias and variance of the estimation.[24]
The "generalized friendship paradox" states that the friendship paradox applies to other characteristics as well. For example, one's co-authors are on average likely to be more prominent, with more publications, more citations and more collaborators,[25][26][27]or one's followers on Twitter have more followers.[28]The same effect has also been demonstrated for Subjective Well-Being by Bollen et al. (2017),[29]who used a large-scale Twitter network and longitudinal data on subjective well-being for each individual in the network to demonstrate that both a Friendship and a "happiness" paradox can occur in online social networks.
The friendship paradox has also been used as a means to identify structurally influential nodes within social networks, so as to magnify the social contagion of diverse practices relevant to human welfare and public health. This has been shown to be possible in several large-scale randomized controlled field trials conducted by Christakis et al., with respect to the adoption of multivitamins[30]or maternal and child health practices[31][32]in Honduras, or of iron-fortified salt in India.[33]This technique is valuable because, by exploiting the friendship paradox, one can identify such influential nodes without the expense and delay of actually mapping the whole network.
|
https://en.wikipedia.org/wiki/Friendship_paradox
|
Individual human mobility is the study of how individual humans move within a network or system.[1]The concept has been studied in a number of fields originating in the study of demographics. Understanding human mobility has many applications in diverse areas, including the spread of diseases,[2][3]mobile viruses,[4]city planning,[5][6][7]traffic engineering,[8][9]financial market forecasting,[10]and nowcasting of economic well-being.[11][12]
In recent years, there has been a surge in large data sets available on human movements. These data sets are usually obtained from cell phone or GPS data, with varying degrees of accuracy. For example, cell phone data is usually recorded whenever a call or a text message has been made or received by the user, and contains the location of the tower that the phone connected to as well as the time stamp.[13]In urban areas, the user and the telecommunication tower might be only a few hundred meters away from each other, while in rural areas this distance might well be in the region of a few kilometers. Therefore, there is a varying degree of accuracy when it comes to locating a person using cell phone data. These datasets are anonymized by the phone companies so as to hide and protect the identity of actual users. As examples of its usage, researchers[13]used the trajectories of 100,000 cell phone users within a period of six months, while at a much larger scale[14]trajectories of three million cell phone users were analyzed.
GPS data are usually much more accurate, even though they are usually, because of privacy concerns, much harder to acquire. Massive amounts of GPS data describing human mobility are produced, for example, by on-board GPS devices on private vehicles.[15][16] The GPS device automatically turns on when the vehicle starts, and the sequence of GPS points the device produces every few seconds forms a detailed mobility trajectory of the vehicle. Some recent scientific studies have compared the mobility patterns that emerge from mobile phone data with those that emerge from GPS data.[15][16][17]
Researchers have been able to extract very detailed information about the people whose data are made available to the public. This has sparked a great amount of concern about privacy issues. As an example of the liabilities that can arise, New York City released 173 million individual taxi trips. City officials used a very weak cryptography algorithm to anonymize the license number and medallion number, which is an alphanumeric code assigned to each taxi cab.[18] This made it possible for hackers to completely de-anonymize the dataset, and some were even able to extract detailed information about specific passengers and celebrities, including their origin and destination and how much they tipped.[18][19]
At the large scale, when the behaviour is modelled over a period of relatively long duration (e.g. more than one day), human mobility can be described by three major components:
Brockmann,[20] by analysing banknotes, found that the probability of travel distance follows a scale-free random walk known as a Lévy flight of the form P(r)∼r−(1+β){\displaystyle P(r)\ \sim r^{-(1+\beta )}} where β=0.6{\displaystyle \beta =0.6}. This was later confirmed by two studies that used cell phone data[13] and GPS data to track users.[15] The implication of this model is that, as opposed to other more traditional forms of random walks such as Brownian motion, human trips tend to be mostly of short distances, with a few long-distance ones. In Brownian motion, the distribution of trip distances is governed by a bell-shaped curve, which means that the next trip is of a roughly predictable size, the average, whereas in a Lévy flight it might be an order of magnitude larger than the average.
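The heavy tail of this distribution is easy to see numerically. Below is an illustrative sketch (not code from the cited studies): trip distances are drawn from P(r) ∝ r^(-(1+β)) with β = 0.6 by inverse-transform sampling, using a hypothetical minimum trip length r_min = 1.

```python
import random

def levy_trip(u, r_min=1.0, beta=0.6):
    """Inverse-CDF sample of P(r) ~ r**-(1 + beta) for r >= r_min,
    with beta = 0.6 as reported in the text (r_min is a hypothetical cutoff)."""
    return r_min * (1.0 - u) ** (-1.0 / beta)

random.seed(0)
trips = sorted(levy_trip(random.random()) for _ in range(100_000))
median, longest = trips[len(trips) // 2], trips[-1]
# Heavy tail: the single longest trip dwarfs the typical (median) one,
# unlike the bell-shaped step distribution of Brownian motion.
print(f"median = {median:.2f}, longest = {longest:.3g}")
```

Most draws stay near r_min, but occasional draws are orders of magnitude larger, which is precisely the mostly-short-trips-with-rare-long-ones pattern described above.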
Some people are inherently inclined to travel longer distances than the average, and the same is true for people with a lesser urge for movement. The radius of gyration is used to capture just that: it indicates the characteristic distance travelled by a person during a time period t.[13] Each user, within his radius of gyration rg(t){\displaystyle r_{g}(t)}, will choose his trip distance according to P(r){\displaystyle P(r)}.
The third component models the fact that humans tend to visit some locations more often than would be expected under a random scenario. For example, home, the workplace, or favorite restaurants are visited much more than many other places within a user's radius of gyration. It has been discovered that S(t)∼tμ{\displaystyle S(t)\ \sim t^{\mu }} where μ=0.6{\displaystyle \mu =0.6}, which indicates a sublinear growth in the number of distinct places visited by an individual.
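What the sublinear exponent μ = 0.6 means in practice can be checked with a one-line model; the prefactor s0 below is a hypothetical normalization, not a value from the study.

```python
def distinct_places(t, s0=1.0, mu=0.6):
    """S(t) = s0 * t**mu, the sublinear visitation growth law with mu = 0.6."""
    return s0 * t ** mu

# Doubling the observation window multiplies the number of distinct
# places visited by 2**0.6 ~ 1.52, not 2: exploration slows over time.
print(round(distinct_places(20) / distinct_places(10), 3))  # 1.516
```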
These three measures capture the fact that most trips happen between a limited number of places, with less frequent travels to places outside of an individual's radius of gyration.
Although human mobility is modeled as a random process, it is surprisingly predictable. By measuring the entropy of each person's movement, it has been shown[14] that there is a 93% potential predictability. This means that although there is great variance in the types of users and the distances that each of them travels, their overall characteristics are highly predictable. The implication is that, in principle, it is possible to accurately model processes that depend on human mobility patterns, such as disease or mobile virus spreading patterns.[21][22][23]
On the individual scale, daily human mobility can be explained by only 17 network motifs. Each individual exhibits one of these motifs characteristically over a period of several months. This opens up the possibility of reproducing daily individual mobility using a tractable analytical model.[24]
Infectious diseases usually spread across the globe because of the long-distance travels of carriers of the disease. These long-distance travels are made using air transportation systems, and it has been shown that "network topology, traffic structure, and individual mobility patterns are all essential for accurate predictions of disease spreading".[21] On a smaller spatial scale, the regularity of human movement patterns and its temporal structure should be taken into account in models of infectious disease spread.[25] Cellphone viruses that are transmitted via Bluetooth are greatly dependent on human interaction and movement. With more people using similar operating systems on their cellphones, it is becoming much easier for a virus epidemic to occur.[22]
In Transportation Planning, leveraging the characteristics of human movement, such as the tendency to travel short distances with few but regular bursts of long-distance trips, novel improvements have been made to Trip distribution models, specifically to the Gravity model of migration.[26]
|
https://en.wikipedia.org/wiki/Individual_mobility
|
Influence-for-hire, or collective influence, refers to the economy that has emerged around buying and selling influence on social media platforms.[1]
Companies that engage in the influence-for-hire industry range from content farms to high-end public relations agencies. Traditionally, influence operations have largely been confined to public sector actors like intelligence agencies; in the influence-for-hire industry, the groups conducting the operations are private, with commerce being their primary consideration.[2] However, many of the clients in the influence-for-hire industry are countries, or countries acting through proxies.[1] They are often located in countries with less expensive digital labor.[3]
In May 2021, Facebook took a Ukrainian influence-for-hire network offline. Facebook attributed the network to organizations and consultants linked to Ukrainian politicians including Andriy Derkach.[4][5]
During the COVID-19 pandemic, state-sponsored misinformation was spread through influence-for-hire networks.[6]
In August 2021, a report published by the Australian Strategic Policy Institute implicated the Chinese government and the ruling Chinese Communist Party in campaigns of online manipulation conducted against Australia and Taiwan using influence-for-hire.[7][8][9][10]
|
https://en.wikipedia.org/wiki/Influence-for-hire
|
Mathematical sociology is an interdisciplinary field of research concerned with the use of mathematics within sociological research.[1]
Starting in the early 1940s, Nicolas Rashevsky,[2][3] and subsequently in the late 1940s, Anatol Rapoport and others, developed a relational and probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During the late 1940s, formulas were derived that connected local parameters such as closure of contacts – if A is linked to both B and C, then there is a greater than chance probability that B and C are linked to each other – to the global network property of connectivity.[4]
Moreover, acquaintanceship is a positive tie, but what about negative ties such as animosity among persons? To tackle this problem, graph theory, which is the mathematical study of abstract representations of networks of points and lines, can be extended to include these two types of links, thereby creating models that represent both positive and negative sentiment relations, which are represented as signed graphs. A signed graph is called balanced if the product of the signs of all relations in every cycle (the links in every graph cycle) is positive. Through formalization by the mathematician Frank Harary, this work produced the fundamental theorem of this theory. It says that if a network of interrelated positive and negative ties is balanced, e.g. as illustrated by the psychological principle that "my friend's enemy is my enemy", then it consists of two sub-networks such that each has positive ties among its nodes and there are only negative ties between nodes in distinct sub-networks.[5] The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two sub-networks is empty, which might occur in very small networks.
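Harary's theorem also yields a simple algorithmic test: a signed graph is balanced exactly when its nodes can be split into two camps with positive ties inside camps and negative ties between them, which is a two-coloring problem. A minimal sketch (the node labels and edge lists are illustrative, not from the cited work):

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Check structural balance of a signed graph on nodes 0..n-1.
    Balanced iff a 2-coloring exists where positive edges join
    same-side nodes and negative edges join opposite sides."""
    adj = {i: [] for i in range(n)}
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    side = {}
    for start in range(n):
        if start in side:
            continue
        side[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = side[u] if sign > 0 else 1 - side[u]
                if v not in side:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return False  # a cycle with negative sign product exists
    return True

# "My friend's enemy is my enemy": A-B friends (+), B-C enemies (-),
# A-C enemies (-) is balanced; making A-C friends (+) instead is not.
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (0, 2, -1)]))  # True
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (0, 2, +1)]))  # False
```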
In another model, ties have relative strengths. 'Acquaintanceship' can be viewed as a 'weak' tie and 'friendship' as a strong tie. Like its uniform cousin discussed above, there is a concept of closure, called strong triadic closure. A graph satisfies strong triadic closure if, whenever A is strongly connected to B and B is strongly connected to C, A and C have a tie (either weak or strong).
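This property can be checked mechanically over a tie list; the sketch below (with illustrative edge data) reports the pairs that violate strong triadic closure:

```python
from itertools import combinations

def stc_violations(strong, weak):
    """Pairs (A, C) violating strong triadic closure: some B has strong
    ties to both A and C, yet A and C share no tie (weak or strong)."""
    strong = {frozenset(e) for e in strong}
    ties = strong | {frozenset(e) for e in weak}
    neighbors = {}
    for e in strong:
        a, b = tuple(e)
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    bad = set()
    for b, nbrs in neighbors.items():
        for a, c in combinations(sorted(nbrs), 2):
            if frozenset((a, c)) not in ties:
                bad.add((a, c))
    return sorted(bad)

# A-B and B-C strong, but A and C unlinked: a violation.
print(stc_violations([("A", "B"), ("B", "C")], []))            # [('A', 'C')]
# Adding even a weak A-C tie satisfies closure.
print(stc_violations([("A", "B"), ("B", "C")], [("A", "C")]))  # []
```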
In these two developments we have mathematical models bearing upon the analysis of structure. Other early influential developments in mathematical sociology pertained to process. For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory[6] of social groups by constructing a model consisting of a deterministic system of differential equations. A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group.
The emergence of mathematical models in the social sciences was part of the zeitgeist in the 1940s and 1950s in which a variety of new interdisciplinary scientific innovations occurred, such as information theory, game theory, cybernetics and mathematical model building in the social and behavioral sciences.[7]
Focusing on mathematics within sociological research, mathematical sociology uses mathematics to construct social theories. It aims to take sociological theory and express it in mathematical terms. The benefits of this approach include increased clarity and the ability to use mathematics to derive implications of a theory that cannot be arrived at intuitively. In mathematical sociology, the preferred style is encapsulated in the phrase "constructing a mathematical model." This means making specified assumptions about some social phenomenon, expressing them in formal mathematics, and providing an empirical interpretation for the ideas. It also means deducing properties of the model and comparing these with relevant empirical data. Social network analysis is the best-known contribution of this subfield to sociology as a whole and to the scientific community at large. The models typically used in mathematical sociology allow sociologists to understand how predictable local interactions are, and they are often able to elicit global patterns of social structure.[8]
In 1954, a critical expository analysis of Rashevsky's social behavior models was written by the sociologist James S. Coleman.[9] Rashevsky's models, as well as the model constructed by Simon, raise a question: how can one connect such theoretical models to the data of sociology, which often take the form of surveys in which the results are expressed as proportions of people believing or doing something? This suggests deriving the equations from assumptions about the chances of an individual changing state in a small interval of time, a procedure well known in the mathematics of stochastic processes.
Coleman embodied this idea in his 1964 book Introduction to Mathematical Sociology, which showed how stochastic processes in social networks could be analyzed in such a way as to enable testing of the constructed model by comparison with the relevant data. The same idea can be, and has been, applied to processes of change in social relations, an active research theme in the study of social networks, illustrated by an empirical study appearing in the journal Science.[10]
In other work, Coleman employed mathematical ideas drawn from economics, such as general equilibrium theory, to argue that general social theory should begin with a concept of purposive action and, for analytical reasons, approximate such action by the use of rational choice models (Coleman, 1990). This argument is similar to viewpoints expressed by other sociologists in their efforts to use rational choice theory in sociological analysis, although such efforts have met with substantive and philosophical criticisms.[11]
Meanwhile, structural analysis of the type indicated earlier received a further extension to social networks based on institutionalized social relations, notably those of kinship. The linkage of mathematics and sociology here involved abstract algebra, in particular, group theory.[12] This, in turn, led to a focus on a data-analytical version of homomorphic reduction of a complex social network (which along with many other techniques is presented in Wasserman and Faust 1994[13]).
In regard to Rapoport's random and biased net theory, his 1961 study of a large sociogram, co-authored with Horvath, turned out to be a very influential paper.[14] There was early evidence of this influence. In 1964, Thomas Fararo and a co-author analyzed another large friendship sociogram using a biased net model.[15] Later in the 1960s, Stanley Milgram described the small world problem and undertook a field experiment dealing with it.[16][17] A highly fertile idea was suggested and applied by Mark Granovetter, who drew upon Rapoport's 1961 paper to suggest and apply a distinction between weak and strong ties. The key idea was that there was "strength" in weak ties.[18]
Some programs of research in sociology employ experimental methods to study social interaction processes. Joseph Berger and his colleagues initiated such a program in which the central idea is the use of the theoretical concept "expectation state" to construct theoretical models to explain interpersonal processes, e.g., those linking external status in society to differential influence in local group decision-making. Much of this theoretical work is linked to mathematical model building, especially after the late 1970s adoption of a graph theoretic representation of social information processing, as Berger (2000) describes in looking back upon the development of his program of research. In 1962 he and his collaborators explained model building by reference to the goal of the model builder, which could be explication of a concept in a theory, representation of a single recurrent social process, or a broad theory based on a theoretical construct, such as, respectively, the concept of balance in psychological and social structures, the process of conformity in an experimental situation, and stimulus sampling theory.[19]
The generations of mathematical sociologists that followed Rapoport, Simon, Harary, Coleman, White and Berger, including those entering the field in the 1960s such as Thomas Fararo, Philip Bonacich, and Tom Mayer, among others, drew upon their work in a variety of ways.
Mathematical sociology remains a small subfield within the discipline, but it has succeeded in spawning a number of other subfields which share its goals of formally modeling social life. The foremost of these fields issocial network analysis, which has become among the fastest growing areas of sociology in the 21st century.[20]The other major development in the field is the rise ofcomputational sociology, which expands the mathematical toolkit with the use ofcomputer simulations,artificial intelligenceand advanced statistical methods. The latter subfield also makes use of the vast new data sets on social activity generated by social interaction on the internet.
One important indicator of the significance of mathematical sociology is that the general interest journals in the field, including such central journals asThe American Journal of SociologyandThe American Sociological Review, have published mathematical models that became influential in the field at large.
More recent trends in mathematical sociology are evident in contributions toThe Journal of Mathematical Sociology(JMS). Several trends stand out: the further development of formal theories that explain experimental data dealing with small group processes, the continuing interest in structural balance as a major mathematical and theoretical idea, the interpenetration of mathematical models oriented to theory and innovative quantitative techniques relating to methodology, the use of computer simulations to study problems in social complexity, interest inmicro–macro linkageand the problem of emergence, and ever-increasing research on networks of social relations.
Thus, topics from the earliest days, like balance and network models, continue to be of contemporary interest. The formal techniques employed remain many of the standard and well-known methods of mathematics: differential equations, stochastic processes and game theory. Newer tools like agent-based models used in computer simulation studies are prominently represented. Perennial substantive problems still drive research:social diffusion,social influence,social statusorigins and consequences, segregation, cooperation,collective action, power, and much more.
Many of the developments in mathematical sociology, includingformal theory, have exhibited notable decades-long advances that began with path-setting contributions by leading mathematical sociologists and formal theorists. This provides another way of taking note of recent contributions but with an emphasis on continuity with early work through the use of the idea of “research program,” which is a coherent series of theoretical and empirical studies based on some fundamental principle or approach. There are more than a few of these programs and what follows is no more than a brief capsule description of leading exemplars of this idea in which there is an emphasis on the originating leadership in each program and its further development over decades.
(1) Rational Choice Theory and James S. Coleman: After his 1964 pioneering Introduction to Mathematical Sociology, Coleman continued to make contributions to social theory and mathematical model building, and his 1990 volume, Foundations of Social Theory, was the major theoretical work of a career that spanned the period from the 1950s to the 1990s and included many other research-based contributions.[21] The Foundations book combined accessible examples of how rational choice theory could function in the analysis of such sociological topics as authority, trust, social capital and norms (in particular, their emergence). In this way, the book showed how rational choice theory could provide an effective basis for making the transition from micro to macro levels of sociological explanation. An important feature of the book is its use of mathematical ideas in generalizing the rational choice model to include interpersonal sentiment relations as modifiers of outcomes, doing so such that the generalized theory captures the original more self-oriented theory as a special case, a point emphasized in a later analysis of the theory.[22] The rationality presupposition of the theory led to debates among sociological theorists.[23] Nevertheless, many sociologists drew upon Coleman's formulation of a general template for micro-macro transition to gain leverage on the continuation of topics central to his and the discipline's explanatory focus on a variety of macrosocial phenomena, in which rational choice simplified the micro level in the interest of combining individual actions to account for macro outcomes of social processes.[24]
(2) Structuralism (Formal) and Harrison C. White: In the decades since his earliest contributions, Harrison White has led the field in putting social structural analysis on a mathematical and empirical basis, including the 1970 publication of Chains of Opportunity: System Models of Mobility in Organizations, which set out and applied to data a vacancy chain model for mobility in and across organizations. His other very influential work includes the operational concepts of blockmodel and structural equivalence, which start from a body of social relational data to produce analytical results using these procedures and concepts. These ideas and methods were developed in collaboration with his former students François Lorraine, Ronald Breiger, and Scott Boorman. These three are among the more than 30 students who earned their doctorates under White in the period 1963–1986.[25] The theory and application of blockmodels has been set out in detail in a recent monograph.[26] White's later contributions include a structuralist approach to markets[27] and, in 1992, a general theoretical framework,[28] later appearing in a revised edition.[29]
(3) Expectation states theory and Joseph Berger: Under Berger's intellectual and organizational leadership, Expectation States Theory branched out into a large number of specific programs of research on specific problems, each treated in terms of the master concept of expectation states. He and his colleague and frequent collaborator Morris Zelditch Jr not only produced work of their own but created a doctoral program at Stanford University that led to an enormous outpouring of research by notable former students, including Murray Webster, David Wagner, and Hamit Fisek. Collaboration with the mathematician Robert Z. Norman led to the use of mathematical graph theory as a way of representing and analyzing social information processing in self-other interactions. Berger and Zelditch also advanced work in formal theorizing and mathematical model building as early as 1962 with a collaborative expository analysis of types of models.[30] Berger and Zelditch stimulated advances in other theoretical research programs by providing outlets for the publication of new work, culminating in a 2002 edited volume[31] that includes a chapter presenting an authoritative overview of Expectation states theory as a program of cumulative research dealing with group processes.
(4) Formalization in Theoretical Sociology and Thomas J. Fararo: Many of this sociologist's contributions have been devoted to bringing mathematical thinking into greater contact with sociological theory.[32] He organized a symposium attended by sociological theorists in which formal theorists delivered papers that were subsequently published in 2000.[33] Through collaborations with students and colleagues, his own theoretical research program dealt with such topics as macrostructural theory and E-state structuralism (both with former student John Skvoretz), subjective images of stratification[34] (with former student Kenji Kosaka), tripartite structural analysis (with colleague Patrick Doreian)[35] and computational sociology (with colleague Norman P. Hummon).[36][37] Two of his books are extended treatments of his approach to theoretical sociology.[38][39]
(5) Social Network Analysis and Linton C. Freeman: In the early 1960s Freeman directed a sophisticated empirical study of community power structure. In 1978 he established the journal Social Networks. It rapidly became a major outlet for original research papers that used mathematical techniques to analyze network data. The journal also publishes conceptual and theoretical contributions, including his paper “Centrality in Social Networks: Conceptual Clarification.” In turn, the mathematical concept defined in that paper led to further elaborations of the ideas, to experimental tests, and to numerous applications in empirical studies.[40] He is the author of a study of the history and sociology of the field of social network analysis.[41]
(6) Quantitative Methodology and Kenneth C. Land: Kenneth Land has been on the frontier of quantitative methodology in sociology as well as formal theoretical model building. The influential yearly volume Sociological Methodology has been one of Land's favorite outlets for the publication of papers that often lie in the intersection of quantitative methodology and mathematical sociology. Two of his theoretical papers appeared early in this journal: “Mathematical Formalization of Durkheim's Theory of Division of Labor” (1970) and “Formal Theory” (1971). His decades-long research program includes contributions relating to numerous special topics and methods, including social statistics, social indicators, stochastic processes, mathematical criminology, demography and social forecasting. Thus Land brings to these fields the combined skills of a statistician, a mathematician and a sociologist.
(7) Affect Control Theory and David R. Heise: In 1979, Heise published a groundbreaking formal and empirical study in the tradition of interpretive sociology, especially symbolic interactionism, Understanding Events: Affect and the Construction of Social Action. It was the origination of a research program that has included his further theoretical and empirical studies and those of other sociologists, such as Lynn Smith-Lovin, Dawn Robinson and Neil MacKinnon. Definition of the situation and self-other definitions are two of the leading concepts in affect control theory. The formalism used by Heise and other contributors employs a validated form of measurement and a cybernetic control mechanism in which immediate feelings are compared with fundamental sentiments in such a way as to generate an effort to bring immediate feelings in a situation into correspondence with sentiments. In the simplest models, each person in an interactive pair is represented in terms of one side of a role relationship in which fundamental sentiments associated with each role guide the process of immediate interaction. A higher level of the control process can be activated in which the definition of the situation is transformed. This research program comprises several of the key chapters in a 2006 volume[42] of contributions to control systems theory (in the sense of Powers 1975[43]) in sociology.
(8) "Distributive Justice Theory" and Guillermina Jasso: Since 1980, Jasso has treated problems of distributive justice with an original theory that uses mathematical methods.[44] She has elaborated upon and applied this theory to a wide range of social phenomena.[45] Her most general mathematical apparatus, with the theory of distributive justice as a special case, deals with any subjective comparison between some actual state and some reference level for it, e.g., a comparison of an actual reward with an expected reward. In her justice theory, she starts with a very simple premise, the justice evaluation function (the natural logarithm of the ratio of actual to just reward), and then derives numerous empirically testable implications.[46]
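The justice evaluation function itself is compact enough to state directly; the reward figures below are hypothetical illustrations:

```python
import math

def justice_evaluation(actual, just):
    """Jasso's justice evaluation: J = ln(actual reward / just reward).
    J = 0 is perfect justice; J < 0 is under-reward; J > 0 is over-reward."""
    return math.log(actual / just)

# A hypothetical actual salary of 40 against a just salary of 50:
print(round(justice_evaluation(40, 50), 3))  # -0.223 (under-reward)
print(justice_evaluation(50, 50))            # 0.0 (perfect justice)
```

One consequence of the logarithmic form: for equal absolute departures from the just reward, under-reward yields a larger-magnitude evaluation than over-reward (|ln(40/50)| > |ln(60/50)|), so shortfalls are felt more keenly than equal-sized excesses.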
(9) Collaborative research and John Skvoretz: A major feature of modern science is collaborative research, in which the distinctive skills of the participants combine to produce original research. Skvoretz, in addition to his other contributions, has been a frequent collaborator in a variety of theoretical research programs, often contributing mathematical expertise as well as skills in experimental design, statistical data analysis and simulation methods. Some examples are: (1) Collaborative work on theoretical, statistical and mathematical problems in biased net theory.[47] (2) Collaborative contributions to Expectation States Theory.[48] (3) Collaborative contributions to Elementary Theory.[49] (4) Collaboration with Bruce Mayhew in a structuralist research program.[50] From the early 1970s, Skvoretz has been one of the most prolific contributors to the advance of mathematical sociology.[51]
The above discussion could be expanded to include many other programs and individuals, including European sociologists such as Peter Abell and the late Raymond Boudon.
The Mathematical Sociology section of The American Sociological Association in 2002 initiated awards for contributions to the field, including The James S. Coleman Distinguished Career Achievement Award. (Coleman had died in 1995, before the section had been established.) Given every other year, the awardees include some of those just listed in terms of their career-long research programs:
The section's other categories of awards and their recipients are listed at ASA Section on Mathematical Sociology.
Mathematical sociology textbooks cover a variety of models, usually explaining the required mathematical background before discussing important work in the literature (Fararo 1973, Leik and Meeker 1975, Bonacich and Lu 2012). An earlier text by Otomar Bartos (1967) is still of relevance. Of wider scope and mathematical sophistication is the text by Rapoport (1983). A very reader-friendly and imaginative introduction to explanatory thinking leading to models is Lave and March (1975, reprinted 1993). The Journal of Mathematical Sociology (started in 1971) has been open to papers covering a broad spectrum of topics employing a variety of types of mathematics, especially through frequent special issues. Other journals in sociology that publish papers with substantial use of mathematics are Computational and Mathematical Organization Theory, the Journal of Social Structure, and the Journal of Artificial Societies and Social Simulation.
Articles in Social Networks, a journal devoted to social structural analysis, very often employ mathematical models and related structural data analyses. In addition – importantly indicating the penetration of mathematical model building into sociological research – the major comprehensive journals in sociology, especially The American Journal of Sociology and The American Sociological Review, regularly publish articles featuring mathematical formulations.
|
https://en.wikipedia.org/wiki/Mathematical_sociology
|
Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system (n2). The law is named after Robert Metcalfe and was first proposed in 1980, albeit not in terms of users, but rather of "compatible communicating devices" (e.g., fax machines, telephones).[1] It later became associated with users on the Ethernet after a September 1993 Forbes article by George Gilder.[2]
Metcalfe's law characterizes many of the network effects of communication technologies and networks such as the Internet, social networking and the World Wide Web. Former Chairman of the U.S. Federal Communications Commission Reed Hundt said that this law gives the most understanding to the workings of the present-day Internet.[3] Mathematically, Metcalfe's law shows that the number of unique possible connections in an n{\displaystyle n}-node network can be expressed as the triangular number n(n−1)/2{\displaystyle n(n-1)/2}, which is asymptotically proportional to n2{\displaystyle n^{2}}.
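The triangular-number count is easy to verify numerically:

```python
def metcalfe_connections(n):
    """Unique possible connections among n nodes: the triangular
    number n(n-1)/2."""
    return n * (n - 1) // 2

# Asymptotically quadratic: doubling users from 10 to 20 takes the
# count from 45 to 190, a factor of about 4.2.
print(metcalfe_connections(10))  # 45
print(metcalfe_connections(20))  # 190
```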
The law has often been illustrated using the example of fax machines: a single fax machine on its own is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may send and receive documents increases.[4] This is a common illustration used to explain the network effect. Thus, in any social network, the greater the number of users with the service, the more valuable the service becomes to the community.
Metcalfe's law was conceived in 1983 in a presentation to the 3Com sales force.[5] It stated that V would be proportional to the total number of possible connections, or approximately n-squared.
The original incarnation was careful to delineate between a linear cost (Cn), non-linear growth (n2) and a non-constant proportionality factor, affinity (A). The break-even point, where costs are recouped, is given by: C×n=A×n(n−1)/2{\displaystyle C\times n=A\times n(n-1)/2} At some size, the right-hand side of the equation, V (value), exceeds the cost, and A describes the relationship between size and net value added. For large n, the net network value is then: Π=n(A×(n−1)/2−C){\displaystyle \Pi =n(A\times (n-1)/2-C)} Metcalfe properly dimensioned A as "value per user". Affinity is also a function of network size, and Metcalfe correctly asserted that A must decline as n grows large. In a 2006 interview, Metcalfe stated:[6]
There may be diseconomies of network scale that eventually drive values down with increasing size. So, if V=An2, it could be that A (for “affinity,” value per connection) is also a function of n and heads down after some network size, overwhelming n2.
Network size, and hence value, does not grow unbounded but is constrained by practical limitations such as infrastructure, access to technology, and bounded rationality such as Dunbar's number. It is almost always the case that user growth n reaches a saturation point: substitutes, competitors and technical obsolescence constrain the growth of n. Growth of n is typically assumed to follow a sigmoid function such as a logistic curve or Gompertz curve.
A is also governed by the connectivity or density of the network topology. In an undirected network, every edge connects two nodes, so m edges have 2m endpoints; the mean number of connections per node is given by c=2m/n{\displaystyle c=2m/n}.
The maximum possible number of edges in a simple network (i.e. one with no multi-edges or self-edges) is(n2)=n(n−1)/2{\displaystyle {\binom {n}{2}}=n(n-1)/2}.
Therefore the density ρ of a network is the fraction of those possible edges that are actually present: ρ = 2m/(n(n−1)) = c/(n−1), which for large networks is approximated by ρ=c/n{\displaystyle \rho =c/n}.[7]
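The degree and density quantities are straightforward to compute; a short sketch using m, n, and c as defined in the text (the example numbers are arbitrary):

```python
def mean_degree(n, m):
    # c = 2m/n: each of the m undirected edges has two endpoints.
    return 2 * m / n

def density(n, m):
    # rho: fraction of the n(n-1)/2 possible edges actually present.
    return m / (n * (n - 1) / 2)

n, m = 1000, 5000
c = mean_degree(n, m)  # 10.0
print(density(n, m))   # exact value, equal to c/(n-1)
print(c / n)           # large-n approximation rho ~ c/n
```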
Metcalfe's law assumes that each of the n{\displaystyle n} nodes provides equal benefit.[3] If this is not the case, for example because one fax machine serves 60 workers in a company, the second serves half that number, the third one third, and so on, then the relative value of an additional connection decreases. Likewise, in social networks, if users that join later use the network less than early adopters, then the benefit of each additional user may lessen, making the overall network less efficient if costs per user are fixed.
Within the context of social networks, many, including Metcalfe himself, have proposed modified models in which the value of the network grows as nlogn{\displaystyle n\log n} rather than n2{\displaystyle n^{2}}.[8][3] Reed and Andrew Odlyzko have sought out alternative scaling laws describing how a network's value grows with its size. Tongia and Wilson also examine the related question of the costs to those excluded.[9]
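The gap between the two proposed growth laws widens quickly with n; a brief numerical comparison (proportionality constants set to 1 purely for illustration):

```python
import math

def metcalfe(n):
    return n ** 2            # value ~ n^2

def nlogn(n):
    return n * math.log(n)   # proposed alternative: value ~ n log n

# The ratio n^2 / (n log n) = n / log n grows without bound,
# so the two models diverge sharply for large networks.
for n in (10 ** 3, 10 ** 6, 10 ** 9):
    print(n, round(metcalfe(n) / nlogn(n)))
```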
For more than 30 years, there was little concrete evidence in support of the law. Finally, in July 2013, Dutch researchers analyzed European Internet-usage patterns over a sufficiently long period and found n2{\displaystyle n^{2}} proportionality for small values of n{\displaystyle n} and nlogn{\displaystyle n\log n} proportionality for large values of n{\displaystyle n}.[10] A few months later, Metcalfe himself offered further evidence by using Facebook's data over the preceding 10 years to show a good fit for Metcalfe's law.[11]
In 2015, Zhang, Liu, and Xu parameterized the Metcalfe function in data from Tencent and Facebook. Their work showed that Metcalfe's law held for both, despite differences in audience between the two sites (Facebook serving a worldwide audience and Tencent serving only Chinese users). The functions for the two sites were VTencent=7.39×10−9×n2{\displaystyle V_{\text{Tencent}}=7.39\times 10^{-9}\times n^{2}} and VFacebook=5.70×10−9×n2{\displaystyle V_{\text{Facebook}}=5.70\times 10^{-9}\times n^{2}} respectively.[12] One of the earliest mentions of Metcalfe's law in the context of Bitcoin was in a 2014 Reddit post by Santostasi, who compared the observed generalized Metcalfe behavior for Bitcoin to Zipf's law and the theoretical Metcalfe result.[13] Metcalfe's law is a critical component of Santostasi's Bitcoin Power Law Theory.[14] In a working paper, Peterson linked time-value-of-money concepts to Metcalfe value using Bitcoin and Facebook as numerical examples,[15] and in 2018 applied Metcalfe's law to Bitcoin, showing that over 70% of the variance in Bitcoin's value was explained by applying Metcalfe's law to increases in Bitcoin network size.[16]
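The two fitted Tencent and Facebook curves reported by Zhang, Liu, and Xu differ only in their proportionality constants; plugging in a user count shows the scale of the fitted values (the n used here is arbitrary, chosen only for illustration):

```python
def v_tencent(n):
    # Zhang, Liu, and Xu (2015) fit: V = 7.39e-9 * n^2
    return 7.39e-9 * n ** 2

def v_facebook(n):
    # Corresponding Facebook fit: V = 5.70e-9 * n^2
    return 5.70e-9 * n ** 2

n = 500_000_000  # an illustrative half-billion users
print(v_tencent(n))   # ~1.85e9
print(v_facebook(n))  # ~1.43e9
```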
In a 2024 interview, mathematician Terence Tao emphasized the importance of universality and networking within the mathematics community, citing Metcalfe's law to support this perspective. Tao believes that a larger audience leads to more connections, which ultimately results in positive developments within the community. As he put it, "my whole career experience has been sort of the more connections equals just better stuff happening".[17]
|
https://en.wikipedia.org/wiki/Metcalfe%27s_law
|
Netocracy was a term invented by the editorial board of the American technology magazine Wired in the early 1990s. A portmanteau of Internet and aristocracy, netocracy refers to a perceived global upper-class that bases its power on a technological advantage and networking skills, in comparison to what is portrayed as a bourgeoisie of gradually diminishing importance.
The concept was later picked up and redefined byAlexander BardandJan Söderqvistfor their bookNetocracy — The New Power Elite and Life After Capitalism(originally published in Swedish in 2000 asNätokraterna : boken om det elektroniska klassamhället, published in English by Reuters/Pearsall UK in 2002).
The netocracy concept has been compared withRichard Florida's concept of thecreative class. Bard and Söderqvist have also defined anunderclassin opposition to the netocracy, which they refer to as the consumtariat.
Alexander Bard describes a new underclass called theconsumtariat, aportmanteauofconsumerandproletariat, whose main activity is consumption, regulated from above. It is kept occupied with private problems, its desires provoked with the use of advertisements and its active participation is limited to things likeproduct choice, product customization, engaging with interactive products and life-style choice.[1]
Similar to netocracy is the concept of cyberdeutocracy. Karl W. Deutsch, in his book The Nerves of Government: Models of Political Communication and Control,[2] hypothesized about "information elites, controlling means of mass communication and, accordingly, power institutions, the functioning of which is based on the use of information in their activities". Thus Deutsch introduced the concept of deutocracy, combining the words 'Deutsch' and 'autocracy' to get the new term. Cyberdeutocracy combines 'deutocracy' with the prefix 'cyber-' and is defined as a political regime based on the control by the political and corporate elites of the information and communication infrastructure of the Internet space. As a tool of social control, cyberdeutocracy allows elites to engage in the:
The term was coined by Phillip Freiberg in his 2018 paper "What are CyberSimulacra and Cyberdeutocracy?"[3]
Netocracy can also refer to "Internet-enabled democracy" where issue-based politics will supersede party-based politics. In this sense, the wordnetocracyis also used as aportmanteauofInternetanddemocracy, not ofInternetandaristocracy:
|
https://en.wikipedia.org/wiki/Netocracy
|
Network-based diffusion analysis (NBDA)is a statistical tool to detect and quantify social transmission of information or a behaviour in social networks (SNA, etc.). NBDA assumes thatsocial transmissionof a behavior follows the social network of associations or interactions among individuals, since individuals who spend a lot of time together, or who interact more have more opportunity to learn from each other.[1]Therefore, NBDA infers social transmission if the spread of a novel behavior follows the social network of apopulation. NBDA thus allows the study ofsocial learningto be linked toanimal behaviorresearch that uses social network analysis. NBDA was introduced by Franz & Nunn[2]and further developed by Hoppitt, Boogert, & Laland.[3]
NBDA requires prior knowledge about the underlying social network of a population.[2] In an observational study, the order (or timing) in which individuals in the population acquire a behaviour or information is recorded. NBDA then tests whether the spread of the information or behaviour is explained by the previously determined network. Because more closely associated individuals are more likely to interact with each other, information is assumed to travel along social ties. If there is a good match between the diffusion of information and the underlying network, social transmission is inferred. Otherwise, it is assumed that the information was asocially acquired (e.g. by trial and error).
NBDA does not only serve as a tool for the detection of social learning, but also allows the estimation of the strength of the social transmission effect.[3]In addition, several individual-level variables can be included in the analysis, which have potential influence on an individual's learning rate (e.g. gender, rank or age), and can also be used to model the effect of, and statistically control for potential ecological and genetic influences. NBDA has been successfully used in a number of studies to identify and quantify the effects of social transmission on the spread of behaviors in both wild and captive animal populations such asstarlings,[3]chimpanzees[4]orhumpback whales.[5]
|
https://en.wikipedia.org/wiki/Network-based_diffusion_analysis
|
Organizational patterns are inspired in large part by the principles of the software pattern community, which in turn takes its cues from Christopher Alexander's work on patterns of the built world.[1] Organizational patterns also have roots in Kroeber's classic anthropological texts on the patterns that underlie culture and society.[2] They in turn have provided inspiration for the Agile software development movement,
and for the creation of parts of Scrum and of Extreme Programming in particular.
An early explicit citation to patterns of social structure can be found in the anthropological literature.
Patterns are those arrangements or systems of internal relationship which give to any culture its coherence or plan, and keep it from being a mere accumulation of random bits.
They are therefore of primary importance.[3]
Kroeber speaks of universal patterns that describe some overall scheme common to all human culture; of systemic patterns, which are broad but normative forms relating to beliefs, behaviors, signs, and economics; and of total culture patterns that are local. Kroeber notes that systemic patterns can pass from culture to culture:
A second kind of pattern consists of a system or complex of cultural material that has proved its utility as a system and therefore tends to cohere and persist as a unit; it is modifiable only with difficulty as to its underlying plan. Any one such systemic pattern is limited primarily to one aspect of culture, such as subsistence, religion, or economics; but it is not limited areally, or to one particular culture; it can be diffused cross-culturally, from one people to another. . . . What distinguishes these systemic patterns of culture—or well-patterned systems, as they might also be called—is a specific interrelation of their component parts, a nexus that holds them together strongly, and tends to preserve the basic plan... As a result of the persistence of these systemic patterns, their significance becomes most evident on a historical view.[4]
The pattern aspect ofKroeber's view fits very well the systems-thinking pattern view ofChristopher Alexanderin the field of architecture.
Alexander's books became an inspiration for the software world, and in particular for theobject-oriented programmingworld, in about 1993.
Organizational patterns in the sense they are recognized in the software community today first made an appearance at the originalHillside Groupworkshop that would lead to the pattern community and itsPLoPconferences.[5]
The Hillside Groupsent out a call for pattern papers and, in 1994, held the first pattern conference at Allerton Park in central Illinois in the United States.
The second conference, also at Allerton, would follow a year later.
These first twoPLoPconferences witnessed a handful of organizational patterns:
A flurry of associated publications and follow-up articles followed quickly thereafter, including an extemporization of the organizational patterns approach in the Bell Labs Technical Journal,[11]an invited piece in ASE,[12]a CACM article by Alistair Cockburn[13]and, shortly thereafter, a pattern-laden book by Alistair,[14]as well as chapters by Benualdi[15]and Janoff[16]in thePatterns Handbook.It was also about this time thatMichael A. Beedleet al. published patterns that described explicit extensions to existing organizational patterns, for application in projects using a then five-year-old software development framework called Scrum.[17]A few more articles, such as the one by Brash et al.[18]also started to appear.
Little more happened on the organizational patterns front until the publication of the book by Berczuk et al. on configuration management patterns;[19]this was a break-off effort from the effort originally centered at Bell Labs.
In the meantime,Jim Coplienand Neil Harrison had been collecting organizational patterns and combining them into a collection of four pattern languages.
Most of these patterns were based on the original research from Bell Laboratories, which studied over 120 organizations over the period of a decade.
These empirical studies were based on subject role-play in software development organizations, reminiscent of the sociodramas ofMoreno's originalsocial networkapproach.[20]However, the pattern language also had substantial input from other sources and in particular the works by Cockburn, Berczuk, and Cunningham.
This collection was published asOrganizational Patterns of Agile Software Developmentin 2004.[21]
One of the most recent organizational pattern articles comes from an early pattern contributor and advocate, the object design pioneer Grady Booch.[22]
Like other patterns, organizational patterns aren't created or invented: they are discovered (or "mined") from empirical observation.
The early work on organizational patterns at Bell Laboratories focused on extracting patterns fromsocial networkanalysis.
That research used empirical role-playing techniques to gather information about the structure of relationships in the subject organization.
These structures were analyzed for recurring patterns across organization and their contribution to achieving organizational goals.
The recurring successful structures were written up inpattern formto describe their tradeoffs and detailed design decisions (forces), the context in which they apply, along with a generic description of the solution.
Patterns provide an incremental path to organizational improvement.
The pattern style of building something (in this case, an organization) is:
As with Alexander-style patterns of software architecture, organizational patterns can be organized intopattern languages: collections of patterns that build on each other.
A pattern language can suggest the patterns to be applied for a known set of working patterns that are present.
The history ofAgile software developmentand of organizational patterns have been entwined since the beginning.
Kent Beck was the shepherd (interactive pattern reviewer) of the Coplien paper for the 1995PLoP,
and he mentions the influence of this work onextreme programmingin a 2003 publication.[23]The idea of daily Scrum meetings in fact came from a draft of an article for Dr. Dobb's Journal[24]that described the organizational patterns research on the Borland QPW project.[25]Beedle's early work with Sutherland brought the pattern perspective more solidly into the history of Scrum.
More recently, the Scrum community has taken up newfound interest in organizational patterns[26]and there is joint research going forward between the two communities.
In this vein, the first ScrumPLoPconference took place in Sweden in May, 2010, sanctioned by both theScrum Allianceand theHillside Group.
|
https://en.wikipedia.org/wiki/Organizational_patterns
|
Thesmall-world experimentcomprised several experiments conducted byStanley Milgramand other researchers examining theaverage path lengthforsocial networksof people in the United States.[1]The research was groundbreaking in that it suggested that human society is asmall-world-type network characterized by short path-lengths. The experiments are often associated with the phrase "six degrees of separation", although Milgram did not use this term himself.
Guglielmo Marconi's conjectures based on his radio work in the early 20th century, which were articulated in his 1909Nobel Prizeaddress,[2][failed verification]may have inspired[3]Hungarian authorFrigyes Karinthyto write a challenge to find another person to whom he could not be connected through at most five people.[4]This is perhaps the earliest reference to the concept ofsix degrees of separation, and the search for an answer to the small world problem.
MathematicianManfred Kochenand political scientistIthiel de Sola Poolwrote a mathematical manuscript, "Contacts and Influences", while working at theUniversity of Parisin the early 1950s, during a time when Milgram visited and collaborated in their research. Their unpublished manuscript circulated among academics for over 20 years before publication in 1978. It formally articulated the mechanics ofsocial networks, and explored the mathematical consequences of these (including the degree of connectedness). The manuscript left many significant questions about networks unresolved, and one of these was the number of degrees of separation in actual social networks.
Milgram took up the challenge on his return from Paris, leading to the experiments reported in "The Small World Problem" in the May 1967 (charter) issue of the popular magazinePsychology Today, with a more rigorous version of the paper appearing inSociometrytwo years later. ThePsychology Todayarticle generated enormous publicity for the experiments, which are well known today, long after much of the formative work has been forgotten.
Milgram's experiment was conceived in an era when a number of independent threads were converging on the idea that the world is becoming increasingly interconnected. Michael Gurevich had conducted seminal work in his empirical study of the structure of social networks in his MIT doctoral dissertation under Pool. Mathematician Manfred Kochen, an Austrian who had been involved in statist urban design, extrapolated these empirical results in a mathematical manuscript, Contacts and Influences, concluding that, in an American-sized population without social structure, "it is practically certain that any two individuals can contact one another by means of at least two intermediaries. In a [socially] structured population it is less likely but still seems probable. And perhaps for the whole world's population, probably only one more bridging individual should be needed."[5] They subsequently constructed Monte Carlo simulations based on Gurevich's data, which recognized that both weak and strong acquaintance links are needed to model social structure. The simulations, running on the slower computers of 1973, were limited, but still were able to predict that a more realistic three degrees of separation existed across the U.S. population, a value that foreshadowed the findings of Milgram.
Milgram revisited Gurevich's experiments in acquaintanceship networks when he conducted a highly publicized set of experiments beginning in 1967 atHarvard University. One of Milgram's most famous works is a study of obedience and authority, which is widely known as the Milgram Experiment.[6]Milgram's earlier association with Pool and Kochen was the likely source of his interest in the increasing interconnectedness among human beings. Gurevich's interviews served as a basis for his small world experiments.
Milgram sought to develop an experiment that could answer the small world problem. This was the same phenomenon articulated by the writerFrigyes Karinthyin the 1920s while documenting a widely circulated belief inBudapestthat individuals were separated by six degrees of social contact. This observation, in turn, was loosely based on the seminaldemographicwork of the Statists who were so influential in the design of Eastern European cities during that period. MathematicianBenoit Mandelbrot, born in Poland and having traveled extensively in Eastern Europe, was aware of the Statist rules of thumb, and was also a colleague of Pool, Kochen and Milgram at the University of Paris during the early 1950s (Kochen brought Mandelbrot to work at theInstitute for Advanced Studyand laterIBMin the U.S.). This circle of researchers was fascinated by the interconnectedness and "social capital" of social networks.
Milgram's study results showed that people in the United States seemed to be connected by approximately three friendship links, on average, without speculating on global linkages; he never actually used the phrase "six degrees of separation". Since the Psychology Today article gave the experiments wide publicity, Milgram, Kochen, and Karinthy have each been incorrectly credited with originating the notion of "six degrees"; the most likely popularizer of the phrase "six degrees of separation" is John Guare, who attributed the value "six" to Marconi.
Milgram's experiment developed out of a desire to learn more about the probability that two randomly selected people would know each other.[7]This is one way of looking at the small world problem. An alternative view of the problem is to imagine the population as a social network and attempt to find theaverage path lengthbetween any two nodes. Milgram's experiment was designed to measure these path lengths by developing a procedure to count the number of ties between any two people.
Shortly after the experiments began, letters began arriving at the targets and the researchers received postcards from the respondents. Sometimes the packet would reach the target in as few as one or two hops, while some chains were composed of as many as nine or ten links. However, a significant problem was that people often refused to pass the letter forward, and thus the chain never reached its destination. In one case, 232 of the 296 letters never reached their destination.[7]
However, 64 of the letters eventually did reach the target contact. Among these chains, theaverage path lengthfell around five and a half or six. Hence, the researchers concluded that people in the United States are separated by about six people on average. Although Milgram himself never used the phrase "six degrees of separation", these findings are likely to have contributed to its widespread acceptance.[4]
In an experiment in which 160 letters were mailed out, 24 reached the target in his home inSharon, Massachusetts. Of those 24 letters, 16 were given to the target by the same person, a clothing merchant Milgram called "Mr. Jacobs". Of those that reached the target at his office, more than half came from two other men.[8]
The researchers used the postcards to qualitatively examine the types of chains that are created. Generally, the package quickly reached a close geographic proximity, but would circle the target almost randomly until it found the target's inner circle of friends.[7]This suggests that participants strongly favored geographic characteristics when choosing an appropriate next person in the chain.
There are a number of methodological criticisms of the small-world experiment, which suggest that the average path length might actually be smaller or larger than Milgram expected. Four such criticisms are summarized here:
In addition to these methodological criticisms, conceptual issues are debated. One regards the social relevance of indirect contact chains of different degrees of separation. Much formal and empirical work focuses on diffusion processes, but the literature on the small-world problem also often illustrates the relevance of the research using an example (similar to Milgram's experiment) of a targeted search in which a starting person tries to obtain some kind of resource (e.g., information) from a target person, using a number of intermediaries to reach that target person. However, there is little empirical research showing that indirect channels with a length of about six degrees of separation are actually used for such directed search, or that such search processes are more efficient compared to other means (e.g., finding information in a directory).[11]
The Reversal Small-World Experiment is a 1978 study conducted by Peter D. Killworth and H. Russell Bernard, aiming to test and refine the understanding of the small-world phenomenon. This phenomenon suggests that individuals in a social network are connected by surprisingly short chains of acquaintances. The study builds upon the pioneering work of Stanley Milgram. Killworth and Bernard introduced a reversal approach to the experiment, addressing key limitations in Milgram’s methodology and testing the validity of his conclusions regarding the structure and reachability of social networks.
Milgram’s original experiment relied on forward routing, where participants were tasked with passing messages to a target person by selecting acquaintances they believed were closest to the destination. However, Milgram’s findings were limited by:
To address these issues, Killworth and Bernard designed an experiment where messages started from the target person and traced paths backward through networks to the originating participants. This reversal method aimed to provide a more accurate measure of social reachability and improve the understanding of network structures.
Killworth and Bernard conducted their study using two separate experimental setups:
The study involved diverse groups of participants from different social settings, aiming to compare various types of social networks. The researchers asked participants to estimate how many intermediaries would be needed to connect them to a randomly chosen person; to list and categorize their acquaintances, including professional, familial, and casual relationships; and to assess how well they could predict social distances.
The key differences from Milgram’s experiment were: the reverse tracking of connections, rather than relying on participants' ability to forward messages; an emphasis on estimating social ties, rather than simply measuring completion rates of message chains; and an analysis of clustering patterns, determining whether certain groups (e.g., work colleagues vs. family) were more effective in forming short chains.
The Tipping PointbyMalcolm Gladwell, based on articles originally published inThe New Yorker,[13]elaborates on the "funneling" concept. Gladwell condenses sociological research, which argues that the six-degrees phenomenon is dependent on a few extraordinary people ("connectors") with large networks of contacts and friends: these hubs then mediate the connections between the vast majority of otherwise weakly connected individuals.
Recent work on the effects of the small-world phenomenon on disease transmission, however, has indicated that due to the strongly connected nature of social networks as a whole, removing these hubs from a population usually has little effect on the average path length through the graph (Barrett et al., 2005).[citation needed]
A corollary of network structures is that if the edges that connect nodes in a network, even a randomly constructed one, are above a certain threshold, then the shortest path between nodes, averaged across the entire network, is short. Subsequent research following Milgram's experiment, namely by Watts and Strogatz, has aimed to reflect the highly connected and highly clustered networks of reality.[14] By combining lattice structures and random graphs in their model, these researchers successfully captured the interconnection across large groups of individuals that Milgram illustrated in his famous experiment. When applied with game-theory dynamics to construct small-scale yet highly dynamic models, these clustered small-network graphs have had broad reach across academic domains, including economics,[15] behavioral science,[16] neuroscience,[17] computer science,[18] and epidemiology.[19] As with Milgram's original experiment, the small-network model is commonly used in understanding social systems, since networks represent individuals as nodes embedded in a community of other nodes. A focus has been understanding the influence of social dynamics such as herding on individual behavior.[20] Ferreira, Hong, Rutherford et al. explore social networks as a contemporary analogy that propagates the message of protests around the globe, making a phenomenon like the Arab Spring more likely than in earlier societies. They found an increase in the number of simultaneous protests beginning in 2005 and 2006, when Twitter, Facebook and other social networks began to be broadly used. They also note that central hubs, or nodes that connect to many otherwise unconnected nodes and subnetworks, play a crucial role in spreading the message of a protest.[21]
Smaller communities, such as mathematicians and actors, have been found to be densely connected by chains of personal or professional associations. Mathematicians have created theErdős numberto describe their distance fromPaul Erdősbased on shared publications. A similar exercise has been carried out for the actorKevin Baconand other actors who appeared in movies together with him — the latter effort informing the game "Six Degrees of Kevin Bacon". There is also the combinedErdős-Bacon number, for actor-mathematicians and mathematician-actors. Players of the popular Asian gameGodescribe their distance from the great playerHoninbo Shusakuby counting theirShusaku number, which counts degrees of separation through the games the players have had.[22]
The small-world question is still a popular research topic today, with many experiments still being conducted. For instance, Peter Dodds, Roby Muhamad, and Duncan Watts conducted the first large-scale replication of Milgram's experiment, involving 24,163 e-mail chains and 18 targets around the world.[23]
Dodds et al. also found that the mean chain length was roughly six, even after accounting for attrition. A similar experiment using popular social networking sites as a medium was carried out at Carnegie Mellon University. Results showed that very few messages actually reached their destination. However, the critiques that apply to Milgram's experiment largely apply also to this current research.[citation needed]
Recent research suggests that the small-world effect is a phenomenon that appeared rather recently in human history, leading to a drastic reduction in the average chain distance in social and physical networks. This can be justified by studying evolution patterns of infectious diseases throughout history, notably theBlack Plaguein Medieval Europe. Past epidemics have been noticed to spread in waves from well-defined central points, which can be explained through the localized nature of interactions of medieval populations. More recentepidemicshave exhibited qualitatively different properties, as diseases no longer spread from one location outward, but rather with many starting clusters, due to travel and long-range physical (and social) interactions. This means that new long-distance connections were made through the development of transportation and communication technologies and that the likelihood of two individuals knowing each other if they live far away from each other has increased enough to drastically change the pattern of disease spread. This serves as an indication that the graph of physical and social connections in the world’s population has structurally changed.[24]
In 1998,Duncan J. WattsandSteven StrogatzfromCornell Universitypublished the first network model on the small-world phenomenon. They showed that networks from both the natural and man-made world, such aspower gridsand the neural network ofC. elegans, exhibit the small-world phenomenon. Watts and Strogatz showed that, beginning with a regular lattice, the addition of a small number of random links reduces the diameter—the longest direct path between any two vertices in the network—from being very long to being very short.[25]The research was originally inspired by Watts' efforts to understand the synchronization ofcricketchirps, which show a high degree of coordination over long ranges as though the insects are being guided by an invisible conductor. The mathematical model which Watts and Strogatz developed to explain this phenomenon has since been applied in a wide range of different areas. In Watts' words:[26]
I think I've been contacted by someone from just about every field outside of English literature. I've had letters from mathematicians, physicists, biochemists, neurophysiologists, epidemiologists, economists, sociologists; from people in marketing, information systems, civil engineering, and from a business enterprise that uses the concept of the small world for networking purposes on the Internet.
Generally, their model demonstrated the truth in Mark Granovetter's observation that it is "the strength of weak ties"[27] that holds together a social network. Although the specific model has since been generalized by Jon Kleinberg[citation needed], it remains a canonical case study in the field of complex networks. In network theory, the idea presented in the small-world network model has been explored quite extensively. Indeed, several classic results in random graph theory show that even networks with no real topological structure exhibit the small-world phenomenon, which mathematically is expressed as the diameter of the network growing with the logarithm of the number of nodes (rather than proportional to the number of nodes, as in the case for a lattice). This result similarly maps onto networks with a power-law degree distribution, such as scale-free networks.
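The effect described above—a few random links shrinking the average distance in a lattice—can be sketched in a short simulation. This is a toy illustration, not the Watts–Strogatz model itself: shortcuts are added to a ring lattice rather than rewired (in the spirit of the Newman–Watts variant), and all parameters are chosen arbitrarily.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring lattice: each node links to its k nearest neighbours (k/2 per side)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all ordered pairs, via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def add_shortcuts(adj, m, rng):
    """Add m random long-range links between nodes that are not already adjacent."""
    nodes = list(adj)
    for _ in range(m):
        u, v = rng.sample(nodes, 2)
        while v in adj[u]:
            u, v = rng.sample(nodes, 2)
        adj[u].add(v)
        adj[v].add(u)

rng = random.Random(0)
g = ring_lattice(200, 4)
before = avg_path_length(g)
add_shortcuts(g, 10, rng)      # only 10 shortcuts on a 200-node lattice
after = avg_path_length(g)
print(before, after)           # the mean distance drops sharply after the shortcuts
```

Because each added shortcut joins two previously non-adjacent nodes, the average path length strictly decreases, mirroring the qualitative claim that a handful of random links suffices to make a large lattice "small".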
In computer science, the small-world phenomenon (although it is not typically called that) is used in the development of secure peer-to-peer protocols, novel routing algorithms for the Internet and ad hoc wireless networks, and search algorithms for communication networks of all kinds.
With the rise of digital communication and online social networks, researchers have revisited the small-world phenomenon in large-scale, real-world contexts. Modern studies indicate that the degrees of separation have significantly decreased, particularly due to the widespread use of social media platforms.
One of the most extensive studies on digital networks was conducted by Facebook and the University of Milan. In 2011, researchers analyzed the connections between 721 million active Facebook users—over 10% of the global population at the time. They found that the average number of intermediaries between any two users was 4.74, suggesting a much smaller world than previously estimated.[28] By 2016, an updated study by Facebook revealed that this number had further decreased to just 3.57 degrees of separation, highlighting the growing interconnectedness of individuals through digital platforms.[29]
The increasing reach of digital networks has profound implications across various domains.
While digital connectivity has brought people closer, it also presents challenges such as misinformation spread, privacy concerns, and the impact of online interactions on real-world relationships. Nonetheless, these studies demonstrate how technology continues to reshape social structures, reducing the degrees of separation and further validating the small-world phenomenon in the digital age.
The small-world phenomenon, originally demonstrated by Stanley Milgram's experiment, suggests that individuals in large social networks are connected through surprisingly short chains of acquaintances. This structural property has significant implications for social capital, which refers to the resources and benefits that individuals or groups can access through their social connections. Research has shown that small-world networks optimize both local clustering and global reach, facilitating the efficient flow of information and trust. In such networks, social capital is enhanced as weak ties—bridges between otherwise distant clusters—enable access to diverse resources and opportunities. These weak ties, often described in Mark Granovetter's strength of weak ties theory, act as conduits for novel information and social mobility. Moreover, small-world structures support both bonding social capital, by reinforcing strong community ties, and bridging social capital, by connecting disparate social groups.[30]
Empirical studies have linked the small-world topology to innovation diffusion, job-market efficiency, and collective action, demonstrating that network structure plays a crucial role in shaping social capital at both individual and societal levels.[31]
Social networks pervade popular culture in the United States and elsewhere. In particular, the notion of six degrees has become part of the collective consciousness. Social networking services such as Facebook, LinkedIn, and Instagram have greatly increased the connectivity of the online space through the application of social networking concepts.
https://en.wikipedia.org/wiki/Small_world_phenomenon
Social media analytics or social media monitoring is the process of gathering and analyzing data from social networks such as Facebook, Instagram, LinkedIn, or Twitter. A part of social media analytics is called social media monitoring or social listening. It is commonly used by marketers to track online conversations about products and companies. One author defined it as "the art and science of extracting valuable hidden insights from vast amounts of semi-structured and unstructured social media data to enable informed and insightful decision-making."[1]
There are three main steps in analyzing social media: data identification, data analysis, and information interpretation. To maximize the value derived at every point during the process, analysts may define a question to be answered. The important questions for data analysis are: "Who? What? Where? When? Why? and How?" These questions help in determining the proper data sources to evaluate, which can affect the type of analysis that can be performed.[2] To make it easier to track social media analytics, purpose-built tools such as Hootsuite, Sprout Social, Later, and Buffer have been created to help companies consolidate analytics into one place.[3]
Data identification is the process of identifying the subsets of available data to focus on for analysis. Raw data is useful once it is interpreted: after data has been analyzed, it can begin to convey a message, and any data that conveys a meaningful message becomes information. At a high level, unprocessed data passes through the following forms on its way to an exact message: noisy data (relevant and irrelevant data mixed together); filtered data (only relevant data); information (data that conveys a vague message); knowledge (data that conveys a precise message); and wisdom (data that conveys an exact message and the reason behind it). To derive wisdom from unprocessed data, we need to start processing it, refine the dataset by including the data we want to focus on, and organize data to identify information. In the context of social media analytics, data identification means "what" content is of interest. In addition to the text of content, we want to know: who wrote the text? Where was it found, or on which social media venue did it appear? Are we interested in information from a specific locale? When did someone say something in social media?[2]
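Data identification can be sketched in a few lines of code. The posts, field names, and keyword below are invented for illustration: noisy raw data is narrowed to relevant data, which is then organized into information answering the who/where questions.

```python
# Hypothetical raw posts; the fields mirror the who/what/where/when questions.
raw_posts = [
    {"author": "alice", "venue": "Twitter",  "locale": "US", "text": "Love the new phone!"},
    {"author": "bob",   "venue": "Facebook", "locale": "UK", "text": "Traffic is terrible today."},
    {"author": "carol", "venue": "Twitter",  "locale": "US", "text": "New phone battery dies fast."},
]

# Noisy data -> filtered data: keep only posts on the topic and from the locale of interest.
relevant = [p for p in raw_posts
            if "phone" in p["text"].lower() and p["locale"] == "US"]

# Filtered data -> information: who said it, and on which venue it appeared.
information = [(p["author"], p["venue"]) for p in relevant]
print(information)  # [('alice', 'Twitter'), ('carol', 'Twitter')]
```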
Several attributes of the data need to be considered at this stage.
Data analysis is the set of activities that assist in transforming raw data into insight, which in turn leads to a new base of knowledge and business value. In other words, data analysis is the phase that takes filtered data as input and transforms it into information of value to the analysts. Many different types of analysis can be performed with social media data, including analysis of posts, sentiment, sentiment drivers, geography, demographics, etc. The data analysis step begins once we know what problem we want to solve and know that we have sufficient data to generate a meaningful result. How can we know if we have enough evidence to warrant a conclusion? We cannot know until we start analyzing the data. If, during analysis, the data turns out to be insufficient, we reiterate the first phase and modify the question. If the data is believed to be sufficient for analysis, we need to build a data model.[2]
Developing a data model is a process or method that we use to organize data elements and standardize how the individual data elements relate to each other. This step is important because we want to run a computer program over the data; we need a way to tell the computer which words or themes are important and if certain words relate to the topic we are exploring.
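A data model can be as simple as a standardized record type. This hypothetical sketch uses a Python dataclass to fix which elements of a post matter and how they relate; the field names are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """Minimal data model for a social media post: standardizing these fields
    tells downstream programs which data elements matter and how they relate."""
    author: str
    venue: str
    text: str
    topics: list = field(default_factory=list)  # themes the post relates to

p = Post(author="alice", venue="Twitter",
         text="Love the new phone!", topics=["phone"])
print(p.topics)  # ['phone']
```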
In the analysis of our data, it is handy to have several tools at our disposal to gain a different perspective on discussions taking place around the topic. The aim here is to configure the tools to perform at peak for a particular task. For example, consider a word cloud: if we take a large amount of data around computer professionals, say the "IT architect", and build a word cloud, no doubt the largest word in the cloud would be "architect". This analysis is also about tool usage. Some tools may do a good job at determining sentiment, whereas others may do a better job at breaking down text into a grammatical form that enables us to better understand the meaning and use of various words or phrases. In performing analysis, it is difficult to enumerate each and every step to take; it is very much an iterative approach, as there is no prescribed way of doing things.[2]
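The word-cloud observation above can be reproduced with a simple term count. The toy corpus and stopword list are invented for illustration; a word cloud is essentially a rendering of exactly these frequencies.

```python
from collections import Counter
import re

# Invented snippets of discussion around the "IT architect" topic.
posts = [
    "The IT architect designed the new cloud architecture",
    "Our architect reviewed the security architecture today",
    "Great talk by the lead architect on data pipelines",
]

# Tokenize and drop common filler words so topical terms dominate.
words = re.findall(r"[a-z]+", " ".join(posts).lower())
stopwords = {"the", "our", "on", "by", "new", "it", "today"}
counts = Counter(w for w in words if w not in stopwords)

print(counts.most_common(3))  # "architect" dominates, as noted above
```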
A taxonomy, and the insights associated with it, are then derived from that analysis.
The insights derived from analysis can be as varied as the original question that was posed in step one of analysis. At this stage, as the nontechnical business users are the receivers of the information, the form of presenting the data becomes important. How can the data be presented so that it supports good decision making? Visualization (graphics) of the information is the answer to this question.[7]
The best visualizations are ones that expose something new about the underlying patterns and relationships contained in the data. Exposing the patterns and understanding them play a key role in the decision-making process. There are three main criteria to consider in visualizing data.
Recent research on social media analytics has emphasized the need to adopt a business intelligence-based approach to collecting, analyzing, and interpreting social media data.[9][10] Social media presents a promising, albeit challenging, source of data for business intelligence. Customers voluntarily discuss products and companies, giving a real-time pulse of brand sentiment and adoption.[11] Social media is one of the most important tools for marketers in the rapidly evolving media landscape. Firms have created specialized positions to handle their social media marketing. These arguments are in line with the literature on social media marketing that suggests that social media activities are interrelated and influence each other.[12]
Moon and Iacobucci (2022)[13] focused on the marketing applications of social media analytics. Such applications include consumer behavior on social media, social media impact on firm performance, business strategy, product/brand management, social media network analysis, consumer privacy and data security on social media, and fictitious/biased content on social media. In particular, consumer privacy and data security are becoming more and more important in the social media universe, given the increasing risk stemming from social media data breaches. In a similar vein, suspicious social media postings have significantly increased along with the growth of social media. Luca and Servas (2015)[14] reported that firms have a potential incentive to use fake postings when they face increased competition. Therefore, upgrading our ability to identify and monitor suspicious postings (e.g., fake reviews on Yelp) has become an important part of social media platform management.[15]
Muruganantham and Gandhi (2020) proposed a Multi-Criteria Decision Making (MCDM) model to prove that social media users' preferences, sentiments, behavior, and marketing data are related to social media analytics. Internet users are closely connected and show a high degree of mutual influence in social ideology and social networks, which in turn affects business intelligence.[16]
The possible dangers of social media analytics and social media mining in the political arena were revealed in the late 2010s. In particular, the involvement of the data mining company Cambridge Analytica in the 2016 United States presidential election and Brexit have been representative cases that show the arising dangers of linking social media mining and politics. This has raised the question of data privacy for individuals and the legal boundaries to be created for data science companies in relevance to politics in the future. Both of the examples listed below demonstrate a future in which big data can change the game of international politics. It is likely politics and technology will evolve together throughout the next century. In the Cambridge Analytica cases, the effects of social media analytics have resonated throughout the globe through two major world powers, the United States and the UK.
The scandal that followed the American presidential election of 2016 was one involving a three-way relationship between Cambridge Analytica, the Trump campaign, and Facebook. Cambridge Analytica acquired the data of over 87 million[17] unaware Facebook users and analyzed the data for the benefit of the Trump campaign. By creating thousands of data points on 230 million U.S. adults, the data mining company had the potential to analyze which individuals could be swayed into voting for the Trump campaign, and then send messages or advertisements to said targets and influence user mindset. Specific target voters could then be exposed to pro-Trump messages without even being aware of the political influence settling on them. Such a specific form of targeting, in which select individuals are introduced to an above-average amount of campaign advertisement, is referred to as "micro-targeting."[18] There remains great controversy in measuring the amount of influence this micro-targeting had in the 2016 elections. The impact of micro-targeting ads and social media data analytics on politics is unclear as of the late 2010s, as a newly arising field of technology.
While this was a breach of user privacy, data mining and targeted marketing also escaped the public accountability to which social media entities are no longer subject, thereby twisting the democratic election system and allowing it to be dominated by platforms of "user-generated content [that] polarized the media's message."[19]
Analysis of Facebook political groups and postings by the social media analytics firm CounterAction has shown the role of social media giants in protest movements such as attempts to overturn the 2020 United States presidential election and the 2021 United States Capitol attack.[20][21]
During the 2016 Brexit referendum, Cambridge Analytica attracted controversy for its use of data gathered from social media. In a case similar to the American one, Facebook data was acquired by Cambridge Analytica through a breach. There was concern that the firm had used the data to encourage British citizens to vote to leave the European Union in the 2016 EU referendum.[22] After a three-year investigation, it was concluded in 2020 that there had been no involvement in the referendum.[23][22] Besides Cambridge Analytica, several other data companies such as AIQ[24] and the Cambridge University Psychometric Centre[25] were accused of, and then investigated by the British government for, their possible abuse of data to promote unlawful campaign techniques for Brexit.[26][27] The referendum ended with 51.89% of voters supporting the withdrawal of the United Kingdom from the European Union. This final decision impacted politics within the United Kingdom, and sent ripples across political and economic institutions worldwide.[28]
https://en.wikipedia.org/wiki/Social_media_analytics
Social media intelligence (SMI or SOCMINT) comprises the collective tools and solutions that allow organizations to analyze conversations, respond to social signals, and synthesize social data points into meaningful trends and analysis, based on the user's needs. Social media intelligence allows one to utilize intelligence gathering from social media sites, using both intrusive or non-intrusive means, from open and closed social networks.[1] This type of intelligence gathering is one element of OSINT (open-source intelligence).[2]
The term was coined in a 2012 paper written by Sir David Omand, Jamie Bartlett and Carl Miller for the Centre for the Analysis of Social Media, at the London-based think tank, Demos.[3][4][2] The authors argued that social media is now an important part of intelligence and security work, but that technological, analytical, and regulatory changes are needed before it can be considered a powerful new form of intelligence, including amendments to the United Kingdom Regulation of Investigatory Powers Act 2000.[3]
Given the dynamic evolution of social media and social media monitoring, our current understanding of how social media monitoring can help organizations create business value is inadequate. As a result, there is a need to study how organizations can (a) extract and analyze social media data related to their business (Sensing), and (b) utilize external intelligence gained from social media monitoring for specific business initiatives (Seizing).[5]
In Thailand, the Technology Crime Suppression Division not only employs a 30-person team to scrutinize social media for content deemed disrespectful to the monarchy, known as lèse-majesté, but also encourages citizens to report such content. Particularly targeting the youth, they run a "Cyber Scout" program where participants are rewarded for reporting individuals posting material perceived as detrimental to the monarchy.[6]
Instances in Israel involve the arrest of Palestinians by the police for their social media posts. An example includes a 15-year-old girl who posted a Facebook status with the words "forgive me," raising suspicions among Israeli authorities that she might be planning an attack.[6]
In Egypt, a leaked 2014 call for tender from the Ministry of Interior reveals efforts to procure a social media monitoring system to identify leading figures and prevent protests before they occur.[6]
In the United States, ZeroFOX faced criticism for sharing a report with Baltimore officials showcasing how their social media monitoring tool could track riots following Freddie Gray's funeral. The report labeled 19 individuals, including two prominent figures from the #BlackLivesMatter movement, as "threat actors."[6]
In the UK, the Association of Chief Police Officers of England, Wales, and Northern Ireland emphasized the significance of social media in intelligence gathering during anti-fracking protests in 2011. Social media analysis closely monitored protests against the badger cull in 2013, with a 2013 report revealing a team of 17 officers in the National Domestic Extremism Unit scanning public tweets, YouTube videos, Facebook profiles, and other online content from UK citizens.[6]
During the 2016 United States presidential election, the Senate Intelligence Committee released reports containing information about Russia’s use of troll farms to mislead black voters about voting.[7]Also, German researchers in 2010 analyzed Twitter messages regarding the German federal election concluding that Twitter played a role in leading users to a specific political opinion.[8]
In a broad sense, social media refers to a conversational, distributed mode of content generation, dissemination, and communication among communities. Different from broadcast-based traditional and industrial media, social media has torn down the boundaries between authorship and readership, while the information consumption and dissemination process is becoming intrinsically intertwined with the process of generating and sharing information.[9]
An example of how SOCMINT is used to affect political opinions is the Cambridge Analytica scandal. Cambridge Analytica was a company that purchased data from Facebook about its users without the consent or knowledge of Americans. It used this data to build a "psychological warfare tool" to persuade US voters to elect Donald Trump as president in the 2016 election.[10] Christopher Wylie, the whistleblower, reported that personal information was taken in early 2014 and used to build a system that could target US voters with personalized political advertisements. More than 50 million individuals' data was exploited and manipulated.[11][12]
In September 2023, the Philadelphia Police Department began using social media to track criminal activity and stay one step ahead of it, aiming to stop meetups and potential robberies. This new approach has given officers another tool in the field, enabling them to find new information as quickly as possible.[13][14]
Law enforcement agencies worldwide are increasingly employing social media intelligence to enhance their capabilities in both crime prevention and investigation. By analyzing publicly available data from social platforms such as Facebook, Twitter, and Instagram, police can track criminal activities, identify suspects, and even prevent potential crimes before they occur. For instance, the FBI utilizes SOCMINT to monitor threats and investigate criminal activities, including analyzing posts, images, and videos that might signal illegal activities or security concerns.[15]
SOCMINT collects data from both organizations and people on an individual level. It has a variety of different purposes, and though its main goal is to improve national security advancements, there are several other benefits as well. This intelligence can identify patterns, predict trends, gather information in real time, etc.[16] In addition, these aspects have allowed for both improvement within businesses and help for law enforcement.[17]
Artificial Social Networking Intelligence (ASNI) refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how users interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and positive impact.
Google provides many free services and has built an entire media brand with its vast variety of products. Along with data collection, Google also owns two advertising services, Google Ads and Google AdSense. Most of its revenue comes from advertising, not direct sales of its services or products. Google makes money by selling advertising services to advertisers: it provides ad space to websites on Google, and targets ads to consumers of Google services and products. Google can market ads using SOCMINT to collect data from its users and generate revenue.[18]
Research shows that various social media platforms on the Internet such as Twitter and Tumblr (micro-blogging websites), Facebook (a popular social networking website), YouTube (the largest video sharing and hosting website), blogs, and discussion forums are being misused by extremist groups for spreading their beliefs and ideologies, promoting radicalization, recruiting members, and creating online virtual communities sharing a common agenda. Popular microblogging websites such as Twitter are being used as a real-time platform for information sharing and communication during the planning and mobilization of civil unrest-related events.[19][2]
https://en.wikipedia.org/wiki/Social_media_intelligence
Social media mining is the process of obtaining data from user-generated content on social media in order to extract actionable patterns, form conclusions about users, and act upon the information. Mining supports targeting advertising to users or academic research. The term is an analogy to the process of mining for minerals: mining companies sift through raw ore to find the valuable minerals; likewise, social media mining sifts through social media data in order to discern patterns and trends about matters such as social media usage, online behaviour, content sharing, connections between individuals, and buying behaviour. These patterns and trends are of interest to companies, governments and not-for-profit organizations, as such organizations can use the analyses for tasks such as designing strategies or introducing programs, products, processes or services.
Social media mining uses concepts from computer science, data mining, machine learning, and statistics. Mining is based on social network analysis, network science, sociology, ethnography, optimization and mathematics. It attempts to formally represent, measure and model patterns from social media data.[1] In the 2010s, major corporations, governments and not-for-profit organizations began mining to learn about customers, clients and others.
Platforms such as Google and Facebook (partnered with Datalogix and BlueKai) conduct mining to target users with advertising.[2] Scientists and machine learning researchers extract insights and design product features.[3]
Users may not understand how platforms use their data.[4] Users tend to click through Terms of Use agreements without reading them, leading to ethical questions about whether platforms adequately protect users' privacy.
During the 2016 United States presidential election, Facebook allowed Cambridge Analytica, a political consulting firm linked to the Trump campaign, to analyze the data of an estimated 87 million Facebook users to profile voters, creating controversy when this was revealed.[5]
As defined by Kaplan and Haenlein,[6] social media is the "group of internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content." There are many categories of social media including, but not limited to, social networking (Facebook or LinkedIn), microblogging (Twitter), photo sharing (Flickr, Instagram, Photobucket, or Picasa), news aggregation (Google Reader, StumbleUpon, or Feedburner), video sharing (YouTube, MetaCafe), livecasting (Ustream or Twitch), virtual worlds (Kaneva), social gaming (World of Warcraft), social search (Google, Bing, or Ask.com), and instant messaging (Google Talk, Skype, or Yahoo! Messenger).
The first social media website was introduced by GeoCities in 1994. It enabled users to create their own homepages without having a sophisticated knowledge of HTML coding. The first social networking site, SixDegrees.com, was introduced in 1997.[7] Since then, many other social media sites have been introduced, each providing service to millions of people. These individuals form a virtual world in which individuals (social atoms), entities (content, sites, etc.) and interactions (between individuals, between entities, between individuals and entities) coexist. Social norms and human behavior govern this virtual world. By understanding these social norms and models of human behavior and combining them with the observations and measurements of this virtual world, one can systematically analyze and mine social media. Social media mining is the process of representing, analyzing, and extracting meaningful patterns from data in social media, resulting from social interactions. It is an interdisciplinary field encompassing techniques from computer science, data mining, machine learning, social network analysis, network science, sociology, ethnography, statistics, optimization, and mathematics. Social media mining faces grand challenges such as the big data paradox, obtaining sufficient samples, the noise removal fallacy, and the evaluation dilemma.
Social media mining represents the virtual world of social media in a computable way, measures it, and designs models that can help us understand its interactions. In addition, social media mining provides necessary tools to mine this world for interesting patterns, analyze information diffusion, study influence and homophily, provide effective recommendations, and analyze novel social behavior in social media.
Social media mining is used across several industries including business development, social science research, health services, and educational purposes.[8][9] Once the data received goes through social media analytics, it can then be applied to these various fields. Often, companies use the patterns of connectivity that pervade social networks, such as assortativity—the social similarity between users that is induced by influence, homophily, reciprocity, and transitivity.[10] These forces are then measured via statistical analysis of the nodes and the connections between them.[8] Social analytics also uses sentiment analysis, because social media users often relay positive or negative sentiment in their posts.[11] This provides important social information about users' emotions on specific topics.[12][13][14]
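At its simplest, sentiment analysis can be sketched as a lexicon lookup. The word lists below are a toy assumption for illustration; production systems use much larger lexicons or trained models.

```python
# Tiny invented sentiment lexicons.
POSITIVE = {"love", "great", "good", "excellent"}
NEGATIVE = {"hate", "bad", "terrible", "awful"}

def sentiment(text):
    """Classify a post by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))    # positive
print(sentiment("terrible service, I hate it"))  # negative
```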
These three patterns have several uses beyond pure analysis. For example, influence can be used to determine the most influential user in a particular network.[8] Companies would be interested in this information in order to decide who they may hire for influencer marketing. These influencers are determined by recognition, activity generation, and novelty—three requirements that can be measured through the data mined from these sites.[8] Analysts also value measures of homophily: the tendency of two similar individuals to become friends.[10] Users have begun to rely on other users' opinions in order to understand diverse subject matter.[11] These analyses can also help create recommendations for individuals in a tailored capacity.[8] By measuring influence and homophily, online and offline companies are able to suggest specific products for individual consumers and groups of consumers. Social media networks can use this information themselves to suggest to their users possible friends to add, pages to follow, and accounts to interact with.
Modern social media mining is a controversial practice that has led to exponential gains in user growth for tech giants such as Facebook, Inc., Twitter, and Google. Companies such as these, considered "Big Tech", build algorithms that take advantage of user input to understand users' preferences and keep them on the platform as much as possible. These inputs, which can be as simple as time spent on a given screen, provide the data being mined, and lead to companies profiting heavily from using that data to make extremely accurate predictions about user behavior. The growth of platforms accelerated rapidly once these strategies were put in place; most of the largest platforms averaged over 1 billion active users per month as of 2021.[15]
It has been claimed by a multitude of anti-algorithm personalities, like Tristan Harris or Chamath Palihapitiya, that certain companies (specifically Facebook) valued growth above all else, and ignored potential negative impacts from these growth engineering tactics.[16]
At the same time, users have now created their own data arbitrages with the help of their own data, through content monetization and becoming influencers. Users typically have access to a varied set of analytics specific to the people that interact with them on social media, and can use these as building blocks for their own targeting and growth strategies through ads and posts that cater to their audiences. Influencers also commonly promote products and services for established brands, creating one of the largest digital industries: influencer marketing. Instagram, Facebook, Twitter, YouTube, Google, and others have long given access to platform analytics, and allowed third parties to access that information as well, at times unbeknownst even to the user whose data is being viewed or bought.[17]
Social media mining research articles are published in computer science, social science, and data mining conferences and journals:
Conference papers can be found in the proceedings of Knowledge Discovery and Data Mining (KDD), World Wide Web (WWW), the Association for Computational Linguistics (ACL), the Conference on Information and Knowledge Management (CIKM), the International Conference on Data Mining (ICDM), and the Internet Measurement Conference (IMC).
Social media mining is also present at many data management/database conferences such as the ICDE Conference, SIGMOD Conference, and the International Conference on Very Large Data Bases.
https://en.wikipedia.org/wiki/Social_media_mining
A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures.[1] The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks.
Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.[4][5]
The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.
In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.[9]
Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11] In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski,[12] Alfred Radcliffe-Brown,[13][14] and Claude Lévi-Strauss.[15] A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes,[16] J. Clyde Mitchell and Elizabeth Bott Spillius,[17][18] often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6] Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis.[19] In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure.[20][21] Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.[22][23][24]
By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis.[25] Mark Granovetter[26] and Barry Wellman[27] are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30]
Beginning in the late 1990s, social network analysis saw new work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data about online social networks, as well as "digital traces" of face-to-face networks.
In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33] These patterns become more apparent as network size increases. However, a global network analysis[34] of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.[35][36] The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.
At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.
Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.
Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality.[35] In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs.
Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges.[37] Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.
Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.[38]
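The triadic-level notion of balance described above can be illustrated with a short sketch. In Heider's balance theory a signed triad is balanced exactly when the product of its three edge signs is positive; the helper name and sign convention below (+1 friendly, -1 hostile) are illustrative, not a standard API.

```python
def is_balanced(signs):
    """A signed triad is balanced iff the product of its edge signs is +1.

    `signs` maps each unordered pair of actors to +1 (friendly)
    or -1 (hostile)."""
    product = 1
    for s in signs.values():
        product *= s
    return product == 1

# The rivalrous love triangle: A and B both like C but dislike each other.
love_triangle = {("A", "B"): -1, ("A", "C"): +1, ("B", "C"): +1}
print(is_balanced(love_triangle))  # unbalanced

# "The enemy of my enemy is my friend": two negative ties, one positive.
coalition = {("A", "B"): -1, ("A", "C"): -1, ("B", "C"): +1}
print(is_balanced(coalition))      # balanced
```

The unbalanced love triangle is the configuration balance theory predicts will resolve itself, for example by A and B reconciling or by one of them breaking with C.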
In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.[39]
Organizations: Formal organizations are social groups that distribute tasks for a collective goal.[40] Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures.[40] Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.[41]
Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects, reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.[42]
Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups.[43] Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law.[44] The Barabási model of network evolution is an example of a scale-free network.
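The hub-dominated degree distribution of the Barabási-style growth process can be illustrated with a minimal preferential-attachment sketch in plain Python. The function name and parameters (m edges per new node, a small starting clique) are illustrative simplifications of the model, not a reference implementation.

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=42):
    """Grow a Barabási-Albert-style graph: each new node attaches m edges,
    choosing targets with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small clique of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Listing each endpoint once per incident edge makes uniform
    # sampling from this list degree-proportional.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

edges = preferential_attachment(2000)
degree = Counter(v for e in edges for v in e)
hubs = max(degree.values())
print(hubs, sum(degree.values()) / len(degree))  # a few hubs far above the mean
```

Running this shows the signature of a scale-free network: the mean degree stays near 2m while the most-connected "hubs" accumulate degrees an order of magnitude larger.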
Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population.
Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in the social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping).
Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.[45]
Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.[46]
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory.
The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as to share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[47]
In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections.[48] Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters.[49] When two separate clusters possess non-redundant information, there is said to be a structural hole between them.[49] Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes.[49]
Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters.[49] For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.[50]
Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments.[51][52] Other work examines how network grouping of artists can affect an individual artist's auction performance.[53] An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career.
In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies today also make extensive use of such methods.
Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis.
Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks.
The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats.[54]
In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.[55]
Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovations such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages,[56][57] Indian slums,[58] and the lab.[59] Still other experiments have documented the experimental induction of social contagion of voting behavior,[60] emotions,[61] risk perception,[62] and commercial products.[63]
In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.[64][65]
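The referral mechanism behind respondent-driven sampling can be caricatured as a coupon-limited chain through the (unobserved) social network. The sketch below is a toy illustration only: the adjacency list, function name, and coupon limit are invented for the example, and real respondent-driven sampling adds weighting to correct for recruitment bias.

```python
import random

def referral_sample(network, seeds, coupons=3, max_size=10, seed=0):
    """Toy respondent-driven sample: each recruit passes up to `coupons`
    referrals to not-yet-contacted peers until the target size is reached."""
    rng = random.Random(seed)
    sampled, frontier = set(seeds), list(seeds)
    while frontier and len(sampled) < max_size:
        person = frontier.pop(0)
        peers = [p for p in network.get(person, []) if p not in sampled]
        for p in rng.sample(peers, min(coupons, len(peers))):
            if len(sampled) >= max_size:
                break
            sampled.add(p)
            frontier.append(p)
    return sampled

# Illustrative hidden population: an adjacency list of who knows whom.
network = {"s": ["a", "b", "c"], "a": ["d", "e"], "b": ["e", "f"],
           "c": ["g"], "d": [], "e": ["h"], "f": [], "g": [], "h": []}
print(sorted(referral_sample(network, ["s"], max_size=6)))
```

Starting from a single seed, the sample spreads along social ties rather than being drawn from a sampling frame, which is exactly why the method reaches populations that are hard to enumerate.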
The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.[66]
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems.[67]
Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology.[68][69]
In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo,[70] De Nooy,[71] Senekal,[72] and Lotker[73] to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA.
Research in this area studies formal or informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations.[74] Many organizational social network studies focus on teams.[75] Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment,[76] organizational identification,[37] and interpersonal citizenship behaviour.[77]
Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations.[78] This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations.[78] The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.[78]
Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.[79][80][81] In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity.[79][82]
This particular cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales.
In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities.[48] Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress."[83] Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement.
A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking.[84]In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms.[85]By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts.
There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations.[86]However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted.[48]Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career.
Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world.[87] Social network analysis methods have become essential to examining these types of computer-mediated communication.
In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.[88]
Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used to simulate the process of homophily, but they can also serve as a measure of the level of exposure of different groups to each other within a current social network of individuals in a certain area.[89]
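One simple way to measure homophily of the kind described above is to compare the observed share of same-group ties with the share expected if ties ignored group membership. The helper below is a minimal sketch with invented example data; published segregation indices are more elaborate.

```python
def homophily_index(edges, group):
    """Observed fraction of same-group ties minus the fraction expected
    if ties were formed without regard to group (a simple baseline)."""
    same = sum(1 for a, b in edges if group[a] == group[b])
    observed = same / len(edges)
    # Expected share under random mixing over all node pairs.
    nodes = list(group)
    pairs = same_pairs = 0
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            pairs += 1
            same_pairs += group[a] == group[b]
    expected = same_pairs / pairs
    return observed - expected

group = {"a": "x", "b": "x", "c": "y", "d": "y"}
edges = [("a", "b"), ("c", "d"), ("a", "c")]
print(homophily_index(edges, group))  # positive: more same-group ties than chance
```

A value near zero indicates mixing at chance levels, while a strongly positive value indicates segregation along the group attribute.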
https://en.wikipedia.org/wiki/Social_network
Social network analysis (SNA) software is software which facilitates quantitative or qualitative analysis of social networks, by describing features of a network either through numerical or visual representation.
Networks can consist of anything from families,[1] project teams, classrooms, sports teams, legislatures, nation-states, disease vectors, membership on networking websites like Twitter or Facebook, or even the Internet. Networks can consist of direct linkages between nodes or indirect linkages based upon shared attributes, shared attendance at events, or common affiliations.[2] Network features can be at the level of individual nodes, dyads, triads, ties and/or edges, or the entire network. For example, node-level features can include network phenomena such as betweenness and centrality, or individual attributes such as age, sex, or income.[3] SNA software generates these features from raw network data formatted in an edgelist, adjacency list, or adjacency matrix (also called a sociomatrix), often combined with (individual/node-level) attribute data.[4] Though the majority of network analysis software uses a plain text ASCII data format, some software packages contain the capability to utilize relational databases to import and/or store network features.
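The data formats and node-level measures mentioned above can be sketched in a few lines: converting an edgelist into an adjacency matrix (sociomatrix) and computing degree centrality from it. Function names and the tiny example network are illustrative, not drawn from any particular SNA package.

```python
def edgelist_to_adjacency(edges):
    """Convert an undirected edgelist to an adjacency matrix (sociomatrix)."""
    nodes = sorted({v for e in edges for v in e})
    index = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    matrix = [[0] * n for _ in range(n)]
    for a, b in edges:
        matrix[index[a]][index[b]] = 1
        matrix[index[b]][index[a]] = 1
    return nodes, matrix

def degree_centrality(nodes, matrix):
    """Degree centrality: ties incident to a node, normalized by n - 1."""
    n = len(nodes)
    return {v: sum(matrix[i]) / (n - 1) for i, v in enumerate(nodes)}

edges = [("ann", "bob"), ("bob", "cat"), ("bob", "dan")]
nodes, matrix = edgelist_to_adjacency(edges)
print(degree_centrality(nodes, matrix))  # bob is the most central node
```

The same adjacency matrix is the starting point for most other whole-network measures, which is why SNA packages accept it as a native input format alongside edgelists.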
Visual representations of social networks are important to understand network data and convey the result of the analysis.[5] Visualization often also facilitates qualitative interpretation of network data. With respect to visualization, network analysis tools are used to change the layout, colors, size and other properties of the network representation.
Some SNA software can perform predictive analysis.[6] This includes using network phenomena such as a tie to predict individual-level outcomes (often called peer influence or contagion modeling), using individual-level phenomena to predict network outcomes such as the formation of a tie/edge (often called homophily models[7]) or a particular type of triad, or using network phenomena to predict other network phenomena, such as using a triad formation at time 0 to predict tie formation at time 1.
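The last kind of prediction mentioned (a triad at time 0 predicting a tie at time 1) is often operationalized with a common-neighbors score: unlinked pairs that already sit in many open triads are the most likely to close. A minimal sketch, with an invented example graph:

```python
from itertools import combinations

def common_neighbor_scores(edges):
    """Score each unlinked pair by shared neighbors: open triads at time 0
    are the pairs most likely to close into a tie at time 1."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    linked = {frozenset(e) for e in edges}
    scores = {}
    for a, b in combinations(sorted(neighbors), 2):
        if frozenset((a, b)) not in linked:
            scores[(a, b)] = len(neighbors[a] & neighbors[b])
    return scores

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
scores = common_neighbor_scores(edges)
print(max(scores, key=scores.get))  # ('a', 'd'): two shared neighbors
```

Here "a" and "d" share two neighbors (b and c), so the triadic-closure heuristic ranks that pair first as a candidate tie at the next time step.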
https://en.wikipedia.org/wiki/Social_network_analysis_software
A social networking service (SNS), or social networking site, is a type of online social media platform which people use to build social networks or social relationships with other people who share similar personal or career content, interests, activities, backgrounds or real-life connections.[1][2]
Social networking services vary in format and in the number of features. They can incorporate a range of new information and communication tools, operating on desktops and laptops and on mobile devices such as tablet computers and smartphones. They may feature digital photo/video sharing and diary entries online (blogging).[2] Online community services are sometimes considered social-network services by developers and users, though in a broader sense a social-network service usually provides an individual-centered service whereas online community services are group-centered. Generally defined as "websites that facilitate the building of a network of contacts in order to exchange various types of content online," social networking sites provide a space for interaction to continue beyond in-person interactions. These computer-mediated interactions link members of various networks and may help to create, sustain and develop new social and professional relationships.[3]
Social networking sites allow users to share ideas, digital photos and videos, and posts, and to inform others about online or real-world activities and events with people within their social network. While in-person social networking – such as gathering in a village market to talk about events – has existed since the earliest development of towns,[4] the web enables people to connect with others who live in different locations across the globe (dependent on access to an Internet connection to do so).
Depending on the platform, members may be able to contact any other member. In other cases, members can contact anyone they have a connection to, and subsequently anyone that contact has a connection to, and so on.
Facebook had 2.13 billion monthly active users and an average of 1.4 billion daily active users in 2017.[5]
LinkedIn, a career-oriented social-networking service, generally requires that a member personally know another member in real life before they contact them online. Some services require members to have a preexisting connection to contact other members.
During the COVID-19 pandemic, Zoom, a videoconferencing platform, became an integral tool for connecting people around the world and for facilitating many online environments such as school, university, work and government meetings.
The main types of social networking services include category places (such as age, occupation or religion), means to connect with friends (usually with self-description pages), and recommendation systems linked to trust. One can categorize social-network services into four types:[6]
There have been attempts to standardize these services to avoid the need to duplicate entries of friends and interests (see the FOAF standard). A study revealed that India recorded the world's largest growth in social media users in 2013.[7] A 2013 survey found that 73% of U.S. adults use social-networking sites.[8]
The potential for computer networking to facilitate newly improved forms of computer-mediated social interaction was suggested early on.[30] Efforts to support social networks via computer-mediated communication were made in many early online services, including Usenet,[31] ARPANET, LISTSERV, and bulletin board services (BBS). Many prototypical features of social networking sites were also present in online services such as The Source, Delphi, America Online, Prodigy, CompuServe, and The WELL.[32]
Early social networking on the World Wide Web began in the form of generalized online communities such as Theglobe.com (1995),[33] Geocities (1994) and Tripod.com (1995). Many of these early communities focused on bringing people together to interact with each other through chat rooms, and encouraged users to share personal information and ideas via personal web pages by providing easy-to-use publishing tools and free or inexpensive web space. Some communities – such as Classmates.com – took a different approach by simply having people link to each other via email addresses. PlanetAll started in 1996.
In the late 1990s, user profiles became a central feature of social networking sites, allowing users to compile lists of "friends" and search for other users with similar interests. New social networking methods were developed by the end of the 1990s, and many sites began to develop more advanced features for users to find and manage friends.[34] Open Diary, a community for online diarists, invented both friends-only content and the reader comment, two features of social networks important to user interaction.[35]
This newer generation of social networking sites began to flourish with the emergence of SixDegrees in 1997,[2] Open Diary in 1998,[36] Mixi in 1999,[37] Makeoutclub in 2000,[38][39] Cyworld in 2001,[40][2] Hub Culture in 2002, and Friendster and Nexopia in 2003.[41] Cyworld also became one of the first companies to profit from the sale of virtual goods.[42][43] MySpace and LinkedIn were launched in 2003, and Bebo was launched in 2005. Orkut became the first popular social networking service in Brazil (although most of its very first users were from the United States) and quickly grew in popularity in India (Madhavan, 2007).[2] There was a rapid increase in social networking sites' popularity; in 2005, MySpace had more pageviews than Google.[44] Many of these services were displaced by Facebook, which launched in 2004 and became the largest social networking site in the world in 2009.[45][46]
The term social media was first used in 2004 and is often used to describe social networking services.[47][48]
Web-based social networking services make it possible to connect people who share interests and activities across political, economic, and geographic borders.[49] Through e-mail and instant messaging, online communities are created where a gift economy and reciprocal altruism are encouraged through cooperation. Information is suited to a gift economy, as information is a nonrival good and can be gifted at practically no cost.[50][51] Scholars have noted that the term "social" cannot account for the technological features of social network platforms alone.[52] Hence, the level of a network's sociability should be determined by the actual performance of its users. According to the communication theory of uses and gratifications, an increasing number of individuals are looking to the Internet and social media to fulfill cognitive, affective, personal integrative, social integrative, and tension-free needs. With Internet technology as a supplement to fulfill needs, it is in turn affecting everyday life, including relationships, school, church, entertainment, and family.[53] Companies are using social media as a way to learn about potential employees' personalities and behavior. In numerous situations, a candidate who might otherwise have been hired has been rejected due to offensive or otherwise unseemly photos or comments posted to social networks or appearing on a newsfeed.
Facebook and other social networking tools are increasingly the focus of scholarly research. Scholars in many fields have begun to investigate the impact of social networking sites, investigating how such sites may play into issues of identity, politics, privacy,[54] social capital, youth culture, and education.[55] Research has also suggested that individuals add offline friends on Facebook to maintain contact, and often this blurs the lines between work and home lives.[56] Users from around the world also utilise social networking sites as an alternative news source.[57] While social networking sites have arguably changed how we access the news,[58] users tend to have mixed opinions about the reliability of content accessed through these sites.[59]
According to a study in 2015, 63% of the users of Facebook or Twitter in the USA consider these networks to be their main source of news, with entertainment news being the most seen. In the times of breaking news, Twitter users are more likely to stay invested in the story. In some cases when the news story is more political, users may be more likely to voice their opinion on a linked Facebook story with a comment or like, while Twitter users will just follow the site's feed and retweet the article.[60]In online social networks, the veracity and reliability of news may be diminished due to the absence of traditional media gatekeepers.[61]
A 2015 study shows that 85% of people aged 18 to 34 use social networking sites for their purchase decision-making, while over 65% of people aged 55 and over rely on word of mouth.[62] Several websites are beginning to tap into the power of the social networking model for philanthropy. Such models provide a means for connecting otherwise fragmented industries and small organizations without the resources to reach a broader audience with interested users.[63] Social networks are providing a different way for individuals to communicate digitally. These communities of hypertexts allow for the sharing of information and ideas, an old concept placed in a digital environment. In 2011, HCL Technologies conducted research that showed that 50% of British employers had banned the use of social networking sites/services during office hours.[64][65]
Research has provided us with mixed results as to whether or not a person's involvement in social networking can affect their feelings of loneliness. Studies have indicated that how a person chooses to use social networking can change their feelings of loneliness in either a negative or positive way. Some companies with mobile workers have encouraged their workers to use social networking to feel connected. Educators are using social networking to stay connected with their students, whereas individuals use it to stay connected with their close relationships.[66]
Social networking sites can be used by consumers to create a social media firestorm, which is "a digital artifact created by large numbers of user comments of multiple purposes (condemnation and support) and tones (aggressive and cordial) that appear rapidly and recede shortly after".[1]
Each social networking user is able to create a community that centers around a personal identity they choose to create online.[67] In his book Digital Identities: Creating and Communicating the Online Self,[68] Rob Cover argues that social networking's foundation in Web 2.0 and high-speed networking shifts online representation to one which is both visual and relational to other people, complexifying the identity process for younger people and creating new forms of anxiety.[68] In 2016, news reports stated that excessive usage of SNS sites may be associated with an increase in the rates of depression, to almost triple the rate for non-SNS users. Experts worldwide[which?] have said that people who use SNS more have higher levels of depression than those who use SNS less.[69] At least one study went as far as to conclude that the negative effects of Facebook usage are equal to or greater than the positive effects of face-to-face interactions.[70]
According to a recent article from Computers in Human Behavior, Facebook has also been shown to lead to issues of social comparison. Users are able to select which photos and status updates to post, allowing them to portray their lives in flattering ways.[71] These updates can lead to other users feeling like their lives are inferior by comparison.[72] Users may feel especially inclined to compare themselves to other users with whom they share similar characteristics or lifestyles, leading to a fairer comparison.[71] Motives for these comparisons can be associated with the goal of improving oneself by looking at profiles of people who one feels are superior, especially when their lifestyle is similar and attainable.[71] One can also self-compare to make oneself feel superior by looking at the profiles of users who one believes to be worse off.[71] However, a study by the Harvard Business Review shows that these goals often lead to negative consequences, as use of Facebook has been linked with lower levels of well-being; mental health has been shown to decrease due to the use of Facebook.[72] Computers in Human Behavior emphasizes that these feelings of poor mental health have been suggested to cause people to take time off from their Facebook accounts; this action is called "Facebook fatigue" and has been common in recent years.[71]
Usage of social networking has contributed to a new form of abusive communication, and academic research has highlighted a number of social-technological explanations for this behaviour. These include the anonymity afforded by interpersonal communications,[73] factors such as boredom or attention seeking,[74] and the effects of more polarised online debate.[75] This abuse is evident in the prevalence of online cyberbullying and online trolling. There has also been a marked increase in political violence and abuse through social media platforms. For instance, one study by Ward and McLoughlin found that 2.57% of all messages sent to UK MPs on Twitter contained abusive content.[75]
According to boyd and Ellison's 2007 article, "Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life", social networking sites share a variety of technical features that allow individuals to construct a public/semi-public profile, articulate a list of other users with whom they share a connection, and view their list of connections within the system. The most basic of these are visible profiles with a list of "friends" who are also users of the site.[55] In an article entitled "Social Network Sites: Definition, History, and Scholarship," boyd and Ellison adopt Sunden's (2003) description of profiles as unique pages where one can "type oneself into being".[2] A profile is generated from answers to questions, such as age, location, interests, etc. Some sites allow users to upload pictures, add multimedia content or modify the look and feel of the profile. Others, e.g., Facebook, allow users to enhance their profile by adding modules or "Applications".[2] Many sites allow users to post blog entries, search for others with similar interests, and compile and share lists of contacts. User profiles often have a section dedicated to comments from friends and other users. To protect user privacy, social networks typically have controls that allow users to choose who can view their profile, contact them, add them to their list of contacts, and so on.[citation needed]
There is a trend towards more interoperability between social networks, led by technologies such as OpenID and OpenSocial. In most mobile communities, mobile phone users can now create their own profiles, make friends, participate in chat rooms, create chat rooms, hold private conversations, share photos and videos, and share blogs by using their mobile phone. Some companies provide wireless services that allow their customers to build their own mobile community and brand it; one of the most popular wireless services for social networking in North America and Nepal is Facebook Mobile.
Recently, Twitter has also introduced fact-check labels to combat misinformation; these primarily targeted misinformation about the coronavirus, but also had an impact in debunking false claims by Donald Trump during the 2020 election.[citation needed]
Social media platforms may allow users to change their user name (or "handle", distinct from the "display name"), which could change the URL to their profile. Users are advised to do so with caution, since, depending on implementation, it could break back links from others' posts and comments, as well as external back links.[76]
While the popularity of social networking consistently rises,[78] new uses for the technology are frequently being observed. Today's technologically savvy population requires convenient solutions to their daily needs.[79] At the forefront of emerging trends in social networking sites are the concepts of "real-time web" and "location-based" services. Real-time allows users to contribute content, which is then broadcast as it is being uploaded; the concept is analogous to live radio and television broadcasts. Twitter set the trend for "real-time" services, wherein users can broadcast to the world what they are doing, or what is on their minds, within a 140-character limit. Facebook followed suit with its "Live Feed", where users' activities are streamed as soon as they happen. While Twitter focuses on words, Clixtr, another real-time service, focuses on group photo sharing, wherein users can update their photo streams with photos while at an event. Facebook, however, remains the largest photo sharing site, with over 250 billion photos as of September 2013.[80] In April 2012, the image-based social media network Pinterest became the third largest social network in the United States.[81]
Companies have begun to merge business technologies and solutions, such as cloud computing, with social networking concepts. Instead of connecting individuals based on social interest, companies are developing interactive communities that connect individuals based on shared business needs or experiences. Many provide specialized networking tools and applications that can be accessed via their websites, such as LinkedIn. Other companies, such as Monster.com, have been steadily developing a more "socialized" feel to their career center sites to harness some of the power of social networking sites. These more business-related sites have their own nomenclature for the most part, but the most common naming conventions are "vocational networking sites" or "vocational media networks", with the former more closely tied to individual networking relationships based on social networking principles.[citation needed]
Foursquare gained popularity as it allowed users to check into places that they are frequenting at that moment. Gowalla is another such service that functions in much the same way that Foursquare does, leveraging the GPS in phones to create a location-based user experience. Clixtr, though in the real-time space, is also a location-based social networking site, since events created by users are automatically geotagged, and users can view events occurring nearby through the Clixtr iPhone app. Recently, Yelp announced its entrance into the location-based social networking space through check-ins with its mobile app; whether or not this becomes detrimental to Foursquare or Gowalla is yet to be seen, as it is still considered a new space in the Internet technology industry.[82]
One popular use for this new technology is social networking between businesses. Companies have found that social networking sites such as Facebook and Twitter are great ways to build their brand image. According to Jody Nimetz, author of Marketing Jive,[83] there are five major uses for businesses and social media: to create brand awareness, as an online reputation management tool, for recruiting, to learn about new technologies and competitors, and as a lead generation tool to intercept potential prospects.[83] These companies are able to drive traffic to their own online sites while encouraging their consumers and clients to have discussions on how to improve or change products or services. As of September 2013, 71% of online adults use Facebook, 17% use Instagram, 21% use Pinterest, and 22% use LinkedIn.[84]
One other use that is being discussed is the use of social networks in the science communities. Julia Porter Liebeskind et al. have published a study on how new biotechnology firms are using social networking sites to share exchanges in scientific knowledge.[85]They state in their study that by sharing information and knowledge with one another, they are able to "increase both their learning and their flexibility in ways that would not have been possible within a self-contained hierarchical organization". Social networking is allowing scientific groups to expand their knowledge base and share ideas, and without these new means of communicating their theories might become "isolated and irrelevant". Researchers use social networks frequently to maintain and develop professional relationships.[86]They are interested in consolidating social ties and professional contact, keeping in touch with friends and colleagues and seeing what their own contacts are doing. This can be related to their need to keep updated on the activities and events of their friends and colleagues in order to establish collaborations on common fields of interest and knowledge sharing.[87]
Social networks are also used to communicate scientists' research results[88] and as a public communication tool to connect people who share the same professional interests; their benefits can vary according to the discipline.[89] The most interesting aspects of social networks for professional purposes are their potential for disseminating information and their ability to reach and multiply professional contacts exponentially. Social networks like Academia.edu, LinkedIn, Facebook, and ResearchGate give the possibility to join professional groups and pages, to share papers and results, to publicize events, and to discuss issues and create debates.[87] Academia.edu is extensively used by researchers, where they follow a combination of social networking and scholarly norms.[90] ResearchGate is also widely used by researchers, especially to disseminate and discuss their publications,[91] and it seems to attract an audience that is wider than just other scientists.[92] The usage of ResearchGate and Academia.edu in different academic communities has increasingly been studied in recent years.[93]
The advent of social networking platforms may also be impacting the ways in which learners engage with technology in general. For a number of years, Prensky's (2001) dichotomy between Digital Natives and Digital Immigrants was considered a relatively accurate representation of the ease with which people of a certain age range (in particular those born before and after 1980) use technology. Prensky's theory has been largely disproved, however, not least on account of the burgeoning popularity of social networking sites, and other metaphors such as White and Le Cornu's "Visitors" and "Residents" (2011) have gained greater currency. The use of online social networks by school libraries is also increasingly prevalent; they are being used to communicate with potential library users, as well as to extend the services provided by individual school libraries. Social networks and their educational uses are of interest to many researchers. According to Livingstone and Brake (2010), "Social networking sites, like much else on the Internet, represent a moving target for researchers and policymakers."[94] The Pew Research Center's Pew Internet project conducted a US-wide survey in 2009 and reported in February 2010 that 47% of American adults use a social networking website.[95] The same survey found that 73% of online teenagers use SNS, an increase from 65% in 2008 and 55% in 2006.[95] Recent studies have shown that social network services provide opportunities within professional education, curriculum education, and learning, though there are constraints in this area. Research, especially in Africa, has disclosed that the use of social networks among students can affect their academic life negatively, since such use constitutes a distraction and students tend to invest a good deal of time in these technologies.[citation needed]
Albayrak and Yildirim (2015) examined the educational use of social networking sites. They investigated students' involvement in Facebook as a Course Management System (CMS) and the findings of their study support that Facebook as a CMS has the potential to increase student involvement in discussions and out-of-class communication among instructors and students.[96]
Professional use of social networking services refers to the employment of a network site to connect with other professionals within a given field of interest. These types of social networking services are referred to as "career-oriented social networking markets" (CSNM).[9] LinkedIn is one example: a social networking website geared towards companies and industry professionals looking to make new business contacts or keep in touch with previous co-workers, affiliates, and clients. LinkedIn provides not only professional social use but also encourages people to inject their personality into their profile, making it more personal than a resume.[97] Similar websites (also geared towards companies and industry professionals looking for work opportunities) include AngelList, XING, Goodwall, The Dots,[98] Jobcase, Bark.com, ...[99] Various freelance marketplace websites (which focus on freelance work) also exist. There are also a number of other employment websites focused on international volunteering, notably VolunteerMatch, Idealist.org and All for Good.[100] National WWOOF networks, finally, allow for searching for homestays on organic farms.[101]
Now other social network sites are also being used in this manner. Twitter has become a mainstay for professional development as well as promotion,[102] and online SNSs support both the maintenance of existing social ties and the formation of new connections. Much of the early research on online communities assumed that individuals using these systems would be connecting with others outside their preexisting social group or location, liberating them to form communities around shared interests, as opposed to shared geography.[103] Other researchers have suggested that the professional use of network sites produces "social capital". For individuals, social capital allows a person to draw on resources from other members of the networks to which he or she belongs.[104] These resources can take the form of useful information, personal relationships, or the capacity to organize groups. As well, networks within these services can be established or built by joining special interest groups that others have made, or by creating one and asking others to join.[105]
According to Doering, Beach, and O'Brien, a future English curriculum needs to recognize a significant shift in how adolescents are communicating with each other.[106] Curriculum uses of social networking services can also include sharing curriculum-related resources. Educators tap into user-generated content to find and discuss curriculum-related content for students. Responding to the popularity of social networking services among many students, teachers are increasingly using social networks to supplement teaching and learning in traditional classroom environments. This way they can provide new opportunities for enriching existing curriculum through creative, authentic and flexible, non-linear learning experiences.[107] Some social networks, such as English, baby! and LiveMocha, are explicitly education-focused and couple instructional content with an educational peer environment.[108] The new Web 2.0 technologies built into most social networking services promote conferencing, interaction, creation, and research on a global scale, enabling educators to share, remix, and repurpose curriculum resources. In short, social networking services can become research networks as well as learning networks.[109]
Educators and advocates of new digital literacies are confident that social networking encourages the development of transferable, technical, and social skills of value in formal and informal learning.[94] In a formal learning environment, goals or objectives are determined by an outside department or agency. Tweeting, instant messaging, or blogging enhances student involvement. Students who would not normally participate in class are more apt to partake through social network services. Networking allows participants the opportunity for just-in-time learning and higher levels of engagement.[110] The use of SNSs allows educators to enhance the prescribed curriculum. When learning experiences are infused into a website students use every day for fun, students realize that learning can and should be a part of everyday life.[111] It does not have to be separate and unattached.[112][unreliable source?]
Informal learning consists of the learner setting the goals and objectives. It has been claimed that media no longer just influence human culture; they are human culture.[113]With such a high number of users between the ages of 13 and 18, a number of skills are developed. Participants hone technical skills in choosing to navigate through social networking services. This includes elementary items such as sending an instant message or updating a status. The development of new media skills are paramount in helping youth navigate the digital world with confidence.
Social networking services foster learning through what Jenkins (2006) describes as a "participatory culture".[114] A participatory culture consists of a space that allows engagement, sharing, mentoring, and an opportunity for social interaction. Participants of social network services avail of this opportunity. Informal learning, in the forms of participatory and social learning online, is an excellent tool for teachers to sneak in material and ideas that students will identify with; in this secondary manner, students will learn skills that would normally be taught in a formal setting in the more interesting and engaging environment of social learning.[115][unreliable source?] Sites like Twitter provide students with the opportunity to converse and collaborate with others in real time.[citation needed]
Social networking services provide a virtual "space" for learners. James Gee (2004) suggests that affinity spaces instantiate participation, collaboration, distribution, dispersion of expertise, and relatedness.[116] Registered users share and search for knowledge, which contributes to informal learning.[citation needed]
In the past, social networking services were viewed as a distraction that offered no educational benefit. Blocking these social networks was a form of protection for students against wasting time, bullying, and invasions of privacy. In an educational setting, Facebook, for example, is seen by many instructors and educators as a frivolous, time-wasting distraction from schoolwork, and it is not uncommon for it to be banned in junior high or high school computer labs.[112] Cyberbullying has become an issue of concern with social networking services. According to the UK Children Go Online survey of 9- to 19-year-olds, a third have received bullying comments online.[117] To avoid this problem, many school districts/boards have blocked access to social networking services such as Facebook, MySpace, and Twitter within the school environment. Social networking services often include a lot of personal information posted publicly, and many believe that sharing personal information is a window into privacy theft. Schools have taken action to protect students from this. It is believed that this outpouring of identifiable information, and the easy communication vehicle that social networking services provide, open the door to sexual predators, cyberbullying, and cyberstalking.[118] In contrast, however, 70% of social media-using teens and 85% of adults believe that people are mostly kind to one another on social network sites.[95]
Recent research suggests that there has been a shift in blocking the use of social networking services. In many cases, the opposite is occurring as the potential of online networking services is being realized. It has been suggested that if schools block them [social networking services], they are preventing students from learning the skills they need.[119]Banning social networking [...] is not only inappropriate but also borderline irresponsible when it comes to providing the best educational experiences for students.[120]Schools and school districts have the option of educating safe media usage as well as incorporatingdigital mediainto the classroom experience, thus preparing students for the literacy they will encounter in the future.[citation needed]
A cyberpsychology research study conducted by Australian researchers demonstrated that a number of positive psychological outcomes are related to Facebook use. These researchers established that people can derive a sense of social connectedness and belongingness in the online environment. Importantly, this online social connectedness was associated with lower levels of depression and anxiety, and greater levels of subjective well-being. These findings suggest that the nature of online social networking determines the outcomes of online social network use.[121][122]
Social networks are being used by activists as a means of low-cost grassroots organizing. Extensive use of an array of social networking sites enabled organizers of the 2009 National Equality March to mobilize an estimated 200,000 participants to march on Washington with a cost savings of up to 85% per participant over previous methods.[123] The August 2011 England riots were similarly considered to have escalated and been fuelled by this type of grassroots organization.[citation needed]
A rise in social network use is being driven by college students using the services to network with professionals for internship and job opportunities. Many studies have examined the effectiveness of networking online in a college setting; one notable example is by Phipps Arabie and Yoram Wind, published in Advances in Social Network Analysis.[124] Many schools have implemented online alumni directories, which serve as makeshift social networks that current and former students can turn to for career advice. However, these alumni directories tend to suffer from an oversupply of advice-seekers and an undersupply of advice providers. One new social networking service, Ask-a-peer, aims to solve this problem by enabling advice seekers to offer modest compensation to advisers for their time. LinkedIn is another such resource: it helps alumni, students, and unemployed individuals look for work, connect with others professionally, and network with companies.
In addition, employers have been found to use social network sites to screen job candidates.[125]
A social network hosting service is a web hosting service that specifically hosts the user creation of web-based social networking services, alongside related applications.[citation needed]
A social trade network is a service that allows participants interested in specific trade sectors to share related content and personal opinions.[citation needed]
Few social networks charge money for membership. In part, this may be because social networking is a relatively new service and its value has not been firmly established in customers' minds. Companies such as Myspace and Facebook sell online advertising on their sites. Their business model is based upon large membership counts, and charging for membership would be counterproductive.[126] Some believe that the deeper information that the sites hold on each user will allow much better targeted advertising than any other site can currently provide.[127] In recent times, Apple has been critical of the Google and Facebook model, in which users are treated as a product and a commodity and their data is sold for marketing revenue.[128] Social networks operate under an autonomous business model, in which a social network's members serve dual roles as both the suppliers and the consumers of content. This is in contrast to a traditional business model, where the suppliers and consumers are distinct agents. Revenue is typically gained in the autonomous business model via advertisements, but subscription-based revenue is possible when membership and content levels are sufficiently high.[129]
People use social networking sites to meet new friends, find old friends, or locate people who share their problems or interests, a practice called niche networking. More and more relationships and friendships are being formed online and then carried into an offline setting. Psychologist and University of Hamburg professor Erich H. Witte says that relationships which start online are much more likely to succeed. In this regard, there are studies which predict tie strength among friends[130] on social networking websites. One online dating site claims that 2% of all marriages begin at its site, the equivalent of 236 marriages a day. Other sites claim one in five relationships begin online.[citation needed]
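Tie-strength prediction studies of this kind often start from simple structural features of the friendship graph. As a minimal, hedged sketch (the friendship data and names below are invented for illustration, not taken from any cited study), one widely used predictor is the number of mutual friends two users share:

```python
# Illustrative sketch: predict tie strength from mutual friends.
# The friendship graph below is a made-up example.
friends = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol"},
}

def common_neighbors(u, v):
    """Count mutual friends of u and v (excluding u and v themselves)."""
    return len((friends[u] & friends[v]) - {u, v})

# alice and carol share two mutual friends (bob and dave),
# suggesting a stronger tie than a pair with no mutual friends.
print(common_neighbors("alice", "carol"))
```

Real studies combine many more signals (interaction frequency, message content, profile overlap), but the common-neighbors count illustrates the graph-based core of the approach.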
Users do not necessarily share with others the content which is of most interest to them, but rather that which projects a good impression of themselves.[77] While many agree that social networking has had a significant impact on social interaction, there remains substantial disagreement as to whether its impact is entirely positive. A number of scholars have researched the negative effects of Internet communication as well. These researchers have contended that this form of communication is an impoverished version of conventional face-to-face social interaction, and therefore produces negative outcomes such as loneliness and depression for users who rely on social networking entirely. By engaging solely in online communication, interactions between communities, families, and other social groups are weakened.[131]
Social networking services have led to many issues regarding privacy, bullying, social anxiety, and potential for misuse.
Social networking services are increasingly being used in legal and criminal investigations. The information posted on sites such as MySpace and Facebook has been used by police (forensic profiling), probation, and university officials to prosecute users of said sites. In some situations, content posted on MySpace has been used in court.[132]
Facebook is increasingly being used by school administrations and law enforcement agencies as a source of evidence against student users. As the number one online destination for college students, the site allows users to create profile pages with personal details. These pages can be viewed by other registered users from the same school, who often include resident assistants and campus police who have signed up for the service.[133] One UK police force has sifted through pictures from Facebook and arrested people who had been photographed in a public place holding a weapon such as a knife (carrying a weapon in a public place is illegal).[134]
Social networking is more recently being used by various government agencies. Social networking tools serve as a quick and easy way for the government to solicit suggestions from the public and to keep the public updated on its activity. However, this comes with a significant risk of abuse, for example to cultivate a culture of fear such as that outlined in Nineteen Eighty-Four or THX 1138.
The Centers for Disease Control demonstrated the importance of vaccinations on the popular children's site Whyville, and the National Oceanic and Atmospheric Administration has a virtual island on Second Life where people can explore caves or the effects of global warming.[135] Likewise, NASA has taken advantage of a few social networking tools, including Twitter and Flickr. The NSA is taking advantage of them all.[136] NASA is using such tools to aid the Review of U.S. Human Space Flight Plans Committee, whose goal is to "ensure that the nation is on a vigorous and sustainable path to achieving its boldest aspirations in space".[137]
The use of social networking services in an enterprise context presents the potential of having a major impact on the world of business and work.[138] Social networks connect people at low cost; this can be beneficial for entrepreneurs and small businesses looking to expand their contact bases. These networks often act as a customer relationship management tool for companies selling products and services. Companies can also use social networks for advertising in the form of banners and text ads. Since businesses operate globally, social networks can make it easier to keep in touch with contacts around the world. Applications for social networking sites have extended toward businesses, and brands are creating their own high-functioning sites, a sector known as brand networking. The idea is that a brand can build its consumer relationships by connecting consumers to the brand image on a platform that provides them relevant content, elements of participation, and a ranking or scoring system. Brand networking is a new way to capitalize on social trends as a marketing tool. The power of social networks is beginning to permeate the internal culture of businesses, where they are finding uses for collaboration, file sharing and knowledge transfer. The term "enterprise social software" is becoming increasingly popular for these types of applications.[citation needed]
Many social networks provide an online environment for people to communicate and exchange personal information for dating purposes. Intentions can vary from looking for a one-time date to seeking short-term or long-term relationships.[139] Most of these social networks, just like online dating services, require users to give out certain pieces of information, usually including a user's age, gender, location, interests, and perhaps a picture. Releasing very personal information is usually discouraged for safety reasons.[140] This allows other users to search or be searched by some sort of criteria, while people can still maintain a degree of anonymity similar to most online dating services. Online dating sites are similar to social networks in the sense that users create profiles to meet and communicate with others, but their activities on such sites are for the sole purpose of finding a person of interest to date. Social networks do not have to be for dating; many users simply use them for keeping in touch with friends and colleagues.[141]
However, an important difference between social networks and online dating services is that online dating sites usually require a fee, whereas social networks are free.[142] This difference is one reason the online dating industry is seeing a massive decrease in revenue, as many users opt to use social networking services instead. Many popular online dating services such as Match.com, Yahoo Personals, and eHarmony.com are seeing a decrease in users, whereas social networks like MySpace and Facebook are experiencing an increase. The number of Internet users in the United States who visit online dating sites fell from a peak of 21% in 2003 to 10% in 2006.[143] Whether because of the cost of the services, the variety of users with different intentions, or some other reason, social networking sites are quickly becoming the new way to find dates online.[citation needed]
The National School Boards Association reports that almost 60% of students who use social networking talk about education topics online, and more than 50% talk specifically about schoolwork. Yet the vast majority of school districts have stringent rules against nearly all forms of social networking during the school day, even though students and parents report few problem behaviors online. Social networks focused on supporting relationships between teachers and their students are now used for learning, educators' professional development, and content sharing. HASTAC is a collaborative social network space for new modes of learning and research in higher education, K-12, and lifelong learning; Ning supports teachers; TermWiki, TeachStreet and other sites are being built to foster relationships that include educational blogs, portfolios, formal and ad hoc communities, as well as communication such as chats, discussion threads, and synchronous forums. These sites also have content sharing and rating features. Social networks are also emerging as online yearbooks, both public and private. One such service is MyYearbook, which allows anyone from the general public to register and connect. A new trend is private-label yearbooks accessible only by students, parents, and teachers of a particular school, similar to Facebook's beginning within Harvard.[citation needed]
The use of virtual currency systems inside social networks creates new opportunities for global finance. Hub Culture operates a virtual currency, Ven, used for global transactions among members, product sales[144] and financial trades in commodities and carbon credits.[145][146] In May 2010, carbon pricing contracts were introduced to the weighted basket of currencies and commodities that determines the floating exchange value of Ven. The introduction of carbon to the currency's price calculation made Ven the first and only currency linked to the environment.[147]
Social networks are beginning to be adopted by healthcare professionals as a means to manage institutional knowledge, disseminate peer-to-peer knowledge, and highlight individual physicians and institutions. The advantage of using a dedicated medical social networking site is that all the members are screened against the state licensing board list of practitioners.[148] A new trend is emerging with social networks created to help their members with various physical and mental ailments.[149] For people suffering from life-altering diseases or chronic health conditions, companies such as HealthUnlocked and PatientsLikeMe offer their members the chance to connect with others dealing with similar issues and share experiences. For alcoholics and addicts, SoberCircle gives people in recovery the ability to communicate with one another and strengthen their recovery through the encouragement of others who can relate to their situation. DailyStrength is also a website that offers support groups for a wide array of topics and conditions, including the support topics offered by PatientsLikeMe and SoberCircle. Some social networks aim to encourage healthy lifestyles in their users. SparkPeople and HealthUnlocked offer community and social networking tools for peer support during weight loss. Fitocracy and QUENTIQ are focused on exercise, enabling users to share their own workouts and comment on those of other users. Other aspects of social network usage include analyzing data from existing social networks (such as Twitter) to discover large crowd-concentration events based on statistical analysis of tweet locations, and disseminating that information to, for example, mobility-challenged individuals so they can avoid the affected areas and optimize their journeys in an urban environment.[150]
Social networking sites have recently demonstrated their value in social and political movements.[151] In the Egyptian revolution, Facebook and Twitter both played an allegedly pivotal role in keeping people connected to the revolt. Egyptian activists have credited social networking sites with providing a platform for planning protests and sharing news from Tahrir Square in real time. By presenting a platform for thousands of people to instantaneously share videos, mainly of events featuring brutality, social networking can be a vital tool in revolutions.[152] On the flip side, social networks enable government authorities to easily identify and repress protestors and dissidents.[153] Another political application of social media is promoting the involvement of younger generations in politics and ongoing political issues.[154]
Perhaps the most significant political application of social media was Barack Obama's 2008 election campaign. It was the first of its kind, successfully incorporating social media into a winning campaign strategy and changing the way political campaigns are run. His campaign won by engaging everyday people and empowering volunteers, donors, and advocates through social networks, text messaging, email, and online videos.[155] Obama's social media campaign was vast: his campaign boasted 5 million 'friends' across over 15 social networking sites, with over 3 million on Facebook alone.[156] Another significant success of the campaign was online video, with nearly 2,000 YouTube videos put online and receiving over 80 million views.[156]
In 2007, when Obama first announced his candidacy, there was no such thing as an iPhone or Twitter. A year later, however, Obama was sending voting reminders to thousands of people through Twitter, showing just how fast social media moves. Obama's campaign had to stay current to incorporate social media successfully, as social media is most effective in real time.[157]
In the build-up to the 2012 presidential election, observers watched how strong the influence of social media would be following the 2008 campaigns, in which Obama's winning campaign had been social media-heavy while McCain's campaign never really grasped social media. Just as John F. Kennedy was the first president who really understood television, Obama has been called the first president to fully understand the power of social media.[158] Obama recognized that social media is about creating relationships and connections, and he used it to the advantage of his presidential election campaigns, dominating his opponents in the social media space.
Other political campaigns have followed Obama's successful social media campaigns, recognizing the power of social media and incorporating it as a key factor in their strategies, for example Donald Trump's 2016 presidential campaign. Dan Pfeiffer, Obama's former digital and social media guru, commented that Donald Trump is "way better at the internet than anyone else in the GOP which is partly why he is winning".[159]
Research has shown that 66% of social media users actively engage in political activity online, and, as with many other behaviors, online activities translate into offline ones.[158] Research from the MacArthur Research Network on Youth and Participatory Politics states that young people who are politically active online are twice as likely to vote as those who are not.[158] Political applications of social networking sites are therefore crucial, particularly for engaging with young people, who are perhaps the least versed in politics and the most active on social networking sites. Social media is therefore a very effective way for politicians to connect with a younger audience through their political campaigns.[160]
On June 28, 2020, The New York Times released an article sharing the findings of two researchers who studied the impact of TikTok, a video-sharing and social networking application, on political expression. The application, besides being a creative space for self-expression, has been used maliciously to spread disinformation ahead of US President Donald Trump's Tulsa rally in Oklahoma and to amplify footage of police brutality at Black Lives Matter protests.[161]
Crowdsourcing social media platforms, such as Design Contest, Arcbazar, and Tongal, combine groups of professional freelancers, such as designers, and help them communicate with business owners interested in their suggestions. This process is often used to subdivide tedious work or to fund-raise for startup companies and charities, and can also occur offline.[162]
There are a number of projects that aim to develop free and open source software for social networking services. These technologies are often referred to as social engine or social networking engine software.
The following is a list of the largest social networking services, in order by number of active users, as of January 2024, as published by Statista:[163]
* Platforms have not published updated user figures in the past 12 months; figures may be out of date and less reliable.
** Figure uses daily active users, so the monthly active user number is likely higher.
https://en.wikipedia.org/wiki/Social_networking_service
Social software, also known as social apps or social platforms, includes communication and interactive tools that are often based on the Internet. Communication tools typically handle capturing, storing and presenting communication, usually written but increasingly including audio and video as well. Interactive tools handle mediated interactions between a pair or group of users. They focus on establishing and maintaining a connection among users, facilitating the mechanics of conversation and talk.[1] Social software generally refers to software that makes collaborative behaviour, the organisation and moulding of communities, self-expression, social interaction and feedback possible for individuals. Another element of the existing definition of social software is that it allows for the structured mediation of opinion between people, in a centralized or self-regulating manner. The most improved area for social software is that Web 2.0 applications can promote co-operation between people and the creation of online communities more than ever before. The opportunities offered by social software are instant connections and opportunities to learn.[2] An additional defining feature of social software is that, apart from interaction and collaboration, it aggregates the collective behaviour of its users, allowing not only crowds to learn from an individual but individuals to learn from the crowds as well.[3] Hence, the interactions enabled by social software can be one-to-one, one-to-many, or many-to-many.[2]
An instant messaging application or client allows one to communicate with another person over a network in real time, in relative privacy. One can add friends to a contact or buddy list by entering the person's email address or messenger ID. If the person is online, their name will typically be listed as available for chat. Clicking on their name will activate a chat window with space to write to the other person, as well as read their reply.
Internet Relay Chat (IRC) and other online chat technologies allow users to join and communicate with many people at once, publicly. Users may join a pre-existing chat room or create a new one about any topic. Once inside, you may type messages that everyone else in the room can read, as well as respond to/from others. Often there is a steady stream of people entering and leaving. Whether you are in another person's chat room or one you've created yourself, you are generally free to invite others online to join you in that room.
The goal of collaborative software, also known as groupware (such as Moodle, landing pages, enterprise architecture tools, and SharePoint), is to allow subjects to share data, such as files, photos, and text, for the purpose of project work or schoolwork. The intent is to first form a group and then have it collaborate. Clay Shirky defines social software as "software that supports group interaction". Since groupware supports group interaction (once the group is formed), it can be considered social software.
Originally modeled after the real-world paradigm of electronic bulletin boards of the world before the internet was widely available, internet forums allow users to post a "topic" for others to review. Other users can view the topic and post their own comments in a linear fashion, one after the other. Most forums are public, allowing anybody to sign up at any time. A few are private, gated communities where new members must pay a small fee to join.
Forums can contain many different categories in a hierarchy, typically organized according to topics and subtopics. Other features include the ability to post images or files or to quote another user's post with special formatting in one's own post. Forums often grow in popularity until they can boast several thousand members posting replies to tens of thousands of topics continuously.
There are various standards and claimants for the market leaders of each software category. Various add-ons may be available, including translation and spelling correction software, depending on the expertise of the operators of the bulletin board. In some industry areas, the bulletin board has its own commercially successful achievements: free and paid hardcopy magazines as well as professional and amateur sites.
Current successful services have combined new tools with the older newsgroup and mailing list paradigm to produce hybrids. Also, as a service catches on, it tends to adopt characteristics and tools of other services that compete. Over time, for example, wiki user pages have become social portals for individual users and may be used in place of other portal applications.
In the past, web pages were only created and edited by web designers that had the technological skills to do so. Currently there are many tools that can assist individuals with web content editing. Wikis allow novices to be on the same level as experienced web designers because wikis provide easy rules and guidelines. Wikis allow all individuals to work collaboratively on web content without having knowledge of any markup languages. A wiki is made up of many content pages that are created by its users. Wiki users are able to create, edit, and link related content pages together. The user community is based on the individuals that want to participate to improve the overall wiki. Participating users are in a democratic community where any user can edit any other user's work.[4]
Blogs, short for web logs, are like online journals for a particular person. The owner will post a message periodically, allowing others to comment. Topics often include the owner's daily life, views on politics, or about a particular subject important to them.
Blogs mean many things to different people, ranging from "online journal" to "easily updated personal website." While these definitions are technically correct, they fail to capture the power of blogs as social software. Beyond being a simple homepage or an online diary, some blogs allow comments on the entries, thereby creating a discussion forum. They also have blogrolls (i.e., links to other blogs which the owner reads or admires) and indicate their social relationship to those other bloggers using the XFN social relationship standard. Pingback and trackback allow one blog to notify another blog, creating an inter-blog conversation. Blogs engage readers and can build a virtual community around a particular person or interest. Blogging has also become fashionable in business settings among companies that use enterprise social software.
Simultaneous editing of a text or media file by different participants on a network was first demonstrated on research systems as early as the 1970s, but is now practical on a global network. Collaborative real-time editing is now utilized, for example, in film editing and in cloud-based office applications.
Many prediction market tools have become available (including some free software) that make it easy to predict and bet on future events. This software allows a more formal version of social interaction, and it qualifies as a robust type of social software.
Social network services allow people to come together online around shared interests, hobbies or causes. For example, some sites provide meeting organization facilities for people who practice the same sports. Other services enable business networking and social event meetup.
Some large wikis have effectively become social network services by encouraging user pages and portals.
Social network search engines are a class of search engines that use social networks to organize, prioritize or filter search results. There are two subclasses of social network search engines: those that use explicit social networks and those that use implicit social networks.
Lacking trustworthy explicit information about such viewpoints, this type of social network search engine mines the web to infer the topology of online social networks. For example, the NewsTrove search engine infers social networks from content (sites, blogs, pods, and feeds) by examining, among other things, subject matter, link relationships, and grammatical features.
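The core of implicit-network inference can be illustrated with a deliberately simplified sketch: treat two entities as linked whenever they co-occur in the same document, with the co-occurrence count as the edge weight. The documents and names below are invented for illustration; a real engine such as NewsTrove uses far richer signals (link structure, grammar) than bare co-occurrence.

```python
from itertools import combinations
from collections import Counter

# Each "document" is reduced to the set of names mentioned in it
# (an assumed, toy corpus for this sketch).
documents = [
    {"alice", "bob"},
    {"alice", "bob", "carol"},
    {"carol", "dave"},
]

# Count name co-occurrences; each counted pair is an inferred,
# weighted edge of the implicit social network.
edges = Counter()
for names in documents:
    for pair in combinations(sorted(names), 2):
        edges[pair] += 1

print(edges[("alice", "bob")])   # alice and bob co-occur in two documents
print(edges[("carol", "dave")])  # carol and dave co-occur in one
```

The resulting weighted graph can then be used to prioritize or filter search results, which is exactly the role the inferred topology plays in a social network search engine.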
Deliberative social networks are webs of discussion and debate for decision-making purposes. They are built for the purpose of establishing sustained relationships between individuals and their government. They rely upon informed opinion and advice that is given with a clear expectation of outcomes.
Commercial social networks are designed to support business transactions and to build trust between an individual and a brand. They rely on opinions about a product, ideas to make the product better, and enabling customers to participate with brands in promoting development, service delivery, and a better customer experience.[citation needed]
A social guide recommends places to visit or contains information about places in the real world, such as coffee shops, restaurants, and wifi hotspots.
Some web sites allow users to post their list of bookmarks or favorite websites for others to search and view. These sites can also be used to meet others through sharing common interests. Additionally, many social bookmarking sites allow users to browse through websites and content shared by other users based on popularity or category. As such, use of social bookmarking sites is an effective tool for search engine optimization and social media optimization for webmasters.[5]
Enterprise bookmarking is a method of tagging and linking any information using an expanded set of tags to capture knowledge about data. It collects and indexes these tags in a web-infrastructure server residing behind the firewall. Users can share knowledge tags with specified people or groups, shared only inside specific networks, typically within an organization.
Social viewing allows multiple users to aggregate online videos from multiple sources and view them together in a synchronized viewing experience.
Social cataloging, much like social bookmarking, is aimed toward academics. It allows the user to post a citation for an article found on the internet or a website, in an online database like Academic Search Premier or LexisNexis Academic, for a book found in a library catalog, and so on. These citations can be organized into predefined categories, or into new categories defined by the user through the use of tags. This method allows academics researching or interested in similar areas to connect and share resources.
This application allows visitors to keep track of their collectibles, books, records, and DVDs. Users can share their collections. Recommendations can be generated based on user ratings, using statistical computation and network theory. Some sites offer a buddy system, as well as virtual "check-outs" of items for borrowing among friends. Folksonomy, or tagging, is implemented on most of these sites.
Social online storage applications allow their users to collaboratively create file archives containing files of any type. Files can either be edited online or from a local computer which has access to the storage system. Such systems can be built upon existing server infrastructure or leverage idle resources by applying P2P technology. Such systems are social because they allow public file distribution and direct file sharing with friends.
Social network analysis tools analyze the data connection graphs within social networks, and the information flow across those networks, to identify groups (such as cliques or key influencers) and trends. They fall into two categories: professional research tools, such as Mathematica, used by social scientists and statisticians, and consumer tools, such as Wolfram Alpha,[6][7] which emphasize ease of use.
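One of the simplest measures such tools compute is degree centrality: a node connected to many others is a candidate "key influencer". As a hedged sketch (the edge list below is a made-up example, not output from any named tool):

```python
# Toy degree-centrality computation over an assumed friendship edge list.
edges = [("ann", "ben"), ("ann", "cat"), ("ann", "dan"), ("ben", "cat")]

# Count how many edges touch each node (the node's degree).
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# The highest-degree node is flagged as a likely key influencer.
influencer = max(degree, key=degree.get)
print(influencer, degree[influencer])
```

Professional tools go much further (betweenness, clique enumeration, flow analysis), but degree centrality illustrates the graph-based reasoning they all share.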
Virtual worlds are services where it is possible to meet and interact with other people in a virtual environment reminiscent of the real world; thus the term virtual reality. Typically, the user manipulates an avatar through the world, interacting with others using chat or voice chat.
MMOGs are virtual worlds (also known as virtual environments) that add various sorts of point systems, levels, competition, and winners and losers to virtual world simulation. Massively multiplayer online role-playing games (MMORPGs) are a combination of role-playing video games and massively multiplayer online games.
Another development is worlds that are less game-like, or not games at all. Games have points, winners and losers. Instead, some virtual worlds are more like social networking services such as MySpace and Facebook, but with 3D simulation features.
Very often a real economy emerges in these worlds, extending the non-physical service economy within the world to service providers in the real world. Experts can design dresses or hairstyles for characters, go on routine missions for them and so on, and be paid in game money to do so. This emergence has resulted in expanding social possibility and also in increased incentives to cheat. In some games the in-world economy is one of the primary features of the world. Some MMOG companies even employ economists full-time to monitor their in-game economic systems.
There are many other applications with social software characteristics that facilitate human connection and collaboration in specific contexts. Social project management and e-learning applications are among these.
Various analyst firms have attempted to list and categorize the major social software vendors in the marketplace. Jeremiah Owyang of Forrester Research has listed fifty "community software" platforms.[8] Independent analyst firm Real Story Group has categorized 23 social software vendors, which it evaluates head-to-head.[9]
Use of social software for politics has also expanded drastically, especially over 2004–2006, to include a wide range of social software, often closely integrated with services like phone trees and deliberative democracy forums and run by a candidate, party or caucus.
Open politics, a variant of open-source governance, combines aspects of the free software and open content movements, promoting decision-making methods claimed to be more open, less antagonistic, and more capable of determining what is in the public interest with respect to public policy issues. It is a set of best practices from citizen journalism, participatory democracy and deliberative democracy, informed by e-democracy and netroots experiments. It applies an argumentation framework for issue-based argument and a political philosophy that advocates applying the philosophies of the open-source and open-content movements to democratic principles, enabling any interested citizen to add to the creation of policy, as with a wiki document. Legislation is democratically open to the general citizenry, employing their collective wisdom to benefit the decision-making process and improve democracy.[10] Open politics encompasses open government principles, including those for public participation and engagement, such as the use of IdeaScale, Google Moderator, Semantic MediaWiki, GitHub, and other software.[11]
Collective forms of online journalism have emerged more or less in parallel, in part to keep the political spin in check.
Communication tools are generally asynchronous. By contrast, interactive tools are generally synchronous, allowing users to communicate in real time (phone, net phone, video chat) or near-synchronously (IM, text chat).
Communication involves the content of talk, speech or writing, whereas interaction involves the interest users establish in one another as individuals. In other words, a communication tool may want to make access and searching of text both simple and powerful. An interactive tool may want to present as much of a user's expression, performance and presence as possible. The organization of texts and providing access to archived contributions differs from the facilitation of interpersonal interactions between contributors enough to warrant the distinction in media.[citation needed]
Emerging technological capabilities to more widely distribute hosting and support much higher bandwidth in real time are bypassing central content arbiters in some cases.[citation needed]
Viewed broadly, virtual presence or telepresence means being present via intermediate technologies, usually radio, telephone, television or the internet. In addition, it can denote apparent physical appearance, such as voice, face and body language.
More narrowly, the term virtual presence denotes presence at World Wide Web locations, which are identified by URLs. People who are browsing a web site are considered to be virtually present at that web location. Virtual presence is social software in the sense that people meet on the web by chance or intentionally. The ubiquitous (in the web space) communication transfers behavior patterns from the real world and virtual worlds to the web. Research[12] has demonstrated effects[13] of online indicators.
Social software may be better understood as a set of debates or design choices, rather than any particular list of tools. Broadly conceived, there are many older media, such as mailing lists and Usenet fora, that qualify as "social". However, most users of this term restrict its meaning to more recent software genres such as blogs and wikis. Others suggest that the term social software is best used not to refer to a single type of software, but rather to the use of two or more modes of computer-mediated communication that result in "community formation."[14] In this view, people form online communities by combining one-to-one (e.g. email and instant messaging), one-to-many (web pages and blogs) and many-to-many (wikis) communication modes.[15] Some groups schedule real-life meetings and so become "real" communities of people that share physical lives.
Most definers of social software agree that it seems to facilitate "bottom-up" community development. The system is classless and promotes those with abilities. Membership is voluntary, reputations are earned by winning the trust of other members, and the community's missions and governance are defined by the members themselves.[16]
Communities formed by "bottom-up" processes are often contrasted to the less vibrant collectivities formed by "top-down" software, in which users' roles are determined by an external authority and circumscribed by rigidly conceived software mechanisms (such as access rights). Given small differences in policies, the same type of software can produce radically different social outcomes. For instance, Tiki Wiki CMS Groupware has a fine-grained permission system of detailed access control, so the site administrator can, on a page-by-page basis, determine which groups can view, edit or view the history. By contrast, MediaWiki avoids per-user controls, to keep most pages editable by most users, and puts more information about users currently editing in its recent changes pages. The result is that Tiki can be used both by community groups who embrace the social paradigm of MediaWiki and by groups who prefer to have more content control.[citation needed]
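The contrast between fine-grained, per-page access control and a flat, mostly-open model can be sketched abstractly. This is an illustrative model only, not Tiki's or MediaWiki's actual configuration format:

```python
# Fine-grained model: each page carries its own ACL mapping actions to
# the groups allowed to perform them (page names and groups are invented).
fine_grained_acl = {
    "HomePage": {"view": {"everyone", "editors"}, "edit": {"editors"}},
    "Minutes":  {"view": {"editors"},             "edit": {"editors"}},
}

def allowed(group, action, page, acl):
    """Check whether a group may perform an action on a given page."""
    return group in acl.get(page, {}).get(action, set())

# Flat model: one site-wide rule, e.g. any visitor may edit any page.
def allowed_flat(group, action, page):
    return True
```

The design choice is exactly the one described above: the first model gives an administrator per-page control, while the second trades that control for maximum openness.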
By design, social software reflects the traits of social networks and is consciously designed so that social network analysis can work with a very compatible database. All social software systems create links between users, as persistent as the identity those users choose. Through these persistent links, a permanent community can be formed out of a formerly epistemic community. The ownership and control of these links (who is linked and who is not) is in the hands of the user. These links are asymmetrical: one person might link to another, but that person might not link to the first.[17] The links are also functional, not decorative: users can choose, for example, not to receive any content from people they are not connected to. Wikipedia user pages are a good example, and often contain extremely detailed information about the person who constructed them, including everything from their mother tongue to their moral purchasing preferences.
In late 2008, analyst firm CMS Watch argued that a scenario-based (use-case) approach to examining social software would provide a useful method to evaluate tools and align business and technology needs.[18]
Methods and tools for the development of social software are sometimes summarized under the term social software engineering. However, this term is also used to describe lightweight and community-oriented development practices.[19]
Constructivist learning theorists such as Vygotsky, Leidner and Jarvenpaa have theorized that the process of expressing knowledge aids its creation, and that conversations benefit the refinement of knowledge. Conversational knowledge management software fulfills this purpose because conversations, e.g. questions and answers, become the source of relevant knowledge in the organization.[20] Conversational technologies are also seen as tools to support both individual knowledge workers and work units.[21]
Many advocates of social software assume, and even actively argue, that users create actual communities. They have adopted the term "online communities" to describe the resulting social structures.
Christopher Allen supported this definition and traced the core ideas of the concept back through computer-supported cooperative or collaborative work (CSCW) in the 1990s, groupware in the 1970s and 1980s, to Engelbart's "augmentation" (1960s) and Bush's "memex" (1940s). Although he identifies a "lifecycle" to this terminology that appears to reemerge each decade in a different form, this does not necessarily mean that social software is simply old wine in new bottles.[22]
The augmentation capabilities of social software were demonstrated in early internet applications for communication, such as e-mail, newsgroups, groupware and virtual communities. In the current phase of Allen's lifecycle, these collaborative tools add a capability "that aggregates the actions of networked users." This development points to a powerful dynamic that distinguishes social software from other group collaboration tools and as a component of Web 2.0 technology. Capabilities for content and behavior aggregation and redistribution present some of the more important potentials of this medium.[citation needed] In the next phase, academic experiments, social constructivism and the open-source software movement are expected to be notable influences.
Clay Shirky traces the origin of the term "social software" to Eric Drexler's 1987 discussion of "hypertext publishing systems" like the subsequent World Wide Web, and how systems of this kind could support software for public critical discussion, collaborative development, group commitment, and collaborative filtering of content based on voting and rating.[23][24]
Social technologies (or conversational technologies) is a term used by organizations (particularly network-centric organizations). It describes technology that allows for the storage and creation of knowledge through collaborative writing.
In 1945, Vannevar Bush described a hypertext-like device called the "memex" in his The Atlantic Monthly article "As We May Think".[25]
In 1962, Douglas Engelbart published his seminal work, "Augmenting Human Intellect: A Conceptual Framework." In this paper, he proposed using computers to augment training. With his colleagues at the Stanford Research Institute, Engelbart started to develop a computer system to augment human abilities, including learning. Debuting in 1968, the system was simply called the oNLine System (NLS).[26]
In the same year, J. C. R. Licklider presented the initial concept of a global information network in his series of memos entitled "On-Line Man-Computer Communication", written in August 1962. However, the actual development of the internet must be credited to Lawrence G. Roberts of MIT,[27] along with Leonard Kleinrock, Robert Kahn and Vinton Cerf.
In 1971, the MITRE Corporation began a year-long demonstration of the TICCIT system among Reston, Virginia cable television subscribers. Interactive television services included informational and educational demonstrations using a touch-tone telephone. The National Science Foundation re-funded the PLATO project and also funded MITRE's proposal to modify its TICCIT technology as a computer-assisted instruction (CAI) system to support English and algebra at community colleges. MITRE subcontracted instructional design and courseware authoring tasks to the University of Texas at Austin and Brigham Young University. Also during this year, Ivan Illich described computer-based "learning webs" in his book Deschooling Society.[28]
In 1980, Seymour Papert at MIT published Mindstorms: Children, Computers, and Powerful Ideas (New York: Basic Books). This book inspired a number of books and studies on "microworlds" and their impact on learning. BITNET was founded by a consortium of US and Canadian universities. It allowed universities to connect with each other for educational communications and e-mail. At its peak in 1991, it had over 500 member organizations and over 3,000 nodes. Its use declined as the World Wide Web grew.
In 1986, Tony Bates published "The Role of Technology in Distance Education",[29] reflecting on ways forward for e-learning. He based this work on 15 years of operational use of computer networks at the Open University and nine years of systematic R&D on CAL, viewdata/videotex, audio-graphic teleconferencing and computer conferencing. Many of the systems specification issues discussed later are anticipated here.[30]
Though prototyped in 1983, the first version of Computer Supported Intentional Learning Environments (CSILE) was installed in 1986 on a small network of Cemcorp ICON computers at an elementary school in Toronto, Canada. CSILE included text and graphical notes authored by different user levels (students, teachers, others), with attributes such as comments and thinking types which reflect the role of the note in the author's thinking. Thinking types included "my theory", "new information", and "I need to understand". CSILE later evolved into Knowledge Forum.[31]
In 1989,Tim Berners-Lee, then a young British engineer working at CERN in Switzerland, circulated a proposal for an in-house online document sharing system which he described as a "web of notes with links." After the proposal was grudgingly approved by his superiors, he called the new system the World Wide Web.
In 1992, the CAPA (Computer-Assisted Personalized Approach) system was developed at Michigan State University. It was first used in a 92-student physics class in the fall of 1992. Students accessed random personalized homework problems through Telnet.
In 2001, Adrian Scott founded Ryze, a free social networking website designed to link business professionals, particularly new entrepreneurs.
In February 2002, the suvi.org Addressbook started its service, an early service for connecting people. The idea was simply to have an up-to-date address book and not to lose contact with friends. Other people around the globe had the same idea; Friendster, Facebook and many other services were successors to this.
In April 2002, Jonathan Abrams created his profile on Friendster.[32]
In 2003, Hi5, LinkedIn,[33] MySpace, and XING were launched.
In February 2004, Facebook was launched.
In 2004, Levin (in Allen 2004, sec. 2000s) acknowledged that many characteristics of social software (hyperlinks, weblog conversation discovery and standards-based aggregation) "build on older forms." Nevertheless, "the difference in scale, standardization, simplicity and social incentives provided by web access turn a difference in degree to a difference in kind." Key technological factors in computer, network and information technologies underlying this difference in kind are filtered hypertext, ubiquitous web/computing, continuous internet connectivity, cheap, efficient and small electronics, content syndication strategies (RSS) and others. Additionally, the convergence of several major information technology systems for voice, data and video into a single system makes for expansive computing environments with far-reaching effects.
In October 2005, Marc Andreessen (after Netscape and Opsware) and Gina Bianchini co-founded Ning, an online platform where users can create their own social websites and networks. Ning now runs more than 275,000 networks, and is one of the "white label" social networking providers, often being compared to Kickapps, Brightcove, rSitez and Flux.[34] StudiVZ was launched in November 2005.
In 2009, the Army's Program Executive Office - Command, Control, and Communications Tactical (PEO-C3T) founded milSuite, capturing the concepts of wikis, YouTube, blogging, and connecting with other members of the DOD behind a secure firewall. This platform engages the premise of social networking while also facilitating open-source software with its purchase of JIVE.
Social media has been criticized for having negative externalities, such as privacy harms, misinformation and hate speech, and harm to minors.[35]These externalities arise from the nature of the platform, including the ease of sharing content, due to the platforms' need to maximize engagement.[36]
Social media has been adopted in the workplace to foster collaboration, but there has also been criticism that privacy concerns, time wasting, and multi-tasking challenges make managers' jobs more difficult, and that employee concentration may be reduced.[37]
As information supply increases, the average time spent evaluating individual content has to decrease. Eventually, much communication is summarily ignored, based on very arbitrary and rapid heuristics that filter out information, for example by category. Bad information crowds out the good, much the way spam often crowds out potentially useful unsolicited communications.
Cyber bullying is different from conventional bullying. It refers to the threat or abuse of a victim through the internet and electronic devices. Victims of cyber bullying can be targeted over social media, email, or text messages. These attacks are typically aggressive and repetitive in nature. Internet bullies can create multiple email, social media, and other accounts to attack a victim, and freely available email accounts can let a bully use various identities for communicating with the victim. Rates of cyber bullying have grown sharply with the spread of technology among younger people.[38]
According to cyber bullying statistics published in 2014, 25 percent of teenagers report that they have experienced repeated bullying via their cell phone or on the internet. 52 percent of young people report being cyber bullied. Embarrassing or damaging photographs taken without the knowledge or consent of the subject has been reported by 11 percent of adolescents and teens. Of the young people who reported cyber bullying incidents against them, 33 percent of them reported that their bullies issued online threats. Often, both bullies and cyber bullies turn to hate speech to victimize their target. One-tenth of all middle school and high school students have been on the receiving end of "hate terms" hurled against them. 55 percent of all teens who use social media have witnessed outright bullying via that medium. 95 percent of teens who witnessed bullying on social media report that others, like them, have ignored the behavior.[39]
https://en.wikipedia.org/wiki/Social_software
The social web is a set of social relations that link people through the World Wide Web.[1] The social web encompasses how websites and software are designed and developed in order to support and foster social interaction.[2]: 5 These online social interactions form the basis of much online activity, including online shopping,[3] education, gaming and social networking services. The social aspect of Web 2.0 communication has been to facilitate interaction between people with similar tastes.[4] These tastes vary depending on who the target audience is and what they are looking for. For individuals working in public relations, the job is constantly changing, and much of the impact comes from the social web.[5] The influence held by the social network is large and ever-changing.
As people's activities and communication on the Web increase, information about their social relationships becomes more available.[6] Social networking services such as Facebook enable people and organizations to contact each other with persistent, human-friendly names.[citation needed] Today hundreds of millions of Internet users are using thousands of social websites to stay connected with their friends, discover new "friends", and share user-created content such as photos, videos, social bookmarks, and blogs, even through mobile platform support for cell phones. By the second quarter of 2017, Facebook reported 1.86 billion members;[7] in 2008, MySpace occupied 100 million users and YouTube had more than 100 million videos and 2.9 million user channels,[8] and these numbers are consistently growing. The social Web is quickly reinventing itself, moving beyond simple web applications that connect individuals to an entirely new way of life.[2]: 18
Like the telephone, the Internet was not created as a tool for social interaction, but evolved to become a part of everyday life.[9] However, social interaction has been facilitated by the web for nearly the entire duration of its existence, as indicated by the continuing success of social software, which at its core centers on virtually connecting individuals with others whom they already have relationships with in the physical world.[2]: 13 Email dates from the 1960s, and was one of the first social applications to connect multiple individuals through a network, enabling social interaction by allowing users to send messages to one or more people.[2]: 13 This application, which some have argued may be the most successful social software ever, was actually used to help build the Internet.[2]: 13 The web got its start as a large but simple Bulletin Board System (BBS) that allowed users to exchange software, information, news, data, and other messages with one another.[10] Ward Christensen invented the first public BBS in the late 1970s, and another online community, The WELL, rose to popularity in the late '80s and early '90s.[2]: 13 The Usenet, a global discussion system similar to a BBS that enabled users to post public messages, was conceived in 1979;[10] the system found tremendous popularity in the 1980s as individuals posted news and articles to categories called "newsgroups".[2]: 13 By the late 1990s, personal web sites that allowed individuals to share information about their private lives with others were increasingly widespread.[10] Of this fertile period of the web's development, its creator Sir Tim Berners-Lee wrote:
The Web is more a social creation than a technical one. I designed it for a social effect—to help people work together—and not as a technical toy. The ultimate goal of the Web is to support and improve our weblike existence in the world. We clump into families, associations, and companies. We develop trust across the miles and distrust around the corner. What we believe, endorse, agree with, and depend on is representable and, increasingly, represented on the Web.[11]
The term "social Web" was coined by Howard Rheingold for this network in 1996. Rheingold was quoted in an article for Time on his website "Electric Minds", described as a "virtual community center" that listed online communities for users interested in socializing through the Web, saying that "The idea is that we will lead the transformation of the Web into a social Web".[12][13]
The social Web developed in three stages from the beginning of the '90s up to the present day, transforming from simple one-way communication web pages to a network of truly social applications.[2]: 14 During the "one-way conversation" era of online applications in the mid '90s, most of the nearly 18,000 web pages in existence were "read only": "static web sites" with information flowing exclusively from the person or organization that ran the site. Although the web was used socially at this time, communication was difficult, achieved only through individuals reacting to each other's posts on one web page by responding to them on their own personal web page.[2]: 14 In the mid '90s, Amazon and other pioneers made great progress in advancing online social interaction by discovering how to link databases to their web sites in order to store information as well as to display it; in concert with other innovations, this led to the rise of read-write web applications, allowing for a "two-way conversation" between users and the individual or organization running the site.[2]: 14 As these web applications became more sophisticated, people became more comfortable using and interacting with them, bandwidth increased, and access to the Internet became more prevalent, causing designers to begin implementing new features that allowed users to communicate not only with a site's publishers, but with others who visited that site as well.[2]: 15 Despite being a small step forward technologically, it was a huge step socially, enabling group interaction for the first time, and it has been claimed that this social exchange between many individuals is what separates a web application from a social Web application.[2]: 16
The first social networking services, including Classmates.com (1995) and SixDegrees.com (1997), were introduced prior to social media sites.[14] It has been argued that the transition towards social media sites began after the world's first online interactive diary community, Open Diary, was founded on December 19, 1998; still online over ten years later, it has hosted over five million digital diaries.[15] Open Diary successfully brought online diary writers together into one community as an early social networking service, and it was during this time that the term "weblog" was coined (later shortened to the ubiquitous "blog" after one blogger jokingly turned weblog into the sentence "we blog").[10] Some claim that this marked the beginning of the current era of social media, with "social media" being a term that entered into both common usage and prominence as high-speed Internet became increasingly available, growing in popularity as a concept and leading to the rise of social networking services such as Myspace (2003) and Facebook (2004).[10] It has been argued that this trend towards social media "can be seen as an evolution back to the Internet's roots, since it re-transforms the World Wide Web to what it was initially created for: a platform to facilitate information exchange between users."[10]
The social web is a way of life: many people visit social networking services at least once per day, and in 2008 the average visit to MySpace lasted around twenty-six minutes (the length of a sitcom).[2]: 18 Furthermore, the astoundingly rapid growth of the social Web since the '90s is not projected to slow down anytime soon: with less than 20% of the world's population using the Internet, the social Web is felt by some to still be in its infancy.[2]: 20 The line between social networking and social media is becoming increasingly blurred as sites such as Facebook and Twitter further incorporate photo, video, and other functionalities typical of social media sites into users' public profiles, just as social media sites have been integrating features characteristic of social networking services into their own online frameworks.[14]: 233 One notable change brought about by this merging is the transformation of social web applications into egocentric software that puts people at the center of the application.[2]: 16 Although there had been discussion about a sense of community on the web prior to these innovations, modern social web software makes a wider set of social interactions available to the user, such as "friending" and "following" individuals, or even sending them virtual gifts or kisses.[2]: 16 Social Web applications are typically built using object-oriented programming, utilizing combinations of several programming languages, such as Ruby, PHP, Python, ASP.NET and/or Java. Often APIs are utilized to tie non-social websites to social websites, one example being Campusfood.com.[16]
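A tie-in of that kind often amounts to little more than constructing calls to a social site's public endpoints. As a minimal sketch (the endpoint, URLs and parameter names below are hypothetical, not any real site's API):

```python
from urllib.parse import urlencode

def build_share_link(share_endpoint, page_url, title):
    """Build a 'share this page' link of the kind social sites commonly expose.

    The endpoint and parameter names are illustrative placeholders.
    """
    query = urlencode({"url": page_url, "title": title})
    return f"{share_endpoint}?{query}"

# A non-social shop page handing its content off to a social site.
link = build_share_link("https://social.example.com/share",
                        "https://shop.example.com/item/42",
                        "Great gadget")
```

Real integrations add authentication and richer payloads, but the basic pattern — serializing the host site's content into a request against the social site's API — is the same.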
Both blogs and wikis are prime examples of collaboration through the Internet, a feature of the group interaction that characterizes the social Web.[2]: 16 Blogs are used as BBSs for the 21st century, on which people can post discussions,[17] whereas wikis are constructed and edited by anyone who is granted access to them. Members of both are able to see the recent discussions and changes made, and for many blogs and wikis, such as Wikipedia, this is true even for non-members. Blogs and wikis allow users to share information and educate one another, and these social interactions are focused on content and meaning.[18] Blogs and wikis are used both by those writing them and by those who reference them as resources.[19] Blogs allow members to share ideas and other members to comment on those ideas, while wikis facilitate group collaboration: both of these tools open a gateway of communication in which social interaction allows the web to develop.[20] These sites are used by teachers and students alike to accomplish the goal of sharing education, and working in a community with other scholars enables users to see different interpretations of similar subjects, as well as to share resources that might not otherwise be available to them.[21]
Most social networking services offer mobile apps and internet phone connectivity.[14]: 233 Popular social web sites such as Facebook Mobile, Orkut, Twitter, and YouTube have led the way for other sites to enable their users to post and share new content with others, update their statuses, and receive their friends' updates and uploaded content via mobile platforms.[14]: 233 The central aim, both for sites offering these mobile services and for those who use them, is for the user to maintain contact with their friends online at all times; it allows them to update their profiles and to communicate with each other even when they are away from a computer.[14]: 233 It is predicted that this trend will continue in the future, not only as other sites follow suit to offer similar services, but as those services are extended to other mobile devices that social web users will carry with them in years to come.[14]: 233 Social web mobile applications can also allow for augmented reality gaming and experiences; examples include SCVNGR and Layar.[22]
Web sites that are not built around social interaction nevertheless add features that enable discussion and collaboration, out of an interest in expanding their user bases; this trend is projected to continue in the coming years.[14]: 233 As early as 1995, electronic retailer Amazon had implemented such features, especially the customer review, to great success. Joshua Porter, author of Designing for the Social Web, writes:
At Amazon, customer reviews act like a magnet, pulling people down the page. That's the content people want...They keep scrolling until they hit the reviews, which in some cases are up to 6000 pixels down from the top of the page! Nobody seems to mind...Customer reviews allow people to learn about a product from the experience of others without any potentially biased seller information. No wonder everyone wanted to shop at Amazon. They had information that no other site had: they had the Truth. And that truth, interestingly enough, arose from simply aggregating the conversation of normal people.[2]: 4
These customer reviews contribute valuable information that individuals seek out, and are written by users for free simply out of a desire to share their experiences with a product or service with others; the quality and value of each review is further determined by other users, who rate them based upon whether or not they found the feedback helpful, "weeding out the bad (by pushing them to the bottom [of the page])."[2]: 4
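That "weeding out" is, at bottom, a sort over reader votes. A minimal sketch (the review data below is invented) ranks reviews by net helpfulness so the weakest sink to the bottom of the page:

```python
# Invented sample reviews with reader "helpful" / "unhelpful" vote counts.
reviews = [
    {"text": "Broke in a week",  "helpful": 12, "unhelpful": 3},
    {"text": "Does the job",     "helpful": 2,  "unhelpful": 9},
    {"text": "Excellent value",  "helpful": 40, "unhelpful": 5},
]

# Rank by net helpfulness votes, best first.
ranked = sorted(reviews, key=lambda r: r["helpful"] - r["unhelpful"], reverse=True)
```

Production systems weight votes more carefully (for example, adjusting for how many total votes a review has received), but the principle is the same: the community's ratings, not the seller, decide what readers see first.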
Non-retailer, special-interest websites have also implemented social web features to broaden their appeal. One example is Allrecipes.com, a community of 10 million cooks who share ideas and recipes with one another.[23] In addition to exchanging recipes through the website, users are able to rate and post reviews of recipes they have tried, and to provide suggestions as to how to improve or alter them;[24] according to the website, "The ratings/reviews...are a valuable resource to our community because they show how the members and their families feel about a recipe. Does the recipe get raves—or does it never get made again? Your opinion counts".[24] This feedback is used to evaluate and classify recipes based upon how successfully they passed through the site's "editorial process" and to what extent they were approved by site members, potentially resulting in a "Kitchen approved" status comparable to Wikipedia's "good article" nomination process.[24] The site has also augmented its services by including social features such as user blogs, and by connecting with other social networking/media sites like Facebook[25] to expand its presence on the social Web. The recipes found on this website become part of the social web as other members rank them, comment, and provide feedback as to why a recipe was good or bad, or share ways in which they would change it.
The integration of "social" features has also begun to extend into non-Web media forms including print and broadcast. Increasingly prevalent mobile devices have offered a platform for media companies to create hybridised media forms which draw upon the social web, such as the Fango mobile app offered by Australian partnership Yahoo!7, which combines traditional TV programming with live online discussions and existing social networking channels.
Artists use the social Web to share their art, be it visual art on sites like deviantART, video art on YouTube, musical art on YouTube or iTunes, or physical art, such as posting and selling crafted items on Craigslist. Artists choose to post their art online so that they can gain critiques on their work, as well as just have the satisfaction of knowing others can experience and enjoy their work. With this social web generation, students spend more time using social tools like computers, video games, video cameras and cell phones.[26] These tools allow the art to be shared easily, and aid in the discussion.
Crowdsourcing has become one of the ways in which the social Web can be used for collaborative efforts, particularly in the last few years, with the dawn of the semantic web and Web 2.0. Modern web applications have the capabilities for crowdsourcing techniques, and consequently the term is now used exclusively for web-based activity. Examples include sites such as SurveyMonkey.com and SurveyU.com; for example, SurveyMonkey enables users to administer surveys to a list of contacts they manage, then collect and analyze response data using basic tools provided on the website itself and finally export these results once they are finished.[27]
Crowdsourcing is used by researchers in order to emulate a traditionalfocus group, but in a less expensive and less intimate atmosphere.
Due to the nature of the social Web, people feel more open to expressing their thoughts on the topic of discussion without feeling that they will be as heavily scrutinized by the rest of the group as they would be in a traditional setting. The Internet serves as a screen, helping to evoke the purest feedback from the participants in the group, as it removes much of a mob mentality.[28]
Facebook has also been a mode in which crowdsourcing can occur: users typically ask a question in their status message, hoping that those who see it in their news feed will answer it, or they may opt to use the poll option now available to obtain information from those within their friends network.[29]
Through the use of the social Web, many software developers opt to participate in community-based open-source software projects, as well as hacking projects for proprietary software, kernel modifications, and freeware ports of games and software. Linux iterations are perfect examples of how effective and efficient this sort of collaboration can be. Google's Android operating system is another example, as many coders work on modifying existing hardware kernels and ROMs to create customized forms of a released Android version. These collaborative efforts for Android typically take place through xda-developers and androidforums.com.
Most modern mobile applications, and indeed even browser applications, are built with software development kits released to developers. The developers create their applications and share them with users via "app markets". Users can comment on their experiences with the applications, allowing all users to view the comments of others, and thus have a greater understanding of what is to be expected from the application. Typically there is also a rating system in addition to comments.[30]
Mobile social Web applications are built using various APIs. These APIs allow for the interaction and interconnection of data on one social database, be it Facebook, Twitter, or Google Account, thus creating a literal web of data connections. These applications then add to the user experience specific to the application itself. Examples include TweetDeck and Blogger.[31]
The way in which individuals share intimate details, and perform tasks such as dating, shopping, and applying for jobs, is very different from that of previous generations. Now, one's preferences, opinions, and activities are routinely shared with a group of friends whom they might never have met were it not for the social web.[32]
Many social websites use online social interaction to create a bridge to real life interaction. Relationships are formed between individuals via the internet and then become more personal through other forms of communication.[32] An example of this type of interaction is found on eBay: with more than 94 million active users globally, eBay is the world's largest online marketplace, where anyone can buy and sell practically anything.[33] This website allows individuals to sell items and others to bid on these items. At the end of the auction, the buyer pays the seller, who then sends the purchased product to the winner of the auction. The relationship begins on the internet, but extends into real life interaction. Ways in which eBay facilitates this interaction include Skype, a leading online communications service that enables people to communicate through voice or video online for free.[34] eBay Inc. acquired Skype in 2005 and significantly expanded its customer base to more than 480 million registered users in nearly every country on earth.[35] The result of all eBay transactions is a seller providing the buyer with a product, most commonly via mail: web interaction ending in a real world exchange.
The relationship that is formed between eBay users is similar to that between users of Craigslist. Users place items that they want to sell on the website, and other users that are looking to purchase these items contact the seller. Craigslist is used to bring together individuals and organizations and connect them to the resources, tools, technology and ideas they need to effectively engage in community building and see the impact of their actions.[36] This is done via email or over the telephone. The buyer and the seller arrange a meeting at which goods are exchanged for money. Without this type of website, the buyer would not know that the product was available from the seller. This type of website allows members of a physical community to network with the other members of their community to exchange goods and services.[37]
The transition from web to real life is seen on a macro scale most recently on dating websites, which are used to search for and match other users.[38] These websites allow members with a common interest to find others who share it. Academics who have studied the industry believe that it and other forms of electronic communication such as e-mail and social networks are starting to have a significant effect on the ways in which people find love.[39] Users are able to interact with one another and find out whether they have common interests. Many sites have been developed that target many different interest groups, and relationships form and develop using the internet. If the users decide that they share a mutual bond, they are able to interact via the telephone, and eventually in person. The relationship begins on the internet, but can lead to real life dating and eventually even marriage.
|
https://en.wikipedia.org/wiki/Social_web
|
Sociomapping is a method developed for processing and visualization of relational data (e.g. social network data). It is most commonly used for mapping the social structure within small teams (10-25 people). Sociomapping uses the landscape metaphor to display complex multi-dimensional data in a 3D map, where individual objects are localized in such a way that their distance on the map corresponds to their distance in the underlying data.
Thanks to its visual coding, Sociomapping engages our evolved skills for spatial orientation and movement detection, thus making the interpretation of complex data easy and accessible for everyone.
The sociomapping method was developed in 1993–1994 by R. Bahbouh as a tool that would facilitate understanding of data about social relations and help prevent conflicts within teams of military professionals. The first major application of sociomapping took place in 1994–1995 during the HUBES experiment (Human Behavior in Extended Spaceflight) – a 135-day-long simulation of a spaceflight with three crew members organized by the European Space Agency. Sociomapping was then regularly used in other spaceflight simulations (1995–1996: EKOPSY, 1999: Mars105, 2010–2012: Mars500). Since 2005, sociomapping has been extensively used in the business environment to analyze relationships within senior management teams. In 2012, C. Höschl jr. developed Real Time Sociomapping® software that enables instant visualization of team dynamics and monitoring of teams and social groups over time.
The basic principle of Sociomapping is transforming original data concerning a set of objects in such a way that the distance of each pair of objects on the map corresponds to the distance between the two objects in the underlying data. Transformation of the data is a matter of 1) choosing some metric that could be reasonably interpreted as distance, and 2) translating the multi-dimensional distance matrix into 2D coordinate system so that the correlation between map-distances and data-distances is maximized.
The algorithm for data transformation, developed by C. Höschl jr., is a dimensionality-reduction technique (akin to PCA), and its goodness of fit can be measured by the Spearman correlation between the map distances and the data distances.
Sociomapping takes into account that relational data, particularly data on social relations, may be asymmetrical (e.g. John likes Mary more than she likes him). It preserves this information by mapping the objects so that, for each object, the closest other object on the map is the one closest to it in the underlying data according to the chosen metric, and so on for the remaining objects ordered by distance.
There are two main areas of application for Sociomapping – groups (small systems) and populations (large systems). For each area a different method of visualization and data transformation is used in order to facilitate people’s ability to understand and interpret the analyzed data.
Sociomapping for small systems produces Sociomaps of subjects. These subjects (in most cases people) are placed on the Sociomap reflecting their distance measured in various ways:
Besides the distances between the group members, a Sociomap shows an additional variable coded in the height (or color) of each subject. Typical variables used for the height are social status, performance indicators of the subjects, average communication frequency, etc.
Understanding the relative distances between the people helps to understand the structure of the group, find subgroups formed by group members, and discover the functions of the group members. Combined with the height dimension, a Sociomap enables a complex and comprehensive insight into groups and small systems. This is particularly beneficial for workplace strategists.
Sociomapping of small systems produces similar results to social network analysis, with additional visualization features.
Besides small-systems analysis based on various relational data, Sociomapping can be used to visualize the profiles of unrelated subjects. This is done by transforming the subjects' profiles, computing the distances between the profiles, and visualizing them in a Sociomap.
There is software for computing profile analysis (see the section Sociomapping software).
For large systems and populations, a different type of Sociomap is used. The data used for this type of map are rectangular matrices, where for each subject there is a preference vector over selected objects (such as political parties, brands, products, and so on).
In order to create a Sociomap, a position in the map is determined for each subject, and a small piece of mass representing this subject is placed on the map according to its vector of preferences. As a result, there are places on the Sociomap where many subjects are placed (hills) and places where there are no subjects (valleys). Hills therefore form at the places representing typical preference configurations, which allows for visual cluster analysis, or segmentation.
In this sense, large-systems Sociomapping is a data mining approach based on visual pattern recognition.
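The hill/valley idea can be sketched as follows. The preference data, the 2D projection, and the grid granularity are all illustrative stand-ins for the actual Sociomapping transformation: each subject's preference vector is mapped to a point, the points are binned into grid cells, and cells where several subjects pile up are the "hills".

```python
from collections import Counter

# Hypothetical preference vectors: each subject rates three brands on 0-10
# (made-up survey data for illustration).
subjects = {
    "s1": (9, 1, 0), "s2": (8, 1, 1), "s3": (9, 0, 1),   # brand-A fans
    "s4": (1, 9, 8), "s5": (0, 8, 9), "s6": (1, 8, 9),   # brand-B/C fans
    "s7": (5, 5, 5),                                      # undecided
}

def position(prefs):
    """Map a preference vector to 2D coordinates.

    Assumption for this sketch: use the share of preference mass on the
    first two brands as the coordinates (a stand-in for the real
    Sociomapping transformation).
    """
    total = sum(prefs) or 1
    return (prefs[0] / total, prefs[1] / total)

def cell(xy):
    # Coarse grid cell (half-unit resolution) for density counting.
    return (round(xy[0] * 2), round(xy[1] * 2))

# Drop each subject's "piece of mass" into its grid cell.
density = Counter(cell(position(p)) for p in subjects.values())

# Hills = cells where several subjects pile up (typical preference patterns).
hills = [c for c, count in density.items() if count >= 2]
```

With this data, the brand-A fans and the brand-B/C fans each form one hill, while the undecided subject sits alone in a valley cell.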
Typical uses for Large systems Sociomapping are:
Sociomapping has a broader scope of application, including the following fields:
So far, only one software tool based on Sociomapping has been released.
Team profile analyzer is a tool for psychologists, consultants, managers and HR specialists. It enables the integration of various sources of information about a team, from personality, performance or knowledge tests and biographical data. It can be used for team analysis and development: team coaching, team building, recruitment etc.
|
https://en.wikipedia.org/wiki/Sociomapping
|
Hauntology (a portmanteau of haunting and ontology, also spectral studies, spectralities, or the spectral turn) is a range of ideas referring to the return or persistence of elements from the social or cultural past, as in the manner of a ghost. The term is a neologism first introduced by French philosopher Jacques Derrida in his 1993 book Spectres of Marx. It has since been invoked in fields such as visual arts, philosophy, electronic music, anthropology, criminology,[1] politics, fiction, and literary criticism.[2]
While Christine Brooke-Rose had previously punned "dehauntological" (on "deontological") in Amalgamemnon (1984),[3][original research?] Derrida initially used "hauntology" for his idea of the atemporal nature of Marxism and its tendency to "haunt Western society from beyond the grave".[4] It describes a situation of temporal and ontological disjunction in which presence, especially socially and culturally, is replaced by a deferred non-origin.[2] The concept is derived from deconstruction, in which any attempt to locate the origin of identity or history must inevitably find itself dependent on an always-already existing set of linguistic conditions.[5] Despite being the central focus of Spectres of Marx, the word hauntology appears only three times in the book, and there is little consistency in how other writers define the term.[6]
In the 2000s, the term was applied to musicians by theorists Simon Reynolds and Mark Fisher, who were said to explore ideas related to temporal disjunction, retrofuturism, cultural memory, and the persistence of the past.
Hauntology has been used as a critical lens in various forms of media and theory, including music, aesthetics, political theory, architecture, Africanfuturism, Afrofuturism, Neo-futurism, Metamodernism, anthropology, and psychoanalysis.[2][failed verification][7][page needed] Due to the difficulty in understanding the concept, there is little consistency in how other writers define the term.[6]
Hauntings and ghost stories have existed for millennia, and reached a heyday in the West during the 19th century.[8] In cultural studies, Terry Castle (in The Apparitional Lesbian) and Anthony Vidler (in The Architectural Uncanny) predate Derrida.[9]
"Hauntology" originates from Derrida's discussion of Karl Marx in Spectres of Marx, specifically Marx's proclamation that "a spectre is haunting Europe—the spectre of communism" in The Communist Manifesto. Derrida calls on Shakespeare's Hamlet, particularly a phrase spoken by the titular character: "the time is out of joint".[5] The word functions as a deliberate near-homophone to "ontology" in Derrida's native French (cf. "hantologie" [ɑ̃tɔlɔʒi] and "ontologie" [ɔ̃tɔlɔʒi]).[10]
Derrida's prior work on deconstruction, on concepts of trace and différance in particular, serves as the foundation of his formulation of hauntology,[2] fundamentally asserting that there is no temporal point of pure origin but only an "always-already absent present".[11] Derrida sees hauntology as not only more powerful than ontology, but that "it would harbor within itself eschatology and teleology themselves".[12] His writing in Spectres is marked by a preoccupation with the "death" of communism after the 1991 fall of the Soviet Union, in particular after theorists such as Francis Fukuyama asserted that capitalism had conclusively triumphed over other political-economic systems and reached the "end of history".[5]
Despite being the central focus ofSpectres of Marx, the word hauntology appears only three times in the book.[6]Peter Buse and Andrew Scott, discussing Derrida's notion of hauntology, explain:
Ghosts arrive from the past and appear in the present. However, the ghost cannot be properly said to belong to the past .... Does then the 'historical' person who is identified with the ghost properly belong to the present? Surely not, as the idea of a return from death fractures all traditional conceptions of temporality. The temporality to which the ghost is subject is thereforeparadoxical, at once they 'return' and make their apparitional debut [...] any attempt to isolate the origin of language will find its inaugural moment already dependent upon a system of linguistic differences that have been installed prior to the 'originary' moment (11).[5]
In the 2000s, the term was taken up by critics in reference to paradoxes found in postmodernity, particularly contemporary culture's persistent recycling of retro aesthetics and incapacity to escape old social forms.[5] Writers such as Mark Fisher and Simon Reynolds used the term to describe a musical aesthetic preoccupied with this temporal disjunction and the nostalgia for "lost futures".[4] So-called "hauntological" musicians are described as exploring ideas related to temporal disjunction, retrofuturism, cultural memory, and the persistence of the past.[13][14][5]
Anthropology has seen a widespread usage of hauntology as a methodology across ethnography, archaeology, and psychological anthropology. In 2019 Ethos, the journal of the Society for Psychological Anthropology, dedicated a full issue to hauntology, titled Hauntology in Psychological Anthropology, and numerous books and journal articles have since appeared on the topic. In a book titled The Hauntology of Everyday Life, psychological anthropologist Sadeq Rahimi asserts, "the very experience of everyday life is built around a process that we can call hauntogenic, and whose major by-product is a steady stream of ghosts."[15]
Justin Armstrong, building on Derrida, proposes a "spectral ethnography" that "sees beyond the boundaries of actually spoken language and direct human contact to the interplay between space, place, objects, and temporality".[16] Jeff Ferrell and Theo Kidynis, building on Armstrong, have developed further ideas of "ghost ethnography".[17]
Anthropologists Martha and Bruce Lincoln make a distinction between primary hauntings, in which the haunted recognize the reality and autonomy of metaphysical entities in a relatively uncritical, literal manner; and secondary hauntings, which identify ghosts as "textual residues" of history, or as tropes for "collective intrapsychic states" such as trauma and grief. As a case study, they use the example of Ba Chúc's secondary haunting, in which the state-controlled museums display the skulls of the dead and memorabilia, as opposed to traditional Vietnamese burial customs. This is contrasted with the "primary haunting" of Ba Chúc, the paranormal activity said to occur at an execution site marked by a tree.[18]
Kit Bauserman notes that for literary and critical theorists, the ghost is "pure metaphor" and "a fictional vessel that co-opts their social agenda", whereas ethnographers and anthropologists "come the closest to engaging ghosts as beings".[19] Some scholars have argued that the "neat distinction quickly breaks down in ethnographic analysis" and that "it is far from clear that the presence of ghosts as metaphysical entities is primary."[20]
|
https://en.wikipedia.org/wiki/Hauntology
|
Pruning is a data compression technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that are non-critical or redundant for classifying instances. Pruning reduces the complexity of the final classifier, and hence improves predictive accuracy by reducing overfitting.
One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risksoverfittingthe training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as thehorizon effect. A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information.[1]
Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Pruning processes can be divided into two types (pre- and post-pruning).
Pre-pruning procedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g. maximum tree depth, or information gain(Attr) > minGain). Pre-pruning methods are considered more efficient because they do not induce an entire tree first; instead, trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stopping criterion.
Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy on unseen objects. It may be the case that accuracy on the training set deteriorates, but the classification accuracy of the tree increases overall.
The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up).
These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method.
These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), or Minimum Error Pruning (MEP).
In contrast to the bottom-up method, this method starts at the root of the tree. Proceeding downward through the structure, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which gives quite good results on unseen items.
One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected, the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
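A minimal sketch of reduced error pruning on a toy tree of nested dicts; the tree encoding, feature values, and validation data are made up for illustration. The split on feature 1 below hurts validation accuracy, so the bottom-up pass collapses it into a leaf.

```python
# A tiny decision tree over binary features: an internal node is a dict,
# a leaf is a class label.
tree = {"feat": 0,
        "left": {"feat": 1, "left": 0, "right": 1},  # overfit split
        "right": 1}

# Hypothetical validation set: (feature_vector, true_label) pairs.
val = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

def predict(node, x):
    while isinstance(node, dict):
        node = node["right"] if x[node["feat"]] else node["left"]
    return node

def accuracy(node, data):
    return sum(predict(node, x) == y for x, y in data) / len(data)

def majority(labels):
    return max(set(labels), key=labels.count)

def reduced_error_prune(node, data):
    """Bottom-up pass: replace a subtree with its majority-class leaf
    whenever validation accuracy does not drop."""
    if not isinstance(node, dict) or not data:
        return node
    left = [(x, y) for x, y in data if not x[node["feat"]]]
    right = [(x, y) for x, y in data if x[node["feat"]]]
    node = {"feat": node["feat"],
            "left": reduced_error_prune(node["left"], left),
            "right": reduced_error_prune(node["right"], right)}
    leaf = majority([y for _, y in data])
    return leaf if accuracy(leaf, data) >= accuracy(node, data) else node

pruned = reduced_error_prune(tree, val)
```

Here the left subtree is replaced by the leaf `0`, while the root split survives because collapsing it would cost accuracy.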
Cost complexity pruning generates a series of treesT0…Tm{\displaystyle T_{0}\dots T_{m}}whereT0{\displaystyle T_{0}}is the initial tree andTm{\displaystyle T_{m}}is the root alone. At stepi{\displaystyle i}, the tree is created by removing a subtree from treei−1{\displaystyle i-1}and replacing it with a leaf node with value chosen as in the tree building algorithm. The subtree that is removed is chosen as follows: define the error rate of a treeT{\displaystyle T}over a data setS{\displaystyle S}aserr⁡(T,S){\displaystyle \operatorname {err} (T,S)}; the subtreet{\displaystyle t}that minimizeserr⁡(prune⁡(T,t),S)−err⁡(T,S)|leaves⁡(T)|−|leaves⁡(prune⁡(T,t))|{\displaystyle {\frac {\operatorname {err} (\operatorname {prune} (T,t),S)-\operatorname {err} (T,S)}{|\operatorname {leaves} (T)|-|\operatorname {leaves} (\operatorname {prune} (T,t))|}}}is chosen for removal.
The functionprune⁡(T,t){\displaystyle \operatorname {prune} (T,t)}defines the tree obtained by pruning the subtreet{\displaystyle t}from the treeT{\displaystyle T}. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation.
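One "weakest link" step of this procedure can be sketched as follows. Each internal node is scored by g(t) = (error if collapsed to a leaf − error of its subtree) / (number of leaves − 1), and the node with the smallest g is the cheapest to collapse. The per-node error counts below are hypothetical, standing in for counts a tree learner would record.

```python
# Each node stores "err": the number of misclassified samples if that node
# were collapsed to a leaf (hypothetical counts for illustration).
tree = {"err": 10,
        "left": {"err": 4, "left": {"err": 1}, "right": {"err": 1}},
        "right": {"err": 5, "left": {"err": 2}, "right": {"err": 2}}}

def leaves_and_err(node):
    """Number of leaves and total leaf error of a subtree."""
    if "left" not in node:
        return 1, node["err"]
    nl, el = leaves_and_err(node["left"])
    nr, er = leaves_and_err(node["right"])
    return nl + nr, el + er

def weakest_link(node, path=()):
    """Internal node with the smallest
    g(t) = (err as leaf - err of subtree) / (leaves - 1)."""
    if "left" not in node:
        return None
    n, e = leaves_and_err(node)
    best = ((node["err"] - e) / (n - 1), path)
    for side in ("left", "right"):
        cand = weakest_link(node[side], path + (side,))
        if cand is not None and cand[0] < best[0]:
            best = cand
    return best

def prune_at(node, path):
    """prune(T, t): replace the subtree at `path` with a leaf."""
    if not path:
        return {"err": node["err"]}
    out = dict(node)
    out[path[0]] = prune_at(node[path[0]], path[1:])
    return out

g, path = weakest_link(tree)      # one step of the T_0 ... T_m series
smaller = prune_at(tree, path)
```

Repeating this step until only the root remains yields the full series of candidate trees from which the best one is selected.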
Pruning could be applied in a compression scheme of a learning algorithm to remove redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons.
|
https://en.wikipedia.org/wiki/Decision-tree_pruning
|
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG).
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) nodeu{\displaystyle u}is labeled by a Boolean variablexi{\displaystyle x_{i}}and has two child nodes called low child and high child. The edge from nodeu{\displaystyle u}to a low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to variablexi{\displaystyle x_{i}}. Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph:
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique up to isomorphism) for a particular function and variable order.[1] This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low (or high) child from a node, then that node's variable is assigned to 0 (respectively 1).
The left figure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each representing the functionf(x1,x2,x3){\displaystyle f(x_{1},x_{2},x_{3})}. In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to findf(0,1,1){\displaystyle f(0,1,1)}, begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to 1). This leads to the terminal 1, which is the value off(0,1,1){\displaystyle f(0,1,1)}.
The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
Another notation for writing this Boolean function isx¯1x¯2x¯3+x1x2+x2x3{\displaystyle {\overline {x}}_{1}{\overline {x}}_{2}{\overline {x}}_{3}+x_{1}x_{2}+x_{2}x_{3}}.
An ROBDD can be represented even more compactly, using complemented edges, also known as complement links.[2][3] The resulting BDD is sometimes known as a typed BDD[4] or signed BDD.
Complemented edges are formed by annotating low edges as complemented or not. If an edge is complemented, then it refers to the negation of the Boolean function that corresponds to the node that the edge points to (the Boolean function represented by the BDD with root that node). High edges are not complemented, in order to ensure that the resulting BDD representation is a canonical form. In this representation, BDDs have a single leaf node, for reasons explained below.
Two advantages of using complemented edges when representing BDDs are:
However, Knuth[5] argues otherwise:
A reference to a BDD in this representation is a (possibly complemented) "edge" that points to the root of the BDD. This is in contrast to a reference to a BDD in the representation without use of complemented edges, which is the root node of the BDD. The reason why a reference in this representation needs to be an edge is that for each Boolean function, the function and its negation are represented by an edge to the root of a BDD, and a complemented edge to the root of the same BDD. This is why negation takes constant time. It also explains why a single leaf node suffices: FALSE is represented by a complemented edge that points to the leaf node, and TRUE is represented by an ordinary edge (i.e., not complemented) that points to the leaf node.
For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed. If when we reach the leaf node we have crossed an odd number of complemented edges, then the value of the Boolean function for the given variable assignment is FALSE, otherwise (if we have crossed an even number of complemented edges), then the value of the Boolean function for the given variable assignment is TRUE.
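The parity rule described above can be sketched as follows, using XOR/XNOR as a small example of my own choosing (not the function from the figures): one decision structure serves both functions, and negation is a single bit flip on the reference edge. The tuple encoding is illustrative.

```python
# One shared leaf; an edge is (complemented?, node).  Only low edges and the
# reference edge may be complemented, which keeps the form canonical.
LEAF = ("leaf",)
TRUE_EDGE, FALSE_EDGE = (False, LEAF), (True, LEAF)

# Node for the function x2: its low edge (x2 = 0) is the complemented
# leaf edge, i.e. FALSE; its high edge is the plain leaf edge, i.e. TRUE.
x2 = ("x2", FALSE_EDGE, TRUE_EDGE)

# Node whose plain edge represents XNOR(x1, x2); the complemented
# reference edge to the very same node represents XOR(x1, x2).
xnor_node = ("x1", (True, x2), (False, x2))
xor = (True, xnor_node)    # negation costs one bit flip, not a new BDD
xnor = (False, xnor_node)

def evaluate(edge, env):
    """Value is TRUE iff the path to the leaf crosses an even number of
    complemented edges."""
    neg, node = edge
    while node is not LEAF:
        var, low, high = node
        step_neg, node = high if env[var] else low
        neg ^= step_neg
    return not neg
```

Note that `xor` and `xnor` share every node; only the complement bit on the reference edge differs, which is why negation takes constant time in this representation.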
An example diagram of a BDD in this representation is shown on the right, and represents the same Boolean expression as shown in diagrams above, i.e.,(¬x1∧¬x2∧¬x3)∨(x1∧x2)∨(x2∧x3){\displaystyle (\neg x_{1}\wedge \neg x_{2}\wedge \neg x_{3})\vee (x_{1}\wedge x_{2})\vee (x_{2}\wedge x_{3})}. Low edges are dashed, high edges solid, and complemented edges are signified by a circle at their source. The node with the @ symbol represents the reference to the BDD, i.e., the reference edge is the edge that starts from this node.
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDDs) were introduced by C. Y. Lee,[6] and further studied and made known by Sheldon B. Akers[7] and Raymond T. Boute.[8] Independently of these authors, a BDD under the name "canonical bracket form" was realized by Yu. V. Mamrukov in a CAD for analysis of speed-independent circuits.[9] The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations.[10][11] By extending the sharing to several BDDs, i.e. one sub-graph being used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined.[2] The notion of a BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs),[12] Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is decomposable negation normal form, or DNNF.
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser-known applications of BDDs, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.[13][14]
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2-to-1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
BDDs have been applied in efficient Datalog interpreters.[15]
The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functionsf(x1,…,xn){\displaystyle f(x_{1},\ldots ,x_{n})}for which, depending upon the ordering of the variables, we would end up getting a graph whose number of nodes would be linear (in n) at best and exponential at worst (e.g., a ripple-carry adder). Consider the Boolean functionf(x1,…,x2n)=x1x2+x3x4+⋯+x2n−1x2n.{\displaystyle f(x_{1},\ldots ,x_{2n})=x_{1}x_{2}+x_{3}x_{4}+\cdots +x_{2n-1}x_{2n}.}Using the variable orderingx1<x3<⋯<x2n−1<x2<x4<⋯<x2n{\displaystyle x_{1}<x_{3}<\cdots <x_{2n-1}<x_{2}<x_{4}<\cdots <x_{2n}}, the BDD needs2n+1{\displaystyle 2^{n+1}}nodes to represent the function. Using the orderingx1<x2<x3<x4<⋯<x2n−1<x2n{\displaystyle x_{1}<x_{2}<x_{3}<x_{4}<\cdots <x_{2n-1}<x_{2n}}, the BDD consists of2n+2{\displaystyle 2n+2}nodes.
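The effect of the ordering can be checked numerically with a brute-force sketch: it builds the reduced OBDD of a Boolean function by Shannon expansion over all assignments (exponential in the number of variables, so suitable only for tiny examples) and counts nodes, including the two terminals. This is an illustration, not a production BDD package; the function and ordering names below are just the n = 3 case from the text.

```python
def robdd_size(f, order):
    """Number of nodes (including the two terminals) in the reduced OBDD
    of Boolean function `f` under the variable ordering `order`.
    Built by brute-force Shannon expansion with a unique table."""
    TRUE, FALSE = 1, 0
    unique = {}                        # (var, low_id, high_id) -> node id

    def build(assign, depth):
        if depth == len(order):
            return TRUE if f(assign) else FALSE
        var = order[depth]
        low = build(dict(assign, **{var: 0}), depth + 1)
        high = build(dict(assign, **{var: 1}), depth + 1)
        if low == high:                # elimination rule: redundant test
            return low
        key = (var, low, high)         # sharing rule: merge isomorphic nodes
        if key not in unique:
            unique[key] = len(unique) + 2
        return unique[key]

    build({}, 0)
    return len(unique) + 2             # internal nodes plus the two terminals

# f(x1,...,x6) = x1 x2 + x3 x4 + x5 x6  (the n = 3 case from the text)
f = lambda a: (a['x1'] and a['x2']) or (a['x3'] and a['x4']) or (a['x5'] and a['x6'])
good = ['x1', 'x2', 'x3', 'x4', 'x5', 'x6']   # 2n + 2 = 8 nodes
bad  = ['x1', 'x3', 'x5', 'x2', 'x4', 'x6']   # 2^(n+1) = 16 nodes
```

Running `robdd_size(f, good)` and `robdd_size(f, bad)` reproduces the 8-versus-16 node counts stated above.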
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard.[16] For any constant c > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most c times larger than an optimal one.[17] However, there exist efficient heuristics to tackle the problem.[18]
There are functions for which the graph size is always exponential, independent of the variable ordering. This holds e.g. for the multiplication function.[1] In fact, the function computing the middle bit of the product of twon{\displaystyle n}-bit numbers does not have an OBDD smaller than2⌊n/2⌋/61−4{\displaystyle 2^{\lfloor n/2\rfloor }/61-4}vertices.[19] (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.[20])
Researchers have suggested refinements on the BDD data structure giving way to a number of related graphs, such as BMD (binary moment diagrams), ZDD (zero-suppressed decision diagrams), FBDD (free binary decision diagrams), FDD (functional decision diagrams), PDD (parity decision diagrams), and MTBDDs (multiple terminal BDDs).
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms:[21]: 20
However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential in the number of operations. Variable ordering needs to be considered afresh; what may be a good ordering for (some of) the set of BDDs may not be a good ordering for the result of the operation. Also, since constructing the BDD of a Boolean function solves the NP-completeBoolean satisfiability problemand the co-NP-completetautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.
Computing existential abstraction over multiple variables of reduced BDDs is NP-complete.[22]
Model counting, counting the number of satisfying assignments of a Boolean formula, can be done in polynomial time for BDDs. For general propositional formulas the problem is ♯P-complete, and the best known algorithms require exponential time in the worst case.
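A sketch of why model counting is easy on an ordered BDD: walk the diagram once, multiplying by a factor of two for every variable level it skips. The tuple-based node representation below is made up for illustration (a real package such as CUDD has its own node type); for shared DAGs the recursion should be memoized to stay linear in the BDD size.

```python
def model_count(node, n):
    """Count satisfying assignments of an OBDD over variables 0..n-1.
    A node is (var_index, low_child, high_child); terminals are True/False."""
    def count(u, i):                   # assignments of variables i..n-1
        if u is True:
            return 2 ** (n - i)        # all remaining variables are free
        if u is False:
            return 0
        var, low, high = u
        skipped = 2 ** (var - i)       # variables between i and var are unconstrained
        return skipped * (count(low, var + 1) + count(high, var + 1))
    return count(node, 0)

# x0 AND x1: exactly one satisfying assignment
conj = (0, False, (1, False, True))
# x0 OR x1: three satisfying assignments
disj = (0, (1, False, True), True)
```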
https://en.wikipedia.org/wiki/Binary_decision_diagram
Chi-square automatic interaction detection (CHAID)[1] is a decision tree technique based on adjusted significance testing (Bonferroni correction, Holm-Bonferroni testing).[2][3]
CHAID is based on a formal extension of AID (Automatic Interaction Detection)[4]and THAID (THeta Automatic Interaction Detection)[5][6]procedures of the 1960s and 1970s, which in turn were extensions of earlier research, including that performed by Belson in the UK in the 1950s.[7]
In 1975, the CHAID technique itself was developed in South Africa. It was published in 1980 by Gordon V. Kass, who had completed a PhD thesis on the topic.[2]
A history of earlier supervised tree methods can be found in Ritschard, including a detailed description of the original CHAID algorithm and the exhaustive CHAID extension by Biggs, De Ville, and Suen.[3][1]
CHAID can be used for prediction (in a similar fashion to regression analysis, this version of CHAID being originally known as XAID) as well as classification, and for detection of interaction between variables.[4][5][6]
In practice, CHAID is often used in the context of direct marketing to select groups of consumers and predict how their responses to some variables affect other variables, although other early applications were in the fields of medical and psychiatric research.
Like other decision trees, CHAID's advantage is that its output is highly visual and easy to interpret. Because it uses multiway splits by default, it needs rather large sample sizes to work effectively, since with small sample sizes the respondent groups can quickly become too small for reliable analysis.
One important advantage of CHAID over alternatives such as multiple regression is that it is non-parametric.
https://en.wikipedia.org/wiki/CHAID
Predictive analytics, or predictive AI, encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.[1]
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions.[2]
The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.
Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future.[3] Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown, whether in the past, present or future; examples include identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs.[4] The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.[1]
Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions."[5] In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero break-down, and further be integrated into prescriptive analytics for decision optimization.[6]
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models.[7]
ARIMA models are a common example of time series models. These models use autoregression, which means the model can be fitted with regression software that will use machine learning to do most of the regression analysis and smoothing. ARIMA models assume no overall trend; instead, the series varies around its average with a constant amplitude, resulting in statistically similar time patterns. Through this, variables are analyzed and data is filtered in order to better understand and predict future values.[8][9]
One example of an ARIMA-family method is the exponential smoothing model. Exponential smoothing takes into account the difference in importance between older and newer data, as the more recent data is more accurate and valuable in predicting future values. To accomplish this, exponentially decaying weights are used to give newer observations a larger weight in the calculations than older ones.[10]
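The weighting scheme just described can be sketched in a few lines; the smoothing constant `alpha` and the initialisation with the first observation are conventional choices, not mandated by the text.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    Expanding the recurrence shows each past value x_{t-k} receives weight
    alpha * (1 - alpha)**k, so newer observations count more."""
    smoothed = [series[0]]                     # initialise with the first value
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed
```

For example, `exponential_smoothing([10, 20, 30], 0.5)` yields `[10, 15.0, 22.5]`.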
Time series models are a subset of machine learning that utilize time series in order to understand and forecast data using past values. A time series is the sequence of a variable's value over equally spaced periods, such as years or quarters in business applications.[11]To accomplish this, the data must be smoothed, or the random variance of the data must be removed in order to reveal trends in the data. There are multiple ways to accomplish this.
Single moving average methods use successive, fixed-size windows of past data rather than a single average of the entire data set, which reduces the error associated with taking one overall average.[12]
Centered moving average methods take the average of a window centered on each data point. Because the central point of an even-length window is ambiguous, this method works better with odd-length windows than with even ones.[13]
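The two smoothing schemes can be sketched as follows (the window length `k` is illustrative):

```python
def single_moving_average(series, k):
    """Trailing average of the last k observations at each point."""
    return [sum(series[i - k + 1:i + 1]) / k for i in range(k - 1, len(series))]

def centered_moving_average(series, k):
    """Average of a window of odd length k centred on each point."""
    assert k % 2 == 1, "centred windows need an odd length"
    h = k // 2
    return [sum(series[i - h:i + h + 1]) / k for i in range(h, len(series) - h)]
```

For example, `centered_moving_average([1, 2, 3, 4, 5], 3)` yields `[2.0, 3.0, 4.0]`.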
Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze a relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. The models can analyze both historical and current data and generate a model in order to predict potential future outcomes.[14]
Regardless of the methodology used, in general, the process of creating predictive models involves the same steps. First, it is necessary to determine the project objectives and desired outcomes and translate these into predictive analytic objectives and tasks. Then, analyze the source data to determine the most appropriate data and model building approach (models are only as useful as the applicable data used to build them). Select and transform the data in order to create models. Create and test models in order to evaluate if they are valid and will be able to meet project goals and metrics. Apply the model's results to appropriate business processes (identifying patterns in the data doesn't necessarily mean a business will understand how to take advantage or capitalize on it). Afterward, manage and maintain models in order to standardize and improve performance (demand will increase for model management in order to meet new compliance regulations).[3]
Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions.[8]
In linear regression, a plot is constructed with the previous values of the dependent variable plotted on the Y-axis and the independent variable that is being analyzed plotted on the X-axis. A regression line is then constructed by a statistical program representing the relationship between the independent and dependent variables, which can be used to predict values of the dependent variable based only on the independent variable. With the regression line, the program also reports a slope-intercept equation for the line, which includes an error term; the higher the value of the error term, the less precise the regression model is. In order to decrease the value of the error term, other independent variables are introduced to the model, and similar analyses are performed on these independent variables.[8][15]
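A minimal sketch of the underlying least-squares fit for one independent variable, i.e. what a statistical program computes before reporting the slope-intercept equation:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b).
    The residuals y_i - (a + b*x_i) are the realised error terms."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b
```

On data lying exactly on y = 1 + 2x, such as `fit_line([1, 2, 3], [3, 5, 7])`, the fit recovers intercept 1 and slope 2 with zero error term.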
An important aspect of auditing includes analytical review. In analytical review, the reasonableness of reported account balances being investigated is determined. Auditors accomplish this process through predictive modeling to form predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods,[8]specifically through the Statistical Technique for Analytical Review (STAR) methods.[16]
The ARIMA method for analytical review uses time-series analysis on past audited balances in order to create the conditional expectations. These conditional expectations are then compared to the actual balances reported on the audited account in order to determine how close the reported balances are to the expectations. If the reported balances are close to the expectations, the accounts are not audited further. If the reported balances are very different from the expectations, there is a higher possibility of a material accounting error and a further audit is conducted.[16]
Regression analysis methods are deployed in a similar way, except the regression model used assumes the availability of only one independent variable. The materiality of the independent variable contributing to the audited account balances is determined using past account balances along with present structural data.[8] Materiality is the importance of an independent variable in its relationship to the dependent variable.[17] In this case, the dependent variable is the account balance. Through this the most important independent variable is used in order to create the conditional expectation and, similar to the ARIMA method, the conditional expectation is then compared to the account balance reported and a decision is made based on the closeness of the two balances.[8]
The STAR methods operate using regression analysis and fall into two approaches. The first is the STAR monthly balance approach, in which the conditional expectations made and regression analysis used are both tied to one month being audited. The other is the STAR annual balance approach, which happens on a larger scale by basing the conditional expectations and regression analysis on one year being audited. Besides the difference in the period being audited, both methods operate in the same way, comparing expected and reported balances to determine which accounts to investigate further.[16]
As we move into a world of technological advances where more and more data is created and stored digitally, businesses are looking for ways to take advantage of this opportunity and use this information to help generate profits. Predictive analytics can be used by and is capable of providing many benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms. In a study conducted by IDC Analyze the Future, Dan Vasset and Henry D. Morris explain how an asset management firm used predictive analytics to develop a better marketing campaign. The firm went from a mass marketing approach to a customer-centric approach: instead of sending the same offer to each customer, it would personalize each offer to the individual customer. Predictive analytics was used to predict the likelihood that a potential customer would accept a personalized offer. Due to the marketing campaign and predictive analytics, the firm's acceptance rate rose sharply, with three times as many people accepting their personalized offers.[18]
Technological advances in predictive analytics have increased its value to firms. One advancement is more powerful computers, with which predictive analytics can create forecasts on large data sets much faster. With the increased computing power also come more data and applications, meaning a wider array of inputs to use with predictive analytics. Another advance is a more user-friendly interface, lowering the barrier to entry and reducing the training required for employees to use the software and applications effectively. Due to these advancements, many more corporations are adopting predictive analytics and seeing the benefits in employee efficiency and effectiveness, as well as profits.[19]
ARIMA univariate and multivariate models can be used in forecasting a company's future cash flows, with equations and calculations based on the past values of certain factors contributing to cash flows. Using time-series analysis, the values of these factors can be analyzed and extrapolated to predict the future cash flows for a company. For the univariate models, past values of cash flows are the only factor used in the prediction. Meanwhile, the multivariate models use multiple factors related to accrual data, such as operating income before depreciation.[20]
Another model used in predicting cash-flows was developed in 1998 and is known as the Dechow, Kothari, and Watts model, or DKW (1998). DKW (1998) uses regression analysis in order to determine the relationship between multiple variables and cash flows. Through this method, the model found that cash-flow changes and accruals are negatively related, specifically through current earnings, and using this relationship predicts the cash flows for the next period. The DKW (1998) model derives this relationship through the relationships of accruals and cash flows to accounts payable and receivable, along with inventory.[21]
Some child welfare agencies have started using predictive analytics to flag high-risk cases.[22] For example, in Hillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population.[23]
The outcomes of juridical decisions can be predicted by AI programs, which can serve as assistive tools for professionals in this industry.[24][25]
Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.[26][27]
Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default. Predictive analytics can be used to mitigate moral hazard and prevent accidents from occurring.[28]
https://en.wikipedia.org/wiki/Predictive_analytics#Classification_and_regression_trees_(CART)
In decision tree learning, ID3 (Iterative Dichotomiser 3) is an algorithm invented by Ross Quinlan[1] used to generate a decision tree from a dataset. ID3 is the precursor to the C4.5 algorithm, and is typically used in the machine learning and natural language processing domains.
The ID3 algorithm begins with the original setS{\displaystyle S}as the root node. On each iteration of the algorithm, it iterates through every unused attribute of the setS{\displaystyle S}and calculates the entropyH(S){\displaystyle \mathrm {H} {(S)}}or the information gainIG(S){\displaystyle IG(S)}of that attribute. It then selects the attribute which has the smallest entropy (or largest information gain) value. The setS{\displaystyle S}is then split or partitioned by the selected attribute to produce subsets of the data. (For example, a node can be split into child nodes based upon the subsets of the population whose ages are less than 50, between 50 and 100, and greater than 100.) The algorithm continues to recurse on each subset, considering only attributes never selected before.
Recursion on a subset may stop in one of these cases:
Throughout the algorithm, the decision tree is constructed with each non-terminal node (internal node) representing the selected attribute on which the data was split, and terminal nodes (leaf nodes) representing the class label of the final subset of this branch.
ID3 does not guarantee an optimal solution. It can converge upon local optima. It uses a greedy strategy by selecting the locally best attribute to split the dataset on each iteration. The algorithm's optimality can be improved by using backtracking during the search for the optimal decision tree, at the cost of possibly taking longer.
ID3 can overfit the training data. To avoid overfitting, smaller decision trees should be preferred over larger ones. This algorithm usually produces small trees, but it does not always produce the smallest possible decision tree.
ID3 is harder to use on continuous data than on factored data (factored data has a discrete number of possible values, thus reducing the possible branch points). If the values of any given attribute are continuous, then there are many more places to split the data on this attribute, and searching for the best value to split by can be time-consuming.
The ID3 algorithm is used by training on a data setS{\displaystyle S}to produce a decision tree which is stored in memory. At runtime, this decision tree is used to classify new test cases (feature vectors) by traversing the decision tree using the features of the datum to arrive at a leaf node.
EntropyH(S){\displaystyle \mathrm {H} {(S)}}is a measure of the amount of uncertainty in the (data) setS{\displaystyle S}(i.e. entropy characterizes the (data) setS{\displaystyle S}):

H(S)=∑x∈X−p(x)log2⁡p(x){\displaystyle \mathrm {H} {(S)}=\sum _{x\in X}-p(x)\log _{2}p(x)}

Where,

X{\displaystyle X}is the set of classes inS{\displaystyle S}
p(x){\displaystyle p(x)}is the proportion of the number of elements in classx{\displaystyle x}to the number of elements in setS{\displaystyle S}
WhenH(S)=0{\displaystyle \mathrm {H} {(S)}=0}, the setS{\displaystyle S}is perfectly classified (i.e. all elements inS{\displaystyle S}are of the same class).
In ID3, entropy is calculated for each remaining attribute. The attribute with the smallest entropy is used to split the setS{\displaystyle S}on this iteration. Entropy in information theory measures how much information is expected to be gained upon measuring a random variable; as such, it can also be used to quantify the amount to which the distribution of the quantity's values is unknown. A constant quantity has zero entropy, as its distribution is perfectly known. In contrast, a uniformly distributed random variable (discretely or continuously uniform) maximizes entropy. Therefore, the greater the entropy at a node, the less information is known about the classification of data at this stage of the tree; and therefore, the greater the potential to improve the classification here.
As such, ID3 is a greedy heuristic performing a best-first search for locally optimal entropy values. Its accuracy can be improved by preprocessing the data.
Information gainIG(A){\displaystyle IG(A)}is the measure of the difference in entropy from before to after the setS{\displaystyle S}is split on an attributeA{\displaystyle A}; in other words, how much uncertainty inS{\displaystyle S}was reduced after splitting setS{\displaystyle S}on attributeA{\displaystyle A}:

IG(S,A)=H(S)−∑t∈Tp(t)H(t){\displaystyle IG(S,A)=\mathrm {H} {(S)}-\sum _{t\in T}p(t)\mathrm {H} {(t)}}

Where,

H(S){\displaystyle \mathrm {H} {(S)}}is the entropy of setS{\displaystyle S}
T{\displaystyle T}is the set of subsets created from splitting setS{\displaystyle S}by attributeA{\displaystyle A}
p(t){\displaystyle p(t)}is the proportion of the number of elements int{\displaystyle t}to the number of elements in setS{\displaystyle S}
H(t){\displaystyle \mathrm {H} {(t)}}is the entropy of subsett{\displaystyle t}
In ID3, information gain can be calculated (instead of entropy) for each remaining attribute. The attribute with the largest information gain is used to split the setS{\displaystyle S}on this iteration.
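The entropy and information-gain calculations described above can be sketched in a few lines of Python. This is a toy illustration, not Quinlan's implementation; the `wind` attribute in the usage example is hypothetical.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H(S): sum over classes of -p(x) * log2 p(x)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """IG(S, A) = H(S) - sum over subsets t of p(t) * H(t),
    where the subsets partition the rows by their value of `attribute`."""
    n = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    return entropy(labels) - sum(
        len(part) / n * entropy(part) for part in subsets.values())
```

For instance, `entropy(['yes', 'yes', 'no', 'no'])` is 1.0 bit, and splitting those four labels on a hypothetical `wind` attribute that separates them perfectly gives an information gain of 1.0.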
https://en.wikipedia.org/wiki/ID3_algorithm
C4.5 is an algorithm used to generate a decision tree, developed by Ross Quinlan.[1] C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by C4.5 can be used for classification, and for this reason, C4.5 is often referred to as a statistical classifier. In 2011, authors of the Weka machine learning software described the C4.5 algorithm as "a landmark decision tree program that is probably the machine learning workhorse most widely used in practice to date".[2]
It became quite popular after ranking #1 in the pre-eminent paper Top 10 Algorithms in Data Mining, published by Springer LNCS in 2008.[3]
C4.5 builds decision trees from a set of training data in the same way as ID3, using the concept of information entropy. The training data is a setS={s1,s2,...}{\displaystyle S=\{s_{1},s_{2},...\}}of already classified samples. Each samplesi{\displaystyle s_{i}}consists of a p-dimensional vector(x1,i,x2,i,...,xp,i){\displaystyle (x_{1,i},x_{2,i},...,x_{p,i})}, where thexj{\displaystyle x_{j}}represent attribute values or features of the sample, as well as the class in whichsi{\displaystyle s_{i}}falls.
At each node of the tree, C4.5 chooses the attribute of the data that most effectively splits its set of samples into subsets enriched in one class or the other. The splitting criterion is the normalized information gain (difference in entropy). The attribute with the highest normalized information gain is chosen to make the decision. The C4.5 algorithm then recurses on the partitioned sublists.
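The "normalized information gain" criterion (the gain ratio) can be sketched as follows. This is a toy illustration with a made-up data layout (rows as dicts), not Quinlan's C4.5 code.

```python
from collections import Counter
from math import log2

def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attribute):
    """C4.5's splitting criterion: information gain normalised by the
    entropy of the split itself ("split information"), which penalises
    attributes that fragment the data into many small subsets."""
    n = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    gain = _entropy(labels) - sum(
        len(p) / n * _entropy(p) for p in subsets.values())
    split_info = -sum(
        (len(p) / n) * log2(len(p) / n) for p in subsets.values())
    return gain / split_info if split_info > 0 else 0.0
```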
This algorithm has a few base cases.
In pseudocode, the general algorithm for building decision trees is:[4]

1. Check for the above base cases.
2. For each attribute a, find the normalized information gain ratio from splitting on a.
3. Let a_best be the attribute with the highest normalized information gain.
4. Create a decision node that splits on a_best.
5. Recurse on the sublists obtained by splitting on a_best, and add those nodes as children of the current node.
J48 is an open source Java implementation of the C4.5 algorithm in the Weka data mining tool.
C4.5 made a number of improvements to ID3. Some of these are:
Quinlan went on to create C5.0 and See5 (C5.0 for Unix/Linux, See5 for Windows) which he markets commercially. C5.0 offers a number of improvements on C4.5. Some of these are:[6][7]
Source for a single-threaded Linux version of C5.0 is available under the GNU General Public License (GPL).
https://en.wikipedia.org/wiki/C4.5_algorithm
A decision stump is a machine learning model consisting of a one-level decision tree.[1] That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature. Sometimes they are also called 1-rules.[2]
Depending on the type of the input feature, several variations are possible. For nominal features, one may build a stump which contains a leaf for each possible feature value[3][4] or a stump with two leaves, one of which corresponds to some chosen category and the other to all the other categories.[5] For binary features these two schemes are identical. A missing value may be treated as yet another category.[5]
For continuous features, usually, some threshold feature value is selected, and the stump contains two leaves: one for values below the threshold and one for values above it. However, rarely, multiple thresholds may be chosen, and the stump then contains three or more leaves.
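Fitting such a threshold stump can be sketched by exhaustive search over candidate thresholds (a minimal illustration; labels are assumed to be in {-1, +1}):

```python
def fit_stump(xs, ys):
    """Fit a one-feature threshold stump for labels in {-1, +1}:
    try every midpoint between consecutive sorted values, with both leaf
    polarities, and keep the split with the fewest misclassifications."""
    pairs = sorted(zip(xs, ys))
    best = None                                  # (errors, threshold, polarity)
    for i in range(len(pairs) - 1):
        threshold = (pairs[i][0] + pairs[i + 1][0]) / 2
        for polarity in (+1, -1):
            errors = sum(
                1 for x, y in pairs
                if polarity * (1 if x > threshold else -1) != y)
            if best is None or errors < best[0]:
                best = (errors, threshold, polarity)
    return best[1], best[2]

def stump_predict(threshold, polarity, x):
    return polarity * (1 if x > threshold else -1)
```

For example, `fit_stump([1, 2, 3, 4], [-1, -1, 1, 1])` finds the threshold 2.5 separating the two classes.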
Decision stumps are often[6] used as components (called "weak learners" or "base learners") in machine learning ensemble techniques such as bagging and boosting. For example, the Viola–Jones face detection algorithm employs AdaBoost with decision stumps as weak learners.[7]
The term "decision stump" was coined in a 1992 ICML paper by Wayne Iba and Pat Langley.[1][8]
https://en.wikipedia.org/wiki/Decision_stump
AdaBoost (short for Adaptive Boosting) is a statistical classification meta-algorithm formulated by Yoav Freund and Robert Schapire in 1995, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many types of learning algorithm to improve performance. The output of multiple weak learners is combined into a weighted sum that represents the final output of the boosted classifier. Usually, AdaBoost is presented for binary classification, although it can be generalized to multiple classes or bounded intervals of real values.[1][2]
AdaBoost is adaptive in the sense that subsequent weak learners (models) are adjusted in favor of instances misclassified by previous models. In some problems, it can be less susceptible to overfitting than other learning algorithms. The individual learners can be weak, but as long as the performance of each one is slightly better than random guessing, the final model can be proven to converge to a strong learner.
Although AdaBoost is typically used to combine weak base learners (such as decision stumps), it has been shown to also effectively combine strong base learners (such as deeper decision trees), producing an even more accurate model.[3]
Every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset. AdaBoost (with decision trees as the weak learners) is often referred to as the best out-of-the-box classifier.[4][5] When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree-growing algorithm such that later trees tend to focus on harder-to-classify examples.
AdaBoost refers to a particular method of training a boosted classifier. A boosted classifier is a classifier of the formFT(x)=∑t=1Tft(x){\displaystyle F_{T}(x)=\sum _{t=1}^{T}f_{t}(x)}where eachft{\displaystyle f_{t}}is a weak learner that takes an objectx{\displaystyle x}as input and returns a value indicating the class of the object. For example, in the two-class problem, the sign of the weak learner's output identifies the predicted object class and the absolute value gives the confidence in that classification.
Each weak learner produces an output hypothesish{\displaystyle h}which fixes a predictionh(xi){\displaystyle h(x_{i})}for each sample in the training set. At each iterationt{\displaystyle t}, a weak learner is selected and assigned a coefficientαt{\displaystyle \alpha _{t}}such that the total training errorEt{\displaystyle E_{t}}of the resultingt{\displaystyle t}-stage boosted classifier is minimized.
Et=∑iE[Ft−1(xi)+αth(xi)]{\displaystyle E_{t}=\sum _{i}E[F_{t-1}(x_{i})+\alpha _{t}h(x_{i})]}
HereFt−1(x){\displaystyle F_{t-1}(x)}is the boosted classifier that has been built up to the previous stage of training andft(x)=αth(x){\displaystyle f_{t}(x)=\alpha _{t}h(x)}is the weak learner that is being considered for addition to the final classifier.
At each iteration of the training process, a weight w_{i,t} is assigned to each sample in the training set equal to the current error E(F_{t-1}(x_i)) on that sample. These weights can be used in the training of the weak learner. For instance, decision trees can be grown which favor the splitting of sets of samples with large weights.
This derivation follows Rojas (2009):[6]
Suppose we have a data set {(x_1, y_1), …, (x_N, y_N)} where each item x_i has an associated class y_i ∈ {−1, 1}, and a set of weak classifiers {k_1, …, k_L}, each of which outputs a classification k_j(x_i) ∈ {−1, 1} for each item. After the (m−1)-th iteration our boosted classifier is a linear combination of the weak classifiers of the form

C_{m-1}(x_i) = \alpha_1 k_1(x_i) + \cdots + \alpha_{m-1} k_{m-1}(x_i),

where the class will be the sign of C_{m−1}(x_i). At the m-th iteration we want to extend this to a better boosted classifier by adding another weak classifier k_m, with another weight α_m:

C_m(x_i) = C_{m-1}(x_i) + \alpha_m k_m(x_i)
So it remains to determine which weak classifier is the best choice for k_m, and what its weight α_m should be. We define the total error E of C_m as the sum of its exponential loss on each data point, given as follows:

E = \sum_{i=1}^{N} e^{-y_i C_m(x_i)} = \sum_{i=1}^{N} e^{-y_i C_{m-1}(x_i)} e^{-y_i \alpha_m k_m(x_i)}
Letting w_i^{(1)} = 1 and w_i^{(m)} = e^{-y_i C_{m-1}(x_i)} for m > 1, we have:

E = \sum_{i=1}^{N} w_i^{(m)} e^{-y_i \alpha_m k_m(x_i)}
We can split this summation between those data points that are correctly classified by k_m (so y_i k_m(x_i) = 1) and those that are misclassified (so y_i k_m(x_i) = −1):

E = \sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}
  = \sum_{i=1}^{N} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \left( e^{\alpha_m} - e^{-\alpha_m} \right)
Since the only part of the right-hand side of this equation that depends on k_m is \sum_{y_i \neq k_m(x_i)} w_i^{(m)}, we see that the k_m that minimizes E is the one in the set {k_1, …, k_L} that minimizes \sum_{y_i \neq k_m(x_i)} w_i^{(m)} (assuming that α_m > 0), i.e. the weak classifier with the lowest weighted error, with weights w_i^{(m)} = e^{-y_i C_{m-1}(x_i)}.
To determine the desired weight α_m that minimizes E with the k_m that we just determined, we differentiate:

\frac{dE}{d\alpha_m} = -\sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}

Since E is convex in α_m, the minimum occurs where this derivative is zero. Setting it to zero gives

e^{-\alpha_m} \sum_{y_i = k_m(x_i)} w_i^{(m)} = e^{\alpha_m} \sum_{y_i \neq k_m(x_i)} w_i^{(m)}

Taking the logarithm of both sides,

-\alpha_m + \ln\left( \sum_{y_i = k_m(x_i)} w_i^{(m)} \right) = \alpha_m + \ln\left( \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \right)

and solving for α_m yields:

\alpha_m = \frac{1}{2} \ln\left( \frac{\sum_{y_i = k_m(x_i)} w_i^{(m)}}{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}} \right)
We calculate the weighted error rate of the weak classifier to be \epsilon_m = \frac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}{\sum_{i=1}^{N} w_i^{(m)}}, so it follows that:

\alpha_m = \frac{1}{2} \ln\left( \frac{1 - \epsilon_m}{\epsilon_m} \right)

which is the negative logit function of the error rate multiplied by 0.5. Due to the convexity of E as a function of α_m, this new expression for α_m gives the global minimum of the loss function.
Note: this derivation only applies when k_m(x_i) ∈ {−1, 1}, though the result can be a good starting guess in other cases, such as when the weak learner is biased (k_m(x) ∈ {a, b}, a ≠ −b), has multiple leaves (k_m(x) ∈ {a, b, …, n}) or is some other real-valued function k_m(x) ∈ ℝ.
Thus we have derived the AdaBoost algorithm: at each iteration, choose the classifier k_m that minimizes the total weighted error \sum_{y_i \neq k_m(x_i)} w_i^{(m)}, use this to calculate the error rate \epsilon_m = \frac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}{\sum_{i=1}^{N} w_i^{(m)}}, use this to calculate the weight \alpha_m = \frac{1}{2} \ln\left( \frac{1 - \epsilon_m}{\epsilon_m} \right), and finally use this to improve the boosted classifier C_{m−1} to C_m = C_{m-1} + \alpha_m k_m.
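The loop just derived can be written out directly. The following is a minimal sketch in plain Python; the decision-stump pool and the tiny toy data set (label +1 only when both features exceed 0.5) are invented for illustration, not part of the original derivation.

```python
import math

def stump(j, thr, sign):
    # weak classifier k(x): 'sign' if feature j exceeds thr, else -'sign'
    return lambda x: sign if x[j] > thr else -sign

def adaboost(X, y, weak_pool, rounds):
    n = len(X)
    w = [1.0] * n                       # w_i^(1) = 1
    F = []                              # chosen (alpha_m, k_m) pairs
    for _ in range(rounds):
        # pick the k_m with the lowest weighted error sum_{y_i != k(x_i)} w_i
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if k(xi) != yi)
                for k in weak_pool]
        m = min(range(len(weak_pool)), key=errs.__getitem__)
        eps = errs[m] / sum(w)          # weighted error rate epsilon_m
        if eps == 0:                    # perfect weak learner: let it decide alone
            F.append((1e3, weak_pool[m]))
            break
        if eps >= 0.5:                  # no usable edge left
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        F.append((alpha, weak_pool[m]))
        # reweight: w_i <- w_i * exp(-y_i * alpha * k_m(x_i))
        w = [wi * math.exp(-yi * alpha * weak_pool[m](xi))
             for wi, xi, yi in zip(w, X, y)]
    return lambda x: 1 if sum(a * k(x) for a, k in F) > 0 else -1

# toy data: label +1 iff both features exceed 0.5
X = [(0.1, 0.2), (0.9, 0.8), (0.8, 0.2), (0.2, 0.9), (0.7, 0.9), (0.3, 0.3)]
y = [-1, 1, -1, -1, 1, -1]
pool = [stump(j, t, s) for j in (0, 1) for t in (0.25, 0.5, 0.75) for s in (1, -1)]
C = adaboost(X, y, pool, rounds=10)
train_acc = sum(C(xi) == yi for xi, yi in zip(X, y)) / len(y)
```

Note that with such a restricted stump pool the combined classifier need not reach zero training error; the loop simply keeps reducing the weighted error as long as some stump retains an edge (ε_m < 0.5).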
Boosting is a form of linear regression in which the features of each sample x_i are the outputs of some weak learner h applied to x_i.
Regression tries to fit F(x) to y(x) as precisely as possible without loss of generalization, typically using least squares error E(f) = (y(x) − f(x))^2. The AdaBoost error function E(f) = e^{−y(x) f(x)}, by contrast, takes into account the fact that only the sign of the final result is used, so |F(x)| can be far larger than 1 without increasing the error. However, the error for sample x_i increases exponentially as −y(x_i) f(x_i) increases, resulting in excessive weight being assigned to outliers.
One feature of the choice of exponential error function is that the error of the final additive model is the product of the errors of each stage, that is, e^{\sum_i -y_i f(x_i)} = \prod_i e^{-y_i f(x_i)}. Thus it can be seen that the weight update in the AdaBoost algorithm is equivalent to recalculating the error on F_t(x) after each stage.
There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is monotonic and continuously differentiable, the classifier is always driven toward purer solutions.[7] Zhang (2004) provides a loss function based on least squares, a modified Huber loss function:

\phi(y, f(x)) = \begin{cases} -4 y f(x) & \text{if } y f(x) < -1, \\ (y f(x) - 1)^2 & \text{if } -1 \le y f(x) \le 1, \\ 0 & \text{if } y f(x) > 1. \end{cases}
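Written as a function of the margin m = y·f(x), the piecewise definition above is continuous at both break points (it equals 4 at m = −1 and 0 at m = 1). A direct transcription:

```python
def modified_huber(y, fx):
    # Zhang's modified Huber loss, as a function of the margin y*f(x)
    m = y * fx
    if m < -1:
        return -4.0 * m        # linear penalty for confident mistakes
    if m <= 1:
        return (m - 1) ** 2    # quadratic near the decision boundary
    return 0.0                 # no penalty for confident correct predictions
```
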
This function is better behaved than that of LogitBoost for f(x) close to 1 or −1, does not penalise 'overconfident' predictions (y f(x) > 1), unlike unmodified least squares, and penalises samples misclassified with confidence greater than 1 only linearly, as opposed to quadratically or exponentially, and is thus less susceptible to the effects of outliers.
Boosting can be seen as minimization of a convex loss function over a convex set of functions.[8] Specifically, the loss being minimized by AdaBoost is the exponential loss \sum_i \phi(i, y, f) = \sum_i e^{-y_i f(x_i)}, whereas LogitBoost performs logistic regression, minimizing \sum_i \phi(i, y, f) = \sum_i \ln\left( 1 + e^{-y_i f(x_i)} \right).
In the gradient descent analogy, the output of the classifier for each training point is considered a point (F_t(x_1), …, F_t(x_n)) in n-dimensional space, where each axis corresponds to a training sample, each weak learner h(x) corresponds to a vector of fixed orientation and length, and the goal is to reach the target point (y_1, …, y_n) (or any region where the value of the loss function E_T(x_1, …, x_n) is less than the value at that point), in the fewest steps. Thus AdaBoost algorithms perform either Cauchy (find h(x) with the steepest gradient, choose α to minimize test error) or Newton (choose some target point, find α h(x) that brings F_t closest to that point) optimization of training error.
With:
Fort{\displaystyle t}in1…T{\displaystyle 1\dots T}:
The output of decision trees is a class probability estimate p(x) = P(y = 1 | x), the probability that x is in the positive class.[7] Friedman, Hastie and Tibshirani derive an analytical minimizer for e^{-y(F_{t-1}(x) + f_t(p(x)))} for some fixed p(x) (typically chosen using weighted least squares error):

f_t(x) = \frac{1}{2} \ln\left( \frac{p(x)}{1 - p(x)} \right)
Thus, rather than multiplying the output of the entire tree by some fixed value, each leaf node is changed to output half the logit transform of its previous value.
LogitBoost represents an application of established logistic regression techniques to the AdaBoost method. Rather than minimizing error with respect to y, weak learners are chosen to minimize the (weighted least-squares) error of f_t(x) with respect to

z_t = \frac{y^* - p_t(x)}{2 p_t(x)(1 - p_t(x))},

where

p_t(x) = \frac{e^{F_{t-1}(x)}}{e^{F_{t-1}(x)} + e^{-F_{t-1}(x)}}, \quad w_t = p_t(x)(1 - p_t(x)), \quad y^* = \frac{y + 1}{2}.
That is, z_t is the Newton–Raphson approximation of the minimizer of the log-likelihood error at stage t, and the weak learner f_t is chosen as the learner that best approximates z_t by weighted least squares.
As p approaches either 1 or 0, the value of p_t(x_i)(1 − p_t(x_i)) becomes very small and the z term, which is large for misclassified samples, can become numerically unstable due to machine precision rounding errors. This can be overcome by enforcing some limit on the absolute value of z and the minimum value of w.
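The working response z_t, weights w_t, and the stabilizing clamps just described can be sketched as follows; the particular clip limits (|z| ≤ 4, w ≥ 10⁻⁵) are illustrative choices, not values prescribed by the algorithm.

```python
import math

Z_MAX = 4.0    # cap on |z| (illustrative)
W_MIN = 1e-5   # floor on the weights (illustrative)

def logitboost_targets(F_prev, y):
    # F_prev: current boosted scores F_{t-1}(x_i); y: labels in {-1, +1}
    zs, ws = [], []
    for f, yi in zip(F_prev, y):
        p = math.exp(f) / (math.exp(f) + math.exp(-f))   # p_t(x)
        y_star = (yi + 1) / 2                            # map {-1,1} -> {0,1}
        w = max(p * (1 - p), W_MIN)                      # floored weight
        z = (y_star - p) / (2 * w)                       # working response
        z = max(-Z_MAX, min(Z_MAX, z))                   # clamp |z|
        zs.append(z)
        ws.append(w)
    return zs, ws
```

For a sample with F_{t−1}(x) = 0 the probability is exactly 0.5, giving w = 0.25 and z = ±1; for very large |F_{t−1}(x)| the raw z would explode, and the clamps take over.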
While previous boosting algorithms choose f_t greedily, minimizing the overall test error as much as possible at each step, GentleBoost features a bounded step size. f_t is chosen to minimize \sum_i w_{t,i} (y_i - f_t(x_i))^2, and no further coefficient is applied. Thus, in the case where a weak learner exhibits perfect classification performance, GentleBoost chooses f_t(x) = α_t h_t(x) exactly equal to y, while steepest descent algorithms try to set α_t = ∞. Empirical observations about the good performance of GentleBoost appear to back up Schapire and Singer's remark that allowing excessively large values of α can lead to poor generalization performance.[9][10]
A technique for speeding up processing of boosted classifiers, early termination refers to only testing each potential object with as many layers of the final classifier as are necessary to meet some confidence threshold, speeding up computation for cases where the class of the object can easily be determined. One such scheme is the object detection framework introduced by Viola and Jones:[11] in an application with significantly more negative samples than positive, a cascade of separate boost classifiers is trained, the output of each stage biased such that some acceptably small fraction of positive samples is mislabeled as negative, and all samples marked as negative after each stage are discarded. If 50% of negative samples are filtered out by each stage, only a very small number of objects pass through the entire classifier, reducing computation effort. This method has since been generalized, with a formula provided for choosing optimal thresholds at each stage to achieve some desired false positive and false negative rate.[12]
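The cascade's control flow is simple: each stage is a scorer with its own (biased) acceptance threshold, and a candidate is rejected the moment any stage scores it below threshold. The stage functions and thresholds below are illustrative stand-ins, not the Viola–Jones Haar-feature classifiers.

```python
def cascade_classify(x, stages):
    # stages: list of (score_function, threshold) pairs, cheapest first
    for score, threshold in stages:
        if score(x) < threshold:
            return -1          # rejected early; later stages never run
    return 1                   # survived every stage: accepted

stages = [
    (lambda x: x[0], 0.2),            # cheap first stage filters easy negatives
    (lambda x: x[0] + x[1], 0.8),     # more expensive stage, only for survivors
]
```

Because most windows in a detection task are easy negatives, most calls return after the first, cheapest stage, which is exactly where the speed-up comes from.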
In the field of statistics, where AdaBoost is more commonly applied to problems of moderate dimensionality, early stopping is used as a strategy to reduce overfitting.[13] A validation set of samples is separated from the training set, performance of the classifier on the samples used for training is compared to performance on the validation samples, and training is terminated if performance on the validation sample is seen to decrease even as performance on the training set continues to improve.
For steepest descent versions of AdaBoost, where α_t is chosen at each layer t to minimize test error, the next layer added is said to be maximally independent of layer t:[14] it is unlikely to choose a weak learner t+1 that is similar to learner t. However, there remains the possibility that t+1 produces similar information to some other earlier layer. Totally corrective algorithms, such as LPBoost, optimize the value of every coefficient after each step, such that new layers added are always maximally independent of every previous layer. This can be accomplished by backfitting, linear programming or some other method.
Pruning is the process of removing poorly performing weak classifiers to reduce the memory and execution-time cost of the boosted classifier. The simplest methods, which can be particularly effective in conjunction with totally corrective training, are weight- or margin-trimming: when the coefficient, or the contribution to the total test error, of some weak classifier falls below a certain threshold, that classifier is dropped. Margineantu & Dietterich[15] suggested an alternative criterion for trimming: weak classifiers should be selected such that the diversity of the ensemble is maximized. If two weak learners produce very similar outputs, efficiency can be improved by removing one of them and increasing the coefficient of the remaining weak learner.[16]
|
https://en.wikipedia.org/wiki/AdaBoost
|
An incremental decision tree algorithm is an online machine learning algorithm that outputs a decision tree. Many decision tree methods, such as C4.5, construct a tree using a complete dataset. Incremental decision tree methods allow an existing tree to be updated using only new individual data instances, without having to re-process past instances. This may be useful in situations where the entire dataset is not available when the tree is updated (i.e. the data was not stored), the original data set is too large to process, or the characteristics of the data change over time.
Here is a short list of incremental decision tree methods, organized by their (usually non-incremental) parent algorithms.
CART[1] (1984) is a nonincremental decision tree inducer for both classification and regression problems, developed in the mathematics and statistics communities. CART traces its roots to AID (1963).[2]
ID3 (1986)[4] and C4.5 (1993)[5] were developed by Quinlan and have roots in Hunt's Concept Learning System (CLS, 1966).[6] The ID3 family of tree inducers was developed in the engineering and computer science communities.
Note: ID6NB (2009)[12] is not incremental.
There were several incremental concept learning systems that did not build decision trees, but which predated and influenced the development of the earliest incremental decision tree learners, notably ID4.[7] Notable among these was Schlimmer and Granger's STAGGER (1986),[13] which learned disjunctive concepts incrementally. STAGGER was developed to examine concepts that changed over time (concept drift). Prior to STAGGER, Michalski and Larson (1978)[14] investigated an incremental variant of AQ (Michalski, 1973),[15] a supervised system for learning concepts in disjunctive normal form (DNF). Experience with these earlier systems and others, including incremental tree-structured unsupervised learning, contributed to a conceptual framework for evaluating incremental decision tree learners specifically, and incremental concept learning generally, along four dimensions that reflect the inherent tradeoffs between learning cost and quality:[7] (1) cost of knowledge base update, (2) the number of observations that are required to converge on a knowledge base with given characteristics, (3) the total effort (as a function of the first two dimensions) that a system exerts, and (4) the quality (often consistency) of the final knowledge base. Some of the historical context in which incremental decision tree learners emerged is given in Fisher and Schlimmer (1988),[16] which also expands on the four-factor framework that was used to evaluate and design incremental learning systems.
The Very Fast Decision Trees learner reduces training time for large incremental data sets by subsampling the incoming data stream.
|
https://en.wikipedia.org/wiki/Incremental_decision_tree
|
An alternating decision tree (ADTree) is a machine learning method for classification. It generalizes decision trees and has connections to boosting.
An ADTree consists of an alternation of decision nodes, which specify a predicate condition, and prediction nodes, which contain a single number. An instance is classified by an ADTree by following all paths for which all decision nodes are true, and summing any prediction nodes that are traversed.
ADTrees were introduced by Yoav Freund and Llew Mason.[1] However, the algorithm as presented had several typographical errors. Clarifications and optimizations were later presented by Bernhard Pfahringer, Geoffrey Holmes and Richard Kirkby.[2] Implementations are available in Weka and JBoost.
Original boosting algorithms typically used either decision stumps or decision trees as weak hypotheses. As an example, boosting decision stumps creates a set of T weighted decision stumps (where T is the number of boosting iterations), which then vote on the final classification according to their weights. Individual decision stumps are weighted according to their ability to classify the data.
Boosting a simple learner results in an unstructured set of T hypotheses, making it difficult to infer correlations between attributes. Alternating decision trees introduce structure to the set of hypotheses by requiring that they build off a hypothesis that was produced in an earlier iteration. The resulting set of hypotheses can be visualized in a tree based on the relationship between a hypothesis and its "parent."
Another important feature of boosted algorithms is that the data is given a different distribution at each iteration. Instances that are misclassified are given a larger weight while accurately classified instances are given reduced weight.
An alternating decision tree consists of decision nodes and prediction nodes. Decision nodes specify a predicate condition. Prediction nodes contain a single number. ADTrees always have prediction nodes as both root and leaves. An instance is classified by an ADTree by following all paths for which all decision nodes are true and summing any prediction nodes that are traversed. This is different from binary classification trees such as CART (classification and regression tree) or C4.5, in which an instance follows only one path through the tree.
The following tree was constructed using JBoost on the spambase dataset[3] (available from the UCI Machine Learning Repository).[4] In this example, spam is coded as 1 and regular email is coded as −1.
The following table contains part of the information for a single instance.
The instance is scored by summing all of the prediction nodes through which it passes. In the case of the instance above, the score is calculated as
The final score of 0.657 is positive, so the instance is classified as spam. The magnitude of the value is a measure of confidence in the prediction. The original authors list three potential levels of interpretation for the set of attributes identified by an ADTree:
Care must be taken when interpreting individual nodes, as the scores reflect a re-weighting of the data in each iteration.
The inputs to the alternating decision tree algorithm are:
The fundamental element of the ADTree algorithm is the rule. A single rule consists of a precondition, a condition, and two scores. A condition is a predicate of the form "attribute <comparison> value." A precondition is simply a logical conjunction of conditions. Evaluation of a rule involves a pair of nested if statements:
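The pair of nested if statements can be sketched as follows: a rule contributes one of its two scores when both its precondition and condition are resolved, and zero when the precondition fails. The field names and the example predicates are illustrative.

```python
def evaluate_rule(precondition, condition, score_true, score_false, x):
    # precondition/condition: predicates on the instance x
    if precondition(x):
        if condition(x):
            return score_true
        return score_false
    return 0.0   # rule does not apply to this instance

# e.g. a rule that only fires for instances already satisfying x[0] > 1
rule_score = evaluate_rule(lambda x: x[0] > 1,     # precondition
                           lambda x: x[1] < 0.5,   # condition
                           0.4, -0.2,              # the two scores
                           (2.0, 0.3))             # instance
```

An instance's overall ADTree score is then the sum of `evaluate_rule(...)` over all rules, matching the path-summing classification described above.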
Several auxiliary functions are also required by the algorithm:
The algorithm is as follows:
The set P grows by two preconditions in each iteration, and it is possible to derive the tree structure of a set of rules by making note of the precondition that is used in each successive rule.
Figure 6 in the original paper[1] demonstrates that ADTrees are typically as robust as boosted decision trees and boosted decision stumps. Typically, equivalent accuracy can be achieved with a much simpler tree structure than recursive partitioning algorithms.
|
https://en.wikipedia.org/wiki/Alternating_decision_tree
|
Structured data analysis is the statistical data analysis of structured data. This can arise either in the form of an a priori structure such as multiple-choice questionnaires or in situations with the need to search for structure that fits the given data, either exactly or approximately. This structure can then be used for making comparisons, predictions, manipulations, etc.[1][2]
|
https://en.wikipedia.org/wiki/Structured_data_analysis_(statistics)
|
In computer science, a logistic model tree (LMT) is a classification model with an associated supervised training algorithm that combines logistic regression (LR) and decision tree learning.[1][2]
Logistic model trees are based on the earlier idea of a model tree: a decision tree that has linear regression models at its leaves to provide a piecewise linear regression model (where ordinary decision trees with constants at their leaves would produce a piecewise constant model).[1] In the logistic variant, the LogitBoost algorithm is used to produce an LR model at every node in the tree; the node is then split using the C4.5 criterion. Each LogitBoost invocation is warm-started from its results in the parent node. Finally, the tree is pruned.[3]
The basic LMT induction algorithm uses cross-validation to find a number of LogitBoost iterations that does not overfit the training data. A faster version has been proposed that uses the Akaike information criterion to control LogitBoost stopping.[3]
|
https://en.wikipedia.org/wiki/Logistic_model_tree
|
In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis that seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two categories:
In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering[3] are usually presented in a dendrogram.
Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances. On the other hand, except for the special case of single-linkage distance, none of the algorithms (except exhaustive search in O(2^n)) can be guaranteed to find the optimum solution.[citation needed]
The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of O(n^3) and requires Ω(n^2) memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity O(n^2)) are known: SLINK[4] for single-linkage and CLINK[5] for complete-linkage clustering. With a heap, the runtime of the general case can be reduced to O(n^2 log n), an improvement on the aforementioned bound of O(n^3), at the cost of further increasing the memory requirements. In many cases, the memory overheads of this approach are too large to make it practically usable. Methods exist which use quadtrees that demonstrate O(n^2) total running time with O(n) space.[6]
Divisive clustering with an exhaustive search is O(2^n), but it is common to use faster heuristics to choose splits, such as k-means.
In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate distance d, such as the Euclidean distance, between single observations of the data set, and a linkage criterion, which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets. The choice of metric as well as linkage can have a major impact on the result of the clustering: the lower-level metric determines which objects are most similar, whereas the linkage criterion influences the shape of the clusters.[7] For example, complete-linkage tends to produce more spherical clusters than single-linkage.
The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations.
Some commonly used linkage criteria between two sets of observations A and B and a distance d are:[8][9]
Some of these can only be recomputed recursively (WPGMA, WPGMC); for many, a recursive computation with the Lance–Williams equations is more efficient, while for others (Hausdorff, medoid) the distances have to be computed with the slower full formula. Other linkage criteria include:
For example, suppose this data is to be clustered, and the Euclidean distance is the distance metric.
The hierarchical clustering dendrogram would be:
Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row (from the top) of the dendrogram will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with fewer but larger clusters.
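This build-then-cut workflow can be sketched with SciPy's hierarchical clustering routines (assuming NumPy and SciPy are available); the six 2-D points standing in for a..f are invented for illustration, arranged so that b,c and d,e form close pairs and f sits near d,e.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0],              # a
                   [5.0, 5.0], [5.2, 5.1],  # b, c  (close pair)
                   [9.0, 0.0], [9.1, 0.2],  # d, e  (close pair)
                   [12.0, 0.0]])            # f

Z = linkage(points, method='single')        # single-linkage merge tree

# cutting low yields a fine partition; cutting higher yields a coarser one
fine = fcluster(Z, t=1.0, criterion='distance')    # {a} {b c} {d e} {f}
coarse = fcluster(Z, t=4.0, criterion='distance')  # {a} {b c} {d e f}
```

Raising the cut height `t` merges {f} into {d e}, mirroring the move from the second-row cut to the third-row cut in the example above.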
This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance.
Optionally, one can also construct adistance matrixat this stage, where the number in thei-th rowj-th column is the distance between thei-th andj-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in thesingle-linkage clusteringpage; it can easily be adapted to different types of linkage (see below).
Suppose we have merged the two closest elements b and c. We now have the following clusters {a}, {b, c}, {d}, {e} and {f}, and want to merge them further. To do that, we need to take the distance between {a} and {b c}, and therefore define the distance between two clusters.
Usually the distance between two clusters A and B is one of the following:
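The three most common choices (single, complete, and average linkage) can be written out directly for two clusters given as lists of points and a point-distance function d:

```python
def single_linkage(A, B, d):
    # distance of the nearest pair across the two clusters
    return min(d(a, b) for a in A for b in B)

def complete_linkage(A, B, d):
    # distance of the farthest pair across the two clusters
    return max(d(a, b) for a in A for b in B)

def average_linkage(A, B, d):
    # mean of all pairwise distances (UPGMA)
    return sum(d(a, b) for a in A for b in B) / (len(A) * len(B))

euclid = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

A = [(0, 0)]
B = [(3, 4), (6, 8)]   # distances 5 and 10 from the point in A
```

Here single linkage gives 5, complete linkage 10, and average linkage 7.5, illustrating how the criterion alone changes the cluster distance for the same point distances.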
In case of tied minimum distances, a pair is randomly chosen, thus being able to generate several structurally different dendrograms. Alternatively, all tied pairs may be joined at the same time, generating a unique dendrogram.[21]
One can always decide to stop clustering when there is a sufficiently small number of clusters (number criterion). Some linkages may also guarantee that agglomeration occurs at a greater distance between clusters than the previous agglomeration, and then one can stop clustering when the clusters are too far apart to be merged (distance criterion). However, this is not the case for, e.g., the centroid linkage, where so-called reversals[22] (inversions, departures from ultrametricity) may occur.
The basic principle of divisive clustering was published as the DIANA (DIvisive ANAlysis clustering) algorithm.[23]Initially, all data is in the same cluster, and the largest cluster is split until every object is separate.
Because there exist O(2^n) ways of splitting each cluster, heuristics are needed. DIANA chooses the object with the maximum average dissimilarity and then moves to this new cluster all objects that are more similar to it than to the remainder.
Informally, DIANA is not so much a process of "dividing" as it is of "hollowing out": each iteration, an existing cluster (e.g. the initial cluster of the entire dataset) is chosen to form a new cluster inside of it. Objects progressively move to this nested cluster, and hollow out the existing cluster. Eventually, all that's left inside a cluster is nested clusters that grew there, without it owning any loose objects by itself.
Formally, DIANA operates in the following steps:
Intuitively, D(i) above measures how strongly an object wants to leave its current cluster, but it is attenuated when the object wouldn't fit in the splinter group either. Such objects will likely start their own splinter group eventually.
The dendrogram of DIANA can be constructed by letting the splinter group C_new be a child of the hollowed-out cluster C_* each time. This constructs a tree with C_0 as its root and n unique single-object clusters as its leaves.
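A single DIANA split, as described above, can be sketched as follows: the most dissimilar object seeds the splinter group, then objects migrate over while they are closer on average to the splinter group than to what remains. This is a simplified reconstruction, and the symmetric dissimilarity matrix is invented for illustration.

```python
def avg_to(i, group, d):
    # average dissimilarity from object i to a non-empty group not containing i
    return sum(d[i][j] for j in group) / len(group)

def diana_split(cluster, d):
    # seed the splinter group with the object most dissimilar to the rest
    seed = max(cluster,
               key=lambda i: avg_to(i, [j for j in cluster if j != i], d))
    splinter = [seed]
    rest = [i for i in cluster if i != seed]
    while len(rest) > 1:
        # D(i) > 0: object i is closer, on average, to the splinter group
        def D(i):
            return (avg_to(i, [j for j in rest if j != i], d)
                    - avg_to(i, splinter, d))
        best = max(rest, key=D)
        if D(best) <= 0:       # nobody wants to move any more: split is done
            break
        splinter.append(best)
        rest.remove(best)
    return splinter, rest

# illustrative dissimilarities: objects 0-1 close, 2-3 close, pairs far apart
d = [[0, 1, 9, 8],
     [1, 0, 8, 9],
     [9, 8, 0, 1],
     [8, 9, 1, 0]]
sp, re = diana_split([0, 1, 2, 3], d)
```

On this matrix the split separates {0, 1} from {2, 3}; repeating the procedure on the largest remaining cluster would produce the full DIANA hierarchy.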
Hierarchical clustering is often described as a greedy algorithm because it makes a series of locally optimal choices without reconsidering previous steps. At each iteration, it merges the two clusters that are closest together based on a selected distance metric, always choosing the best immediate option available. This approach is "greedy" because it seeks to optimize the current decision rather than planning for the best possible overall clustering[1]. Once two clusters are merged, the decision is final and irreversible, without the possibility of backtracking, which can lead to suboptimal results if earlier choices were not ideal. Despite this, the greedy nature of hierarchical clustering makes it computationally efficient and simple to implement, though it may not always capture the true underlying structure of complex datasets[7].
Hierarchical clustering, particularly in its standard agglomerative form, presents several notable limitations:
(a) Time complexity: Hierarchical clustering, especially in its basic agglomerative form, has a high time complexity of O(n³). This becomes a significant bottleneck for large datasets, limiting its scalability.[24]
(b) Scalability: Due to the time and space complexity, hierarchical clustering algorithms struggle to handle very large datasets efficiently.[25]
(c) Sensitivity to noise and outliers: Hierarchical clustering methods can be sensitive to noise and outliers in the data, which can lead to the formation of inaccurate or misleading cluster hierarchies.[26]
(d) Difficulty with high-dimensional data: In high-dimensional spaces, hierarchical clustering can face challenges due to the curse of dimensionality, where data points become sparse and distance measures become less meaningful. This can result in poorly defined clusters.[10][27]
(e) Inability to handle non-convex shapes and varying densities: Traditional hierarchical clustering methods, like many other clustering algorithms, often assume that clusters are convex and have similar densities. They may struggle to accurately identify clusters with non-convex shapes or varying densities.[7]
|
https://en.wikipedia.org/wiki/Hierarchical_clustering
|
Automatic clustering algorithms are algorithms that can perform clustering without prior knowledge of data sets. In contrast with other cluster analysis techniques, automatic clustering algorithms can determine the optimal number of clusters even in the presence of noise and outlier points.[1]
Given a set ofnobjects, centroid-based algorithms createkpartitions based on a dissimilarity function, such thatk≤n. A major problem in applying this type of algorithm is determining the appropriate number of clusters for unlabeled data. Therefore, most research in clustering analysis has been focused on the automation of the process.
Automated selection of k in a K-means clustering algorithm, one of the most used centroid-based clustering algorithms, is still a major problem in machine learning. The most accepted solution to this problem is the elbow method. It consists of running k-means clustering on the data set for a range of values of k, calculating the sum of squared errors for each, and plotting them in a line chart. If the chart looks like an arm, the best value of k will be at the "elbow".[2]
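The elbow computation can be sketched end to end with a plain Lloyd's k-means (a self-contained illustration, not any library's implementation; `elbow_curve` and its restart strategy are this sketch's own choices):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns (centers, sum of squared errors)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers (keep the old center if a cluster goes empty).
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    sse = float(((X - centers[labels]) ** 2).sum())
    return centers, sse

def elbow_curve(X, k_values, restarts=20):
    """SSE for each candidate k (best of several restarts).  Plotting these
    values and looking for the bend is the elbow method."""
    return [min(kmeans(X, k, seed=s)[1] for s in range(restarts))
            for k in k_values]
```

On data with three well-separated groups, the SSE drops sharply up to k = 3 and only marginally afterwards, which is the "elbow" the method looks for.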
Another method that modifies the k-means algorithm for automatically choosing the optimal number of clusters is the G-means algorithm. It was developed from the hypothesis that a subset of the data follows a Gaussian distribution. Thus, k is increased until each k-means center's data is Gaussian. This algorithm only requires the standard statistical significance level as a parameter and does not set limits for the covariance of the data.[3]
Connectivity-based clustering or hierarchical clustering is based on the idea that objects have more similarities to other nearby objects than to those further away. Therefore, the generated clusters from this type of algorithm will be the result of the distance between the analyzed objects.
Hierarchical models can either be divisive, where partitions are built from the entire data set available, or agglomerative, where each partition begins with a single object and additional objects are added to the set.[4] Although hierarchical clustering has the advantage of allowing any valid metric to be used as the defined distance, it is sensitive to noise and fluctuations in the data set and is more difficult to automate.
Methods have been developed to improve and automate existing hierarchical clustering algorithms,[5] such as an automated version of single-linkage hierarchical cluster analysis (HCA). This computerized method bases its success on a self-consistent outlier reduction approach followed by the building of a descriptive function which permits defining natural clusters. Discarded objects can also be assigned to these clusters. Essentially, one need not resort to external parameters to identify natural clusters. Information gathered from HCA, automated and reliable, can be summarized in a dendrogram with the number of natural clusters and the corresponding separation, an option not found in classical HCA. This method includes the two following steps: removal of outliers (this is applied in many filtering applications), and an optional classification allowing the clusters to be expanded with the whole set of objects.[6]
BIRCH (balanced iterative reducing and clustering using hierarchies) is an algorithm used to perform connectivity-based clustering for large data sets.[7] It is regarded as one of the fastest clustering algorithms, but it is limited because it requires the number of clusters as an input. Therefore, new algorithms based on BIRCH have been developed in which there is no need to provide the cluster count from the beginning, while preserving the quality and speed of the clusters. The main modification is to remove the final step of BIRCH, where the user had to input the cluster count, and to improve the rest of the algorithm, referred to as tree-BIRCH, by optimizing a threshold parameter from the data. In this resulting algorithm, the threshold parameter is calculated from the maximum cluster radius and the minimum distance between clusters, which are often known. This method proved to be efficient for data sets of tens of thousands of clusters. Beyond that amount, a supercluster splitting problem is introduced. For this, other algorithms have been developed, like MDB-BIRCH, which reduces supercluster splitting with relatively high speed.[8]
Unlike partitioning and hierarchical methods,density-based clusteringalgorithms are able to find clusters of any arbitrary shape, not only spheres.
Density-based clustering algorithms use autonomous machine learning that identifies patterns regarding geographical location and distance to a particular number of neighbors. They are considered autonomous because a priori knowledge of what constitutes a cluster is not required.[9] This type of algorithm provides different methods to find clusters in the data. The fastest method is DBSCAN, which uses a defined distance to differentiate between dense groups of information and sparser noise. Moreover, HDBSCAN can self-adjust by using a range of distances instead of a specified one. Lastly, the method OPTICS creates a reachability plot based on the distance from neighboring features to separate noise from clusters of varying density.
These methods still require the user to provide the cluster center and cannot be considered automatic. The Automatic Local Density Clustering Algorithm (ALDC) is an example of the new research focused on developing automatic density-based clustering. ALDC works out the local density and distance deviation of every point, thus expanding the difference between the potential cluster center and other points. This expansion allows the machine to work automatically. The machine identifies cluster centers and assigns the remaining points to the cluster of their nearest neighbor of higher density.[10]
In the automation of data density to identify clusters, research has also been focused on artificially generating the algorithms. For instance, Estimation of Distribution Algorithms guarantee the generation of valid algorithms via a directed acyclic graph (DAG), in which nodes represent procedures (building blocks) and edges represent possible execution sequences between two nodes. Building blocks determine the EDA's alphabet or, in other words, any generated algorithm. In experimental results, the artificially generated clustering algorithms are compared to DBSCAN, a manually designed algorithm.[11]
|
https://en.wikipedia.org/wiki/Automatic_clustering_algorithms
|
Balanced clustering is a special case of clustering where, in the strictest sense, cluster sizes are constrained to ⌊n/k⌋{\displaystyle \lfloor {n \over k}\rfloor } or ⌈n/k⌉{\displaystyle \lceil {n \over k}\rceil }, where n{\displaystyle n} is the number of points and k{\displaystyle k} is the number of clusters.[1] A typical algorithm is balanced k-means, which minimizes mean square error (MSE). Another type of balanced clustering, called balance-driven clustering, has a two-objective cost function that minimizes both the imbalance and the MSE. Typical cost functions are ratio cut[2] and Ncut.[3] Balanced clustering can be used, for example, in scenarios where freight has to be delivered to n{\displaystyle n} locations with k{\displaystyle k} cars. It is then preferred that each car delivers to an equal number of locations.
There exist implementations of balanced k-means[4] and Ncut.[5]
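To make the size constraint concrete, here is a greedy, capacity-limited assignment step for fixed centers (an illustrative heuristic of this sketch's own devising, not the balanced k-means algorithm of the cited implementations):

```python
import math
import numpy as np

def balanced_assign(X, centers):
    """Greedy capacity-limited assignment: each cluster may hold at most
    ceil(n/k) points (assumes k >= 2)."""
    n, k = len(X), len(centers)
    cap = math.ceil(n / k)
    d = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(axis=2)
    # Handle points with the strongest preference first: largest gap between
    # their second-best and best squared distances.
    gap = np.partition(d, 1, axis=1)[:, 1] - d.min(axis=1)
    labels = np.full(n, -1)
    sizes = [0] * k
    for i in np.argsort(-gap):
        for j in np.argsort(d[i]):  # nearest center that still has capacity
            if sizes[j] < cap:
                labels[i] = int(j)
                sizes[j] += 1
                break
    return labels
```

Alternating such a constrained assignment with center updates gives a balanced variant of Lloyd's iteration; the published balanced k-means instead solves the assignment optimally (e.g. via the Hungarian algorithm).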
Levin, M. Sh. (2017). "On Balanced Clustering (Indices, Models, Examples)".Journal of Communications Technology and Electronics.62(12):1506–1515.doi:10.1134/S1064226917120105.S2CID255277095.
|
https://en.wikipedia.org/wiki/Balanced_clustering
|
Conceptual clustering is a machine learning paradigm for unsupervised classification that was defined by Ryszard S. Michalski in 1980 (Fisher 1987, Michalski 1980) and developed mainly during the 1980s. It is distinguished from ordinary data clustering by generating a concept description for each generated class. Most conceptual clustering methods are capable of generating hierarchical category structures; see Categorization for more information on hierarchy. Conceptual clustering is closely related to formal concept analysis, decision tree learning, and mixture model learning.
Conceptual clustering is obviously closely related to data clustering; however, in conceptual clustering it is not only the inherent structure of the data that drives cluster formation, but also theDescription languagewhich is available to the learner. Thus, a statistically strong grouping in the data may fail to be extracted by the learner if the prevailing concept description language is incapable of describing that particularregularity. In most implementations, the description language has been limited to featureconjunction, although in COBWEB (see "COBWEB" below), the feature language isprobabilistic.
A fair number of algorithms have been proposed for conceptual clustering. Some examples are given below:
More general discussions and reviews of conceptual clustering can be found in the following publications:
This section discusses the rudiments of the conceptual clustering algorithm COBWEB. There are many other algorithms using different heuristics and "category goodness" or category evaluation criteria, but COBWEB is one of the best known. The reader is referred to thebibliographyfor other methods.
The COBWEB data structure is a hierarchy (tree) wherein each node represents a givenconcept. Each concept represents a set (actually, amultisetor bag) of objects, each object being represented as a binary-valued property list. The data associated with each tree node (i.e., concept) are the integer property counts for the objects in that concept. For example, (see figure), let a conceptC1{\displaystyle C_{1}}contain the following four objects (repeated objects being permitted).
The three properties might be, for example, [is_male, has_wings, is_nocturnal]. Then what is stored at this concept node is the property count [1 3 3], indicating that 1 of the objects in the concept is male, 3 of the objects have wings, and 3 of the objects are nocturnal. The concept description is the category-conditional probability (likelihood) of the properties at the node. Thus, given that an object is a member of category (concept) C1{\displaystyle C_{1}}, the likelihood that it is male is 1/4=0.25{\displaystyle 1/4=0.25}. Likewise, the likelihood that the object has wings is 3/4=0.75{\displaystyle 3/4=0.75}, as is the likelihood that it is nocturnal. The concept description can therefore simply be given as [.25 .75 .75], which corresponds to the C1{\displaystyle C_{1}}-conditional feature likelihood, i.e., p(x|C1)=(0.25,0.75,0.75){\displaystyle p(x|C_{1})=(0.25,0.75,0.75)}.
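The count-to-likelihood step can be written out directly (a trivial sketch using the counts from the example above):

```python
# Property counts stored at concept node C1 for its four objects, in the
# order [is_male, has_wings, is_nocturnal] (values taken from the text).
counts = [1, 3, 3]
n_objects = 4

# The concept description is the vector of category-conditional likelihoods
# p(x | C1), obtained by dividing each count by the number of objects.
description = [c / n_objects for c in counts]
print(description)  # [0.25, 0.75, 0.75]
```

This mirrors how COBWEB stores only integer counts at each node and derives probabilities on demand.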
The figure to the right shows a concept tree with five concepts. C0{\displaystyle C_{0}} is the root concept, which contains all ten objects in the data set. Concepts C1{\displaystyle C_{1}} and C2{\displaystyle C_{2}} are the children of C0{\displaystyle C_{0}}, the former containing four objects and the latter containing six objects. Concept C2{\displaystyle C_{2}} is also the parent of concepts C3{\displaystyle C_{3}}, C4{\displaystyle C_{4}}, and C5{\displaystyle C_{5}}, which contain three, two, and one object, respectively. Note that each parent node (relative superordinate concept) contains all the objects contained by its child nodes (relative subordinate concepts). In Fisher's (1987) description of COBWEB, he indicates that only the total attribute counts (not conditional probabilities, and not object lists) are stored at the nodes. Any probabilities are computed from the attribute counts as needed.
The description language of COBWEB is a "language" only in a loose sense, because, being fully probabilistic, it is capable of describing any concept. However, if constraints are placed on the probability ranges which concepts may represent, then a stronger language is obtained. For example, we might permit only concepts wherein at least one probability differs from 0.5 by more than α{\displaystyle \alpha }. Under this constraint, with α=0.3{\displaystyle \alpha =0.3}, a concept such as [.6 .5 .7] could not be constructed by the learner; however, a concept such as [.6 .5 .9] would be accessible because at least one probability differs from 0.5 by more than α{\displaystyle \alpha }. Thus, under constraints such as these, we obtain something like a traditional concept language. In the limiting case where α=0.5{\displaystyle \alpha =0.5} for every feature, and thus every probability in a concept must be 0 or 1, the result is a feature language based on conjunction; that is, every concept that can be represented can then be described as a conjunction of features (and their negations), and concepts that cannot be described in this way cannot be represented.
In Fisher's (1987) description of COBWEB, the measure he uses to evaluate the quality of the hierarchy is Gluck and Corter's (1985)category utility(CU) measure, which he re-derives in his paper. The motivation for the measure is highly similar to the "information gain" measure introduced by Quinlan for decision tree learning. It has previously been shown that the CU for feature-based classification is the same as themutual informationbetween the feature variables and the class variable (Gluck & Corter, 1985; Corter & Gluck, 1992), and since this measure is much better known, we proceed here with mutual information as the measure of category "goodness".
What we wish to evaluate is the overall utility of grouping the objects into a particular hierarchical categorization structure. Given a set of possible classification structures, we need to determine whether one is better than another.
|
https://en.wikipedia.org/wiki/Conceptual_clustering
|
Consensus clustering is a method of aggregating (potentially conflicting) results from multiple clustering algorithms. Also called cluster ensembles[1] or aggregation of clustering (or partitions), it refers to the situation in which a number of different (input) clusterings have been obtained for a particular dataset and it is desired to find a single (consensus) clustering which is a better fit in some sense than the existing clusterings.[2] Consensus clustering is thus the problem of reconciling clustering information about the same data set coming from different sources or from different runs of the same algorithm. When cast as an optimization problem, consensus clustering is known as median partition, and has been shown to be NP-complete,[3] even when the number of input clusterings is three.[4] Consensus clustering for unsupervised learning is analogous to ensemble learning in supervised learning.
There are potential shortcomings for all existing clustering techniques. This may cause interpretation of results to become difficult, especially when there is no knowledge about the number of clusters. Clustering methods are also very sensitive to the initial clustering settings, which can cause non-significant data to be amplified in non-reiterative methods. An extremely important issue in cluster analysis is the validation of the clustering results, that is, how to gain confidence about the significance of the clusters provided by the clustering technique (cluster numbers and cluster assignments). Lacking an external objective criterion (the equivalent of a known class label in supervised analysis), this validation becomes somewhat elusive.
Iterative descent clustering methods, such as the SOM and k-means clustering, circumvent some of the shortcomings of hierarchical clustering by providing for univocally defined clusters and cluster boundaries. Consensus clustering provides a method that represents the consensus across multiple runs of a clustering algorithm, to determine the number of clusters in the data, and to assess the stability of the discovered clusters. The method can also be used to represent the consensus over multiple runs of a clustering algorithm with random restart (such as K-means, model-based Bayesian clustering, SOM, etc.), so as to account for its sensitivity to the initial conditions. It can provide data for a visualization tool to inspect cluster number, membership, and boundaries. However, such methods lack the intuitive and visual appeal of hierarchical clustering dendrograms, and the number of clusters must be chosen a priori.
The Monti consensus clustering algorithm[5] is one of the most popular consensus clustering algorithms and is used to determine the number of clusters, K{\displaystyle K}. Given a dataset of N{\displaystyle N} points to cluster, this algorithm works by resampling and clustering the data; for each K{\displaystyle K}, an N×N{\displaystyle N\times N} consensus matrix is calculated, where each element represents the fraction of times two samples clustered together. A perfectly stable matrix would consist entirely of zeros and ones, representing all sample pairs always clustering together or never clustering together over all resampling iterations. The relative stability of the consensus matrices can be used to infer the optimal K{\displaystyle K}.
More specifically, given a set of points to cluster,D={e1,e2,...eN}{\displaystyle D=\{e_{1},e_{2},...e_{N}\}}, letD1,D2,...,DH{\displaystyle D^{1},D^{2},...,D^{H}}be the list ofH{\displaystyle H}perturbed (resampled) datasets of the original datasetD{\displaystyle D}, and letMh{\displaystyle M^{h}}denote theN×N{\displaystyle N\times N}connectivity matrix resulting from applying a clustering algorithm to the datasetDh{\displaystyle D^{h}}. The entries ofMh{\displaystyle M^{h}}are defined as follows:
Mh(i,j)={1,ifpoints i and j belong to the same cluster0,otherwise{\displaystyle M^{h}(i,j)={\begin{cases}1,&{\text{if}}{\text{ points i and j belong to the same cluster}}\\0,&{\text{otherwise}}\end{cases}}}
Let Ih{\displaystyle I^{h}} be the N×N{\displaystyle N\times N} indicator matrix whose (i,j){\displaystyle (i,j)}-th entry is equal to 1 if points i{\displaystyle i} and j{\displaystyle j} appear in the same perturbed dataset Dh{\displaystyle D^{h}}, and 0 otherwise. The indicator matrix is used to keep track of which samples were selected during each resampling iteration, for the normalisation step. The consensus matrix C{\displaystyle C} is defined as the normalised sum of the connectivity matrices of all the perturbed datasets, and a different one is calculated for every K{\displaystyle K}.
C(i,j)=(∑h=1HMh(i,j)∑h=1HIh(i,j)){\displaystyle C(i,j)=\left({\frac {\textstyle \sum _{h=1}^{H}M^{h}(i,j)\displaystyle }{\sum _{h=1}^{H}I^{h}(i,j)}}\right)}
That is, the entry (i,j){\displaystyle (i,j)} in the consensus matrix is the number of times points i{\displaystyle i} and j{\displaystyle j} were clustered together divided by the total number of times they were selected together. The matrix is symmetric and each element is defined within the range [0,1]{\displaystyle [0,1]}. A consensus matrix is calculated for each K{\displaystyle K} to be tested, and the stability of each matrix, that is, how close the matrix is to a matrix of perfect stability (just zeros and ones), is used to determine the optimal K{\displaystyle K}. One way of quantifying the stability of the K{\displaystyle K}th consensus matrix is examining its CDF curve (see below).
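The connectivity/indicator bookkeeping above translates directly into code. A sketch (not Monti et al.'s original implementation; the `(indices, labels)` run format is this sketch's own convention):

```python
import numpy as np

def consensus_matrix(n, runs):
    """Consensus matrix over resampling runs.  Each run is a pair
    (indices, labels): the sample indices drawn in that iteration and the
    cluster label assigned to each of them."""
    M = np.zeros((n, n))   # summed connectivity matrices M^h
    I = np.zeros((n, n))   # summed indicator matrices I^h
    for indices, labels in runs:
        for a, i in enumerate(indices):
            for b, j in enumerate(indices):
                I[i, j] += 1
                if labels[a] == labels[b]:
                    M[i, j] += 1
    # Normalise by how often each pair was sampled together; pairs that were
    # never sampled together get 0.
    return np.where(I > 0, M / np.maximum(I, 1), 0.0)
```

Each entry is then exactly "times clustered together divided by times selected together", as in the formula for C(i,j).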
Monti consensus clustering can be a powerful tool for identifying clusters, but it needs to be applied with caution, as shown by Şenbabaoğlu et al.[6] It has been shown that the Monti consensus clustering algorithm is able to claim apparent stability of chance partitionings of null datasets drawn from a unimodal distribution, and thus has the potential to lead to over-interpretation of cluster stability in a real study.[6][7] If clusters are not well separated, consensus clustering could lead one to conclude apparent structure when there is none, or to declare cluster stability when it is subtle. Identifying false positive clusters is a common problem throughout cluster research,[8] and has been addressed by methods such as SigClust[8] and the GAP-statistic.[9] However, these methods rely on certain assumptions for the null model that may not always be appropriate.
Şenbabaoğlu et al.[6] demonstrated that the original delta-K metric for deciding K{\displaystyle K} in the Monti algorithm performed poorly, and proposed a new, superior metric for measuring the stability of consensus matrices using their CDF curves. In the CDF curve of a consensus matrix, the lower left portion represents sample pairs rarely clustered together, the upper right portion represents those almost always clustered together, whereas the middle segment represents those with ambiguous assignments in different clustering runs. The proportion of ambiguous clustering (PAC) score quantifies this middle segment; it is defined as the fraction of sample pairs with consensus indices falling in the interval (u1, u2) ∈ [0, 1], where u1 is a value close to 0 and u2 is a value close to 1 (for instance u1=0.1 and u2=0.9). A low value of PAC indicates a flat middle segment, and a low rate of discordant assignments across permuted clustering runs. One can therefore infer the optimal number of clusters as the K{\displaystyle K} value having the lowest PAC.[6][7]
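The PAC score is a one-liner over the upper triangle of the consensus matrix (a sketch; the u1/u2 defaults follow the example values in the text):

```python
import numpy as np

def pac_score(C, u1=0.1, u2=0.9):
    """Proportion of ambiguous clustering: the fraction of (off-diagonal)
    consensus entries that fall strictly inside (u1, u2)."""
    vals = C[np.triu_indices_from(C, k=1)]   # each sample pair counted once
    return float(np.mean((vals > u1) & (vals < u2)))
```

A perfectly stable consensus matrix (all entries 0 or 1) scores 0, while a maximally ambiguous one (entries near 0.5) scores 1; one picks the K with the lowest PAC.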
This approach by Strehl and Ghosh introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings. They discuss three approaches towards solving this problem to obtain high-quality consensus functions. Their techniques have low computational costs, which makes it feasible to evaluate each of the techniques discussed below and arrive at the best solution by comparing the results against the objective function.
Punera and Ghosh extended the idea of hard clustering ensembles to the soft clustering scenario. Each instance in a soft ensemble is represented by a concatenation of r posterior membership probability distributions obtained from the constituent clustering algorithms. We can define a distance measure between two instances using the Kullback–Leibler (KL) divergence, which calculates the "distance" between two probability distributions.[15]
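A minimal KL divergence helper illustrates the distance computation (the `eps` smoothing is an implementation choice of this sketch, not part of the cited method):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback–Leibler divergence D(p || q) between two discrete
    probability distributions (smoothed to avoid log(0))."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

In a soft ensemble one would apply this (often symmetrised, since KL is asymmetric) to the concatenated posterior membership vectors of two instances.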
|
https://en.wikipedia.org/wiki/Consensus_clustering
|
In computer science, constrained clustering is a class of semi-supervised learning algorithms. Typically, constrained clustering incorporates either a set of must-link constraints, a set of cannot-link constraints, or both, with a data clustering algorithm.[1] A cluster in which the members conform to all must-link and cannot-link constraints is called a chunklet.
Both a must-link and a cannot-link constraint define a relationship between two data instances. Together, the sets of these constraints act as a guide for which a constrained clustering algorithm will attempt to find chunklets (clusters in the dataset which satisfy the specified constraints).
Some constrained clustering algorithms will abort if no such clustering exists which satisfies the specified constraints. Others will try to minimize the amount of constraint violation should it be impossible to find a clustering which satisfies the constraints. Constraints could also be used to guide the selection of a clustering model among several possible solutions.[2]
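The core operation shared by many such algorithms is a constraint check before each assignment. A sketch in the spirit of COP-KMeans (the function name and signature are illustrative, not the published algorithm's code):

```python
def violates(point, cluster, labels, must_link, cannot_link):
    """Would assigning `point` to `cluster` break a constraint?
    `labels` maps already-assigned points to their cluster ids;
    constraints are lists of (point, point) pairs."""
    for a, b in must_link:
        other = b if a == point else a if b == point else None
        if other is not None and labels.get(other) is not None \
                and labels[other] != cluster:
            return True   # a must-link partner already sits in another cluster
    for a, b in cannot_link:
        other = b if a == point else a if b == point else None
        if other is not None and labels.get(other) == cluster:
            return True   # a cannot-link partner already sits in this cluster
    return False
```

A hard-constrained algorithm aborts (or tries the next-nearest cluster) when every candidate assignment violates some constraint; soft variants instead add a penalty per violation.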
Examples of constrained clustering algorithms include:
|
https://en.wikipedia.org/wiki/Constrained_clustering
|
In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case of non-overlapping community finding, this implies that the network divides naturally into groups of nodes with dense connections internally and sparser connections between groups. But overlapping communities are also allowed. The more general definition is based on the principle that pairs of nodes are more likely to be connected if they are both members of the same community or communities, and less likely to be connected if they do not share communities. A related but different problem is community search, where the goal is to find a community that a certain vertex belongs to.
In the study ofnetworks, such as computer and information networks, social networks and biological networks, a number of different characteristics have been found to occur commonly, including thesmall-world property,heavy-taileddegree distributions, andclustering, among others. Another common characteristic is community structure.[1][2][3][4][5]In the context of networks, community structure refers to the occurrence of groups of nodes in a network that are more densely connected internally than with the rest of the network, as shown in the example image to the right. This inhomogeneity of connections suggests that the network has certain natural divisions within it.
Communities are often defined in terms of the partition of the set of vertices, that is, each node is put into one and only one community, just as in the figure. This is a useful simplification, and most community detection methods find this type of community structure. However, in some cases a better representation could be one where vertices are in more than one community. This might happen in a social network where each vertex represents a person, and the communities represent the different groups of friends: one community for family, another community for co-workers, one for friends in the same sports club, and so on. The use of cliques for community detection discussed below is just one example of how such overlapping community structure can be found.
Some networks may not have any meaningful community structure. Many basic network models, for example, such as therandom graphand theBarabási–Albert model, do not display community structure.
Community structures are quite common in real networks. Social networks include community groups (the origin of the term, in fact) based on common location, interests, occupation, etc.[5][6]
Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large scale map of a network since individual communities act like meta-nodes in the network which makes its study easier.[7]
Individual communities also shed light on the function of the system represented by the network since communities often correspond to functional units of the system. In metabolic networks, such functional groups correspond to cycles or pathways whereas in theprotein interaction network, communities correspond to proteins with similar functionality inside a biological cell. Similarly, citation networks form communities by research topic.[1]Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other. Such insight can be useful in improving some algorithms on graphs such asspectral clustering.[8]
Importantly, communities often have very different properties than the average properties of the networks. Thus, only concentrating on the average properties usually misses many important and interesting features inside the networks. For example, in a given social network, both gregarious and reticent groups might exist simultaneously.[7]
Existence of communities also generally affects various processes like rumour spreading or epidemic spreading happening on a network. Hence to properly understand such processes, it is important to detect communities and also to study how they affect the spreading processes in various settings.
Finally, an important application that community detection has found in network science is the prediction of missing links and the identification of false links in the network. During the measurement process, some links may not get observed for a number of reasons. Similarly, some links could falsely enter into the data because of the errors in the measurement. Both these cases are well handled by community detection algorithm since it allows one to assign the probability of existence of an edge between a given pair of nodes.[9]
Finding communities within an arbitrary network can be acomputationallydifficult task. The number of communities, if any, within the network is typically unknown and the communities are often of unequal size and/or density. Despite these difficulties, however, several methods for community finding have been developed and employed with varying levels of success.[4]
One of the oldest algorithms for dividing networks into parts is theminimum cutmethod (and variants such as ratio cut and normalized cut). This method sees use, for example, in load balancing forparallel computingin order to minimize communication between processor nodes.
In the minimum-cut method, the network is divided into a predetermined number of parts, usually of approximately the same size, chosen such that the number of edges between groups is minimized. The method works well in many of the applications for which it was originally intended but is less than ideal for finding community structure in general networks since it will find communities regardless of whether they are implicit in the structure, and it will find only a fixed number of them.[10]
Another method for finding community structures in networks is hierarchical clustering. In this method one defines a similarity measure quantifying some (usually topological) type of similarity between node pairs. Commonly used measures include the cosine similarity, the Jaccard index, and the Hamming distance between rows of the adjacency matrix. Then one groups similar nodes into communities according to this measure. There are several common schemes for performing the grouping, the two simplest being single-linkage clustering, in which two groups are considered separate communities if and only if all pairs of nodes in different groups have similarity lower than a given threshold, and complete-linkage clustering, in which all nodes within every group have similarity greater than a threshold. An important step is how to determine the threshold at which to stop the agglomerative clustering, indicating a near-to-optimal community structure. A common strategy consists of building one or several metrics monitoring global properties of the network, which peak at a given step of the clustering. An interesting approach in this direction is the use of various similarity or dissimilarity measures, combined through convex sums.[11] Another approach is the computation of a quantity monitoring the density of edges within clusters with respect to the density between clusters, such as the partition density, which has been proposed when the similarity metric is defined between edges (which permits the definition of overlapping communities)[12] and extended to the case where the similarity is defined between nodes, which allows one to consider alternative definitions of communities such as guilds (i.e. groups of nodes sharing a similar number of links with respect to the same neighbours but not necessarily connected themselves).[13] These methods can be extended to consider multidimensional networks, for instance when we are dealing with networks having nodes with different types of links.[13]
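The three similarity measures named above can be computed directly on adjacency-matrix rows (the example vectors are made up for illustration):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two adjacency rows."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard_index(u, v):
    """|intersection| / |union| of the two neighbourhoods."""
    u, v = np.asarray(u, dtype=bool), np.asarray(v, dtype=bool)
    return float((u & v).sum() / (u | v).sum())

def hamming_distance(u, v):
    """Number of positions where the two rows differ."""
    return int((np.asarray(u) != np.asarray(v)).sum())

# Adjacency-matrix rows for two nodes with partly overlapping neighbourhoods:
a = np.array([1, 1, 0, 1, 0])
b = np.array([1, 1, 0, 0, 1])
```

Feeding the resulting pairwise similarity matrix to a standard single- or complete-linkage routine then yields the community dendrogram.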
Another commonly used algorithm for finding communities is theGirvan–Newman algorithm.[1]This algorithm identifies edges in a network that lie between communities and then removes them, leaving behind just the communities themselves. The identification is performed by employing the graph-theoretic measurebetweenness centrality, which assigns a number to each edge which is large if the edge lies "between" many pairs of nodes.
The Girvan–Newman algorithm returns results of reasonable quality and is popular because it has been implemented in a number of standard software packages. But it also runs slowly, taking time O(m2n) on a network ofnvertices andmedges, making it impractical for networks of more than a few thousand nodes.[14]
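A self-contained sketch of a single Girvan–Newman split, using Brandes-style accumulation for edge betweenness on a plain-dictionary graph. This is illustrative only, not the optimized published implementation.

```python
from collections import deque

def edge_betweenness(adj):
    """Unnormalized shortest-path edge betweenness (Brandes' accumulation)
    for an unweighted, undirected graph {node: set(neighbours)}."""
    bet = {}
    for s in adj:
        dist, sigma = {s: 0}, {v: 0 for v in adj}
        sigma[s] = 1
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                          # BFS counting shortest paths
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):         # back-propagate dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                e = tuple(sorted((v, w)))
                bet[e] = bet.get(e, 0.0) + c
                delta[v] += c
    return bet

def components(adj):
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = set(), deque([s])
        while q:
            v = q.popleft()
            if v not in comp:
                comp.add(v)
                q.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def girvan_newman_split(adj):
    """Remove highest-betweenness edges until the graph gains a component."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    start = len(components(adj))
    while len(components(adj)) <= start:
        bet = edge_betweenness(adj)
        u, w = max(bet, key=bet.get)
        adj[u].discard(w)
        adj[w].discard(u)
    return components(adj)

# Two triangles joined by the bridge (2, 3): the bridge is removed first.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(sorted(c) for c in girvan_newman_split(adj)))
```

The bridge carries every cross-triangle shortest path, so it has the largest betweenness and its removal yields the two communities.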
In spite of its known drawbacks, one of the most widely used methods for community detection is modularity maximization.[14] Modularity is a benefit function that measures the quality of a particular division of a network into communities. The modularity maximization method detects communities by searching over possible divisions of a network for one or more that have particularly high modularity. Since exhaustive search over all possible divisions is usually intractable, practical algorithms are based on approximate optimization methods such as greedy algorithms, simulated annealing, or spectral optimization, with different approaches offering different balances between speed and accuracy.[15][16] A popular modularity maximization approach is the Louvain method, which iteratively optimizes local communities until global modularity can no longer be improved given perturbations to the current community state.[17][18]
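Modularity itself is straightforward to evaluate. A small sketch of the standard Newman definition Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j), on a plain-dictionary graph (illustrative, not an optimizer):

```python
def modularity(adj, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    for an undirected graph given as {node: set(neighbours)}."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    two_m = sum(degree.values())                   # 2m, twice the edge count
    label = {v: i for i, comm in enumerate(communities) for v in comm}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] == label[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by the single edge (2, 3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))    # ≈ 0.357 for the natural split
```

A modularity-maximization method would search over partitions for one maximizing this quantity; here the natural two-triangle split scores 5/14 ≈ 0.357.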
The usefulness of modularity optimization is questionable, as it has been shown that modularity optimization often fails to detect clusters smaller than some scale, depending on the size of the network (resolution limit[19]); on the other hand the landscape of modularity values is characterized by a huge degeneracy of partitions with high modularity, close to the absolute maximum, which may be very different from each other.[20]
Methods based onstatistical inferenceattempt to fit agenerative modelto the network data, which encodes the community structure. The overall advantage of this approach compared to the alternatives is its more principled nature, and the capacity to inherently address issues ofstatistical significance. Most methods in the literature are based on thestochastic block model[21]as well as variants including mixed membership,[22][23]degree-correction,[24]and hierarchical structures.[25]Model selectioncan be performed using principled approaches such asminimum description length[26][27](or equivalently,Bayesian model selection[28]) andlikelihood-ratio test.[29]Currently many algorithms exist to perform efficient inference of stochastic block models, includingbelief propagation[30][31]and agglomerativeMonte Carlo.[32]
In contrast to approaches that attempt to cluster a network given an objective function, this class of methods is based on generative models, which not only serve as a description of the large-scale structure of the network, but also can be used togeneralizethe data and predict the occurrence of missing or spurious links in the network.[33][34]
Cliquesare subgraphs in which every node is connected to every other node in the clique. As nodes can not be more tightly connected than this, it is not surprising that there are many approaches to community detection in networks based on the detection of cliques in a graph and the analysis of how these overlap. Note that as a node can be a member of more than one clique, a node can be a member of more than one community in these methods giving an "overlapping community structure".
One approach is to find the "maximal cliques". That is to find the cliques which are not the subgraph of any other clique. The classic algorithm to find these is theBron–Kerbosch algorithm. The overlap of these can be used to define communities in several ways. The simplest is to consider only maximal cliques bigger than a minimum size (number of nodes). The union of these cliques then defines a subgraph whose components (disconnected parts) then define communities.[35]Such approaches are often implemented insocial network analysis softwaresuch as UCInet.
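A compact sketch of the basic Bron–Kerbosch recursion (without the pivoting or degeneracy-ordering refinements, so it is only suitable for small graphs):

```python
def bron_kerbosch(adj, r=None, p=None, x=None):
    """Yield all maximal cliques of {node: set(neighbours)}
    (basic Bron-Kerbosch, no pivoting)."""
    if r is None:
        r, p, x = set(), set(adj), set()
    if not p and not x:
        yield set(r)                 # r is a maximal clique
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p.remove(v)
        x.add(v)

# A 4-cycle with one chord: edges 0-1, 0-2, 0-3, 1-2, 2-3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(sorted(sorted(c) for c in bron_kerbosch(adj)))
```

The two maximal cliques {0,1,2} and {0,2,3} overlap in nodes 0 and 2, illustrating how clique-based methods naturally yield overlapping community structure.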
The alternative approach is to use cliques of fixed sizek{\displaystyle k}. The overlap of these can be used to define a type ofk{\displaystyle k}-regularhypergraphor a structure which is a generalisation of theline graph(the case whenk=2{\displaystyle k=2}) known as a "Clique graph".[36]The clique graphs have vertices which represent the cliques in the original graph while the edges of the clique graph record the overlap of the clique in the original graph. Applying any of the previous community detection methods (which assign each node to a community) to the clique graph then assigns each clique to a community. This can then be used to determine community membership of nodes in the cliques. Again as a node may be in several cliques, it can be a member of several communities.
For instance theclique percolation method[37]defines communities aspercolation clustersofk{\displaystyle k}-cliques. To do this it
finds allk{\displaystyle k}-cliques in a network, that is all the complete sub-graphs ofk{\displaystyle k}-nodes.
It then defines two k-cliques to be adjacent if they share k − 1 nodes; this adjacency is used to define the edges of a clique graph. A community is then defined to be the maximal union of k-cliques in which we can reach any k-clique from any other k-clique through a series of k-clique adjacencies. That is, communities are just the connected components in the clique graph. Since a node can belong to several different k-clique percolation clusters at the same time, the communities can overlap with each other.
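The steps above can be sketched directly, with brute-force k-clique enumeration and a union-find over cliques (only practical for small graphs; real clique-percolation implementations enumerate cliques far more efficiently):

```python
from itertools import combinations

def k_clique_communities(adj, k):
    """Clique percolation: communities are connected components of the
    clique graph whose vertices are k-cliques, adjacent iff they share
    k-1 nodes. Graph given as {node: set(neighbours)}."""
    cliques = [frozenset(c) for c in combinations(adj, k)
               if all(v in adj[u] for u, v in combinations(c, 2))]
    parent = list(range(len(cliques)))             # union-find over cliques

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) == k - 1:
            parent[find(i)] = find(j)

    comms = {}
    for i, c in enumerate(cliques):
        comms.setdefault(find(i), set()).update(c)
    return list(comms.values())

# Two triangles sharing only node 2: they overlap but do not percolate.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(sorted(sorted(c) for c in k_clique_communities(adj, 3)))
```

The two 3-cliques share only one node (not k − 1 = 2), so they form two communities that overlap at node 2.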
A network can be represented or projected onto alatent spaceviarepresentation learningmethods to efficiently represent a system. Then, variousclusteringmethods can be employed to detect community structures. For Euclidean spaces, methods like embedding-based Silhouette community detection[38]can be utilized. For Hypergeometric latent spaces, critical gap method or modified density-based, hierarchical, or partitioning-based clustering methods can be utilized.[39]
The evaluation of algorithms, to detect which are better at detecting community structure, is still an open question. It must be based on analyses of networks of known structure. A typical example is the "four groups" test, in which a network is divided into four equally-sized groups (usually of 32 nodes each) and the probabilities of connection within and between groups are varied to create more or less challenging structures for the detection algorithm. Such benchmark graphs are a special case of the planted ℓ-partition model[40] of Condon and Karp, or more generally of "stochastic block models", a general class of random network models containing community structure. Other, more flexible benchmarks have been proposed that allow for varying group sizes and nontrivial degree distributions, such as the LFR benchmark,[41][42] an extension of the four-groups benchmark that includes heterogeneous distributions of node degree and community size, making it a more severe test of community detection methods.[43][44]
Commonly used computer-generated benchmarks start with a network of well-defined communities. Then, this structure is degraded by rewiring or removing links and it gets harder and harder for the algorithms to detect the original partition. At the end, the network reaches a point where it is essentially random. This kind of benchmark may be called "open". The performance on these benchmarks is evaluated by measures such as normalizedmutual informationorvariation of information. They compare the solution obtained by an algorithm[42]with the original community structure, evaluating the similarity of both partitions.
In recent years, a rather surprising result obtained by various groups shows that a phase transition exists in the community detection problem: as the density of connections inside communities and between communities becomes more and more equal, or as both become smaller (equivalently, as the community structure becomes too weak or the network becomes too sparse), the communities suddenly become undetectable. In a sense, the communities themselves still exist, since the presence and absence of edges is still correlated with the community memberships of their endpoints; but it becomes information-theoretically impossible to label the nodes better than chance, or even to distinguish the graph from one generated by a null model such as the Erdős–Rényi model without community structure. This transition is independent of the type of algorithm being used to detect communities, implying that there exists a fundamental limit on our ability to detect communities in networks, even with optimal Bayesian inference (i.e., regardless of our computational resources).[45][46][47]
Consider a stochastic block model with n nodes in total, q = 2 groups of equal size, and let p_in and p_out be the connection probabilities inside and between the groups, respectively. If p_in > p_out, the network possesses community structure, since the link density inside the groups is higher than the density of links between the groups. In the sparse case, p_in and p_out scale as O(1/n) so that the average degree is constant: p_in = c_in/n and p_out = c_out/n for constants c_in and c_out.
Then it becomes impossible to detect the communities when (c_in − c_out)² ≤ 2(c_in + c_out).[46]
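For q = 2 equal-sized groups with c_in = n·p_in and c_out = n·p_out, the known threshold (due to Decelle, Krzakala, Moore and Zdeborová) says detection is possible only when (c_in − c_out)² > 2(c_in + c_out); a one-line check:

```python
def detectable(c_in, c_out):
    """Two-group sparse SBM: communities are (in principle) detectable
    iff (c_in - c_out)^2 > 2 * (c_in + c_out)."""
    return (c_in - c_out) ** 2 > 2 * (c_in + c_out)

print(detectable(5.0, 1.0))    # well-separated degrees: detectable
print(detectable(3.5, 2.5))    # same average degree, but too close: undetectable
```

Both examples have average degree 3, showing the transition depends on the gap c_in − c_out, not on the density alone.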
|
https://en.wikipedia.org/wiki/Community_structure#Algorithms_for_finding_communities
|
In computer science, data stream clustering is defined as the clustering of data that arrive continuously, such as telephone records, multimedia data, and financial transactions. Data stream clustering is usually studied as a streaming algorithm and the objective is, given a sequence of points, to construct a good clustering of the stream using a small amount of memory and time.
Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering,k-meansis a widely used heuristic but alternate algorithms have also been developed such ask-medoids,CUREand the popular[citation needed]BIRCH. For data streams, one of the first results appeared in 1980[1]but the model was formalized in 1998.[2]
The problem of data stream clustering is defined as:
Input:a sequence ofnpoints in metric space and an integerk.Output:kcenters in the set of thenpoints so as to minimize the sum of distances from data points to their closest cluster centers.
This is the streaming version of the k-median problem.
STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan[3]which achieves aconstant factor approximationfor the k-Median problem in a single pass and using small space.
Theorem — STREAM can solve the k-Median problem on a data stream in a single pass, with time O(n^{1+ε}) and space O(n^ε) up to a factor 2^{O(1/ε)}, where n is the number of points and ε < 1/2.
To understand STREAM, the first step is to show that clustering can take place in small space (not caring about the number of passes). Small-Space is adivide-and-conquer algorithmthat divides the data,S, intoℓ{\displaystyle \ell }pieces, clusters each one of them (usingk-means) and then clusters the centers obtained.
Algorithm Small-Space(S)
1. Divide S into ℓ disjoint pieces X_1, …, X_ℓ.
2. For each i, find O(k) centers in X_i; assign each point in X_i to its closest center.
3. Let X′ be the set of centers obtained in Step 2, where each center c is weighted by the number of points assigned to it.
4. Cluster X′ to find k centers.
If in Step 2 we run a bicriteria (a, b)-approximation algorithm, which outputs at most ak medians with cost at most b times the optimum k-Median solution, and in Step 4 we run a c-approximation algorithm, then the approximation factor of the Small-Space algorithm is 2c(1+2b)+2b. Small-Space can also be generalized so that it recursively calls itself i times on a successively smaller set of weighted centers, achieving a constant factor approximation to the k-median problem.
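A toy sketch of the divide-and-conquer idea on 1-D points, with plain Lloyd k-means standing in for the per-piece clustering subroutine (the theory above calls for bicriteria and c-approximation algorithms; k-means and all numbers here are purely illustrative assumptions):

```python
import random

def kmeans(points, k, weights=None, iters=25, seed=0):
    """Plain 1-D Lloyd's algorithm with optional per-point weights."""
    rng = random.Random(seed)
    w = weights or [1.0] * len(points)
    centers = rng.sample(points, k)
    for _ in range(iters):
        sums, mass = [0.0] * k, [0.0] * k
        for p, wt in zip(points, w):
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            sums[i] += wt * p
            mass[i] += wt
        centers = [sums[i] / mass[i] if mass[i] else centers[i]
                   for i in range(k)]
    return centers

def small_space(points, k, num_pieces):
    """Small-Space sketch: cluster each piece, weight each centre by the
    number of points assigned to it, then cluster the weighted centres."""
    size = (len(points) + num_pieces - 1) // num_pieces
    pieces = [points[i:i + size] for i in range(0, len(points), size)]
    centers, weights = [], []
    for piece in pieces:
        cs = kmeans(piece, k)
        counts = [0] * k
        for p in piece:       # weight = number of points assigned to the centre
            counts[min(range(k), key=lambda c: abs(p - cs[c]))] += 1
        centers.extend(cs)
        weights.extend(counts)
    return kmeans(centers, k, weights)

data = [1.0, 1.2, 0.9, 1.1, 10.0, 10.2, 9.8, 10.1] * 3
print(sorted(small_space(data, k=2, num_pieces=3)))
```

Only the ℓk weighted centres, never the full data set, are passed to the final clustering step, which is the point of the construction.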
The problem with Small-Space is that the number of subsets ℓ that we partition S into is limited, since it has to store the intermediate medians in X in memory. So, if M is the size of the memory, we need to partition S into ℓ subsets such that each subset fits in memory (n/ℓ points) and so that the ℓk weighted centers also fit in memory: ℓk < M. But such an ℓ may not always exist.
The STREAM algorithm solves the problem of storing intermediate medians and achieves better running time and space requirements. The algorithm works as follows:[3]
Other well-known algorithms used for data stream clustering are:
|
https://en.wikipedia.org/wiki/Data_stream_clustering
|
TheHCS (Highly Connected Subgraphs) clustering algorithm[1](also known as theHCS algorithm, and other names such asHighly Connected Clusters/Components/Kernels) is an algorithm based on graph connectivity forcluster analysis. It works by representing the similarity data in a similarity graph, and then finding all the highly connected subgraphs. It does not make any prior assumptions on the number of the clusters. This algorithm was published by Erez Hartuv andRon Shamirin 2000.
The HCS algorithm gives a clustering solution, which is inherently meaningful in the application domain, since each solution cluster must have diameter 2 while a union of two solution clusters will have diameter 3.
The goal of cluster analysis is to group elements into disjoint subsets, or clusters, based on similarity between elements, so that elements in the same cluster are highly similar to each other (homogeneity), while elements from different clusters have low similarity to each other (separation). A similarity graph is one model for representing the similarity between elements, which in turn facilitates the generation of clusters. To construct a similarity graph from similarity data, represent elements as vertices, and connect two vertices by an edge when the similarity value between them is above some threshold.
In the similarity graph, the more edges that exist for a given number of vertices, the more similar that set of vertices is. In other words, if we try to disconnect a similarity graph by removing edges, then the more edges we need to remove before the graph becomes disconnected, the more similar the vertices in the graph are. A minimum cut is a minimum set of edges whose removal disconnects the graph.
The HCS clustering algorithm finds all subgraphs with n vertices such that the minimum cut of the subgraph contains more than n/2 edges, and identifies them as clusters. Such a subgraph is called a highly connected subgraph (HCS). Single vertices are not considered clusters and are grouped into a singletons set S.
Given a similarity graph G(V,E), the HCS clustering algorithm checks whether it is already highly connected; if so, it returns G. Otherwise it uses the minimum cut of G to partition G into two subgraphs H and H', and recursively runs the HCS clustering algorithm on H and H'.
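The recursion can be sketched as follows, with an exponential brute-force minimum cut standing in for a real min-cut routine (so it is only viable on small graphs; singleton handling and the paper's refinements are omitted):

```python
from itertools import combinations

def min_cut(adj):
    """Brute-force minimum edge cut of a small graph: try every
    bipartition and count crossing edges."""
    nodes = list(adj)
    best, best_side = None, None
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            s = set(side)
            crossing = sum(1 for u in s for v in adj[u] if v not in s)
            if best is None or crossing < best:
                best, best_side = crossing, s
    return best, best_side

def hcs(adj):
    """Recursively split on the minimum cut until every part is
    highly connected (cut size > n/2); return the clusters."""
    n = len(adj)
    if n <= 1:
        return [set(adj)]
    cut, side = min_cut(adj)
    if cut > n / 2:            # highly connected subgraph: one cluster
        return [set(adj)]
    other = set(adj) - side
    sub = lambda s: {v: adj[v] & s for v in s}
    return hcs(sub(side)) + hcs(sub(other))

# Two K4's joined by a single bridge edge (3, 4).
adj = {v: set() for v in range(8)}
for group in ({0, 1, 2, 3}, {4, 5, 6, 7}):
    for u, v in combinations(group, 2):
        adj[u].add(v); adj[v].add(u)
adj[3].add(4); adj[4].add(3)
print(sorted(sorted(c) for c in hcs(adj)))
```

The first cut removes the single bridge (cut size 1 ≤ 8/2), and each K4 then has minimum cut 3 > 4/2, so it is returned as a cluster.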
The following animation shows how the HCS clustering algorithm partitions a similarity graph into three clusters.
The step of finding the minimum cut on graphGis a subroutine that can be implemented using different algorithms for this problem. See below for an example algorithm for finding minimum cut using randomization.
The running time of the HCS clustering algorithm is bounded by N × f(n, m), where f(n, m) is the time complexity of computing a minimum cut in a graph with n vertices and m edges, and N is the number of clusters found. In many applications N ≪ n.
For fast algorithms for finding a minimum cut in an unweighted graph:
The clusters produced by the HCS clustering algorithm possess several properties, which can demonstrate the homogeneity and separation of the solution.
Theorem 1The diameter of every highly connected graph is at most two.
Proof: Let n = |G|. If G has a vertex x with degree ≤ n/2, then the cut isolating x contains at most n/2 edges, so G is not highly connected. Hence, if G is highly connected, every vertex has degree > n/2. A well-known theorem in graph theory states that if every vertex has degree ≥ n/2, then the diameter of G (the longest shortest path between any two vertices) is at most 2.
Theorem 2(a) The number of edges in a highly connected graph is quadratic. (b) The number of edges removed by each iteration of the HCS algorithm is at most linear.
Proof: (a) From the proof of Theorem 1 we know that every vertex has degree > n/2. Therefore, the number of edges in a highly connected graph must be at least (n × n/2)/2 = n²/4, obtained by summing the degrees of all vertices and dividing by 2.
(b) By definition, each iteration removes a minimum cut with at most n/2 edges.
Theorems 1 and 2a provide a strong indication of a final cluster's homogeneity. Demanding more would approach the case where all vertices of a cluster are pairwise connected, which is both too stringent and NP-hard to find.
Theorem 2b indicates separation, since any two final clusters C1 and C2 would not have been separated unless the number of edges between them was at most linear in |C1| + |C2| (contrast this with the quadratic number of edges within clusters).
Singletons adoption: Elements left as singletons by the initial clustering process can be "adopted" by clusters based on similarity to the cluster. If the maximum number of neighbors to a specific cluster is large enough, then it can be added to that cluster.
Removing low-degree vertices: When the input graph has vertices with low degrees, it is not worthwhile to run the algorithm, since it is computationally expensive and not informative. Alternatively, a refinement of the algorithm can first remove all vertices with degree lower than a certain threshold.
|
https://en.wikipedia.org/wiki/HCS_clustering_algorithm
|
Inbioinformatics,sequence clusteringalgorithmsattempt to groupbiological sequencesthat are somehow related. The sequences can be either ofgenomic, "transcriptomic" (ESTs) orproteinorigin.
For proteins,homologous sequencesare typically grouped intofamilies. For EST data, clustering is important to group sequences originating from the samegenebefore the ESTs areassembledto reconstruct the originalmRNA.
Some clustering algorithms usesingle-linkage clustering, constructing atransitive closureof sequences with asimilarityover a particular threshold. UCLUST[1]and CD-HIT[2]use agreedy algorithmthat identifies arepresentative sequencefor each cluster and assigns a new sequence to that cluster if it is sufficiently similar to the representative; if a sequence is not matched then it becomes the representative sequence for a new cluster. The similarity score is often based onsequence alignment. Sequence clustering is often used to make anon-redundantset ofrepresentative sequences.
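A toy sketch of the greedy representative scheme described above (a naive positionwise identity score stands in for a real alignment-based similarity; UCLUST and CD-HIT use far more elaborate scoring and search heuristics):

```python
def identity(a, b):
    """Crude similarity: fraction of matching positions over the shorter
    length (a stand-in for a proper alignment score)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def greedy_cluster(seqs, threshold=0.9):
    """UCLUST/CD-HIT-style greedy clustering: each sequence joins the
    first representative it matches; otherwise it founds a new cluster."""
    clusters = []                       # list of (representative, members)
    for s in seqs:
        for rep, members in clusters:
            if identity(rep, s) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))   # s becomes a new representative
    return clusters

seqs = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "TTTTCCCG"]
for rep, members in greedy_cluster(seqs, 0.8):
    print(rep, members)
```

Note the result depends on input order, a known property of this greedy family of algorithms; tools typically sort sequences (e.g., by length) first.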
Sequence clusters are often synonymous with (but not identical to)protein families. Determining a representativetertiary structurefor each sequence cluster is the aim of manystructural genomicsinitiatives.
|
https://en.wikipedia.org/wiki/Sequence_clustering
|
Inmultivariate statistics,spectral clusteringtechniques make use of thespectrum(eigenvalues) of thesimilarity matrixof the data to performdimensionality reductionbeforeclusteringin fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
In application to image segmentation, spectral clustering is known assegmentation-based object categorization.
Given an enumerated set of data points, the similarity matrix may be defined as a symmetric matrix A, where A_ij ≥ 0 represents a measure of the similarity between data points with indices i and j. The general approach to spectral clustering is to use a standard clustering method (there are many such methods; k-means is discussed below) on relevant eigenvectors of a Laplacian matrix of A. There are many different ways to define a Laplacian, which have different mathematical interpretations, and so the clustering will also have different interpretations. The relevant eigenvectors are the ones corresponding to the several smallest eigenvalues of the Laplacian, except for the smallest eigenvalue, which will have a value of 0. For computational efficiency, these eigenvectors are often computed as the eigenvectors corresponding to the several largest eigenvalues of a function of the Laplacian.
Spectral clustering is well known to relate to partitioning of a mass-spring system, where each mass is associated with a data point and each spring stiffness corresponds to a weight of an edge describing a similarity of the two related data points, as in the spring system. Specifically, the classical reference[1] explains that the eigenvalue problem describing transversal vibration modes of a mass-spring system is exactly the same as the eigenvalue problem for the graph Laplacian matrix defined as
L := D − A,
where D is the diagonal matrix of degrees
D_ii = Σ_j A_ij,
and A is the adjacency matrix.
The masses that are tightly connected by the springs in the mass-spring system evidently move together from the equilibrium position in low-frequency vibration modes, so that the components of the eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian can be used for meaningful clustering of the masses. For example, assuming that all the springs and the masses are identical in the 2-dimensional spring system pictured, one would intuitively expect that the loosest connected masses on the right-hand side of the system would move with the largest amplitude and in the opposite direction to the rest of the masses when the system is shaken — and this expectation will be confirmed by analyzing components of the eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues, i.e., the smallestvibration frequencies.
The goal of normalization is to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights.
A popular normalized spectral clustering technique is the normalized cuts algorithm or Shi–Malik algorithm introduced by Jianbo Shi and Jitendra Malik,[2] commonly used for image segmentation. It partitions points into two sets (B_1, B_2) based on the eigenvector v corresponding to the second-smallest eigenvalue of the symmetric normalized Laplacian, defined as
L^sym := I − D^{−1/2} A D^{−1/2}.
The vectorv{\displaystyle v}is also theeigenvectorcorresponding to the second-largesteigenvalueof the symmetrically normalizedadjacency matrixD−1/2AD−1/2.{\displaystyle D^{-1/2}AD^{-1/2}.}
The random walk (or left) normalized Laplacian is defined as
L^rw := D^{−1} L = I − D^{−1} A
and can also be used for spectral clustering. A mathematically equivalent algorithm[3]takes theeigenvectoru{\displaystyle u}corresponding to the largesteigenvalueof therandom walk normalized adjacencymatrixP=D−1A{\displaystyle P=D^{-1}A}.
The eigenvectorv{\displaystyle v}of the symmetrically normalized Laplacian and the eigenvectoru{\displaystyle u}of the left normalized Laplacian are related by the identityD−1/2v=u.{\displaystyle D^{-1/2}v=u.}
Knowing then{\displaystyle n}-by-k{\displaystyle k}matrixV{\displaystyle V}of selected eigenvectors, mapping — called spectral embedding — of the originaln{\displaystyle n}data points is performed to ak{\displaystyle k}-dimensional vector space using the rows ofV{\displaystyle V}. Now the analysis is reduced to clustering vectors withk{\displaystyle k}components, which may be done in various ways.
In the simplest casek=1{\displaystyle k=1}, the selected single eigenvectorv{\displaystyle v}, called theFiedler vector, corresponds to the second smallest eigenvalue. Using the components ofv,{\displaystyle v,}one can place all points whose component inv{\displaystyle v}is positive in the setB+{\displaystyle B_{+}}and the rest inB−{\displaystyle B_{-}}, thus bi-partitioning the graph and labeling the data points with two labels. This sign-based approach follows the intuitive explanation of spectral clustering via the mass-spring model — in the low frequency vibration mode that theFiedler vectorv{\displaystyle v}represents, one cluster data points identified with mutually strongly connected masses would move together in one direction, while in the complement cluster data points identified with remaining masses would move together in the opposite direction. The algorithm can be used forhierarchical clusteringby repeatedly partitioning the subsets in the same fashion.
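A minimal sketch of this sign-based bipartition, assuming NumPy is available (the graph, sizes, and helper name are illustrative):

```python
import numpy as np

def fiedler_bipartition(A):
    """Split a graph by the sign of the Fiedler vector of L = D - A."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    v = vecs[:, 1]                      # Fiedler vector (2nd smallest)
    return np.where(v >= 0)[0], np.where(v < 0)[0]

# Two triangles joined by a single edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
pos, neg = fiedler_bipartition(A)
print(sorted(pos.tolist()), sorted(neg.tolist()))
```

The sign of each Fiedler-vector component assigns its node to one of the two sets; for this graph the split recovers the two triangles (which set gets which sign is arbitrary).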
In the general casek>1{\displaystyle k>1}, any vector clustering technique can be used, e.g.,DBSCAN.
If the similarity matrixA{\displaystyle A}has not already been explicitly constructed, the efficiency of spectral clustering may be improved if the solution to the corresponding eigenvalue problem is performed in amatrix-free fashion(without explicitly manipulating or even computing the similarity matrix), as in theLanczos algorithm.
For large-sized graphs, the second eigenvalue of the (normalized) graphLaplacian matrixis oftenill-conditioned, leading to slow convergence of iterative eigenvalue solvers.Preconditioningis a key technology accelerating the convergence, e.g., in the matrix-freeLOBPCGmethod. Spectral clustering has been successfully applied on large graphs by first identifying theircommunity structure, and then clustering communities.[4]
Spectral clustering is closely related tononlinear dimensionality reduction, and dimension reduction techniques such as locally-linear embedding can be used to reduce errors from noise or outliers.[5]
Denoting the number of the data points byn{\displaystyle n}, it is important to estimate thememory footprintand compute time, or number of arithmetic operations (AO) performed, as a function ofn{\displaystyle n}. No matter the algorithm of the spectral clustering, the two main costly items are the construction of the graph Laplacian and determining itsk{\displaystyle k}eigenvectors for the spectral embedding. The last step — determining the labels from then{\displaystyle n}-by-k{\displaystyle k}matrix of eigenvectors — is typically the least expensive requiring onlykn{\displaystyle kn}AO and creating just an{\displaystyle n}-by-1{\displaystyle 1}vector of the labels in memory.
The need to construct the graph Laplacian is common for all distance- or correlation-based clustering methods. Computing the eigenvectors is specific to spectral clustering only.
The graph Laplacian can be, and commonly is, constructed from the adjacency matrix. The construction can be performed matrix-free, i.e., without explicitly forming the matrix of the graph Laplacian, requiring no AO. It can also be performed in place of the adjacency matrix without increasing the memory footprint. Either way, the cost of constructing the graph Laplacian is essentially determined by the cost of constructing the n-by-n graph adjacency matrix.
Moreover, a normalized Laplacian has exactly the same eigenvectors as the normalized adjacency matrix, but with the order of the eigenvalues reversed. Thus, instead of computing the eigenvectors corresponding to the smallest eigenvalues of the normalized Laplacian, one can equivalently compute the eigenvectors corresponding to the largest eigenvalues of the normalized adjacency matrix, without even talking about the Laplacian matrix.
Naive constructions of the graphadjacency matrix, e.g., using the RBF kernel, make it dense, thus requiringn2{\displaystyle n^{2}}memory andn2{\displaystyle n^{2}}AO to determine each of then2{\displaystyle n^{2}}entries of the matrix. Nystrom method[6]can be used to approximate the similarity matrix, but the approximate matrix is not elementwise positive,[7]i.e. cannot be interpreted as a distance-based similarity.
Algorithms to construct the graph adjacency matrix as asparse matrixare typically based on anearest neighbor search, which estimate or sample a neighborhood of a given data point for nearest neighbors, and compute non-zero entries of the adjacency matrix by comparing only pairs of the neighbors. The number of the selected nearest neighbors thus determines the number of non-zero entries, and is often fixed so that the memory footprint of then{\displaystyle n}-by-n{\displaystyle n}graph adjacency matrix is onlyO(n){\displaystyle O(n)}, onlyO(n){\displaystyle O(n)}sequential arithmetic operations are needed to compute theO(n){\displaystyle O(n)}non-zero entries, and the calculations can be trivially run in parallel.
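A brute-force sketch of such a construction on 2-D points (real implementations replace the O(n²) distance scan with a nearest-neighbour search structure such as a k-d tree; the data and parameter values are illustrative):

```python
import math

def knn_adjacency(points, k):
    """Symmetrized k-nearest-neighbour adjacency as a dict of sets:
    O(n) non-zero entries for fixed k, built here by brute force."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        by_dist = sorted(range(n),
                         key=lambda j: math.dist(points[i], points[j]))
        for j in by_dist[1:k + 1]:     # skip the point itself
            adj[i].add(j)
            adj[j].add(i)              # symmetrize the relation
    return adj

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
adj = knn_adjacency(pts, 2)
print(adj)
```

With k = 2 each of the two well-separated groups links only internally, so the resulting sparse graph already exposes the cluster structure.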
The cost of computing the n-by-k (with k ≪ n) matrix of selected eigenvectors of the graph Laplacian is normally proportional to the cost of multiplying the n-by-n graph Laplacian matrix by a vector, which varies greatly depending on whether the graph Laplacian matrix is dense or sparse. For the dense case the cost is thus O(n²). The cost O(n³), very commonly cited in the literature, comes from choosing k = n and is clearly misleading, since, e.g., in hierarchical spectral clustering k = 1, as determined by the Fiedler vector.
In the sparse case of then{\displaystyle n}-by-n{\displaystyle n}graph Laplacian matrix withO(n){\displaystyle O(n)}non-zero entries, the cost of the matrix-vector product and thus of computing then{\displaystyle n}-by-k{\displaystyle k}withk≪n{\displaystyle k\ll n}matrix of selected eigenvectors isO(n){\displaystyle O(n)}, with the memory footprint also onlyO(n){\displaystyle O(n)}— both are the optimal low bounds of complexity of clusteringn{\displaystyle n}data points. Moreover, matrix-free eigenvalue solvers such asLOBPCGcan efficiently run in parallel, e.g., on multipleGPUswith distributed memory, resulting not only in high quality clusters, which spectral clustering is famous for, but also top performance.[8]
Free software implementing spectral clustering is available in large open source projects likescikit-learn[9]usingLOBPCG[10]withmultigridpreconditioning[11][12]orARPACK,MLlibfor pseudo-eigenvector clustering using thepower iterationmethod,[13]andR.[14]
The ideas behind spectral clustering may not be immediately obvious. It may be useful to highlight relationships with other methods. In particular, it can be described in the context of kernel clustering methods, which reveals several similarities with other approaches.[15]
Spectral clustering is closely related to thek-meansalgorithm, especially in how cluster assignments are ultimately made. Although the two methods differ fundamentally in their initial formulations—spectral clustering being graph-based and k-means being centroid-based—the connection becomes clear when spectral clustering is viewed through the lens ofkernel methods.
In particular,weighted kernel k-meansprovides a key theoretical bridge between the two. Kernel k-means is a generalization of the standard k-means algorithm, where data is implicitly mapped into a high-dimensional feature space through a kernel function, and clustering is performed in that space. Spectral clustering, especially the normalized versions, performs a similar operation by mapping the input data (or graph nodes) to a lower-dimensional space defined by theeigenvectors of the graph Laplacian. These eigenvectors correspond to the solution of arelaxationof thenormalized cutor other graph partitioning objectives.
Mathematically, the objective function minimized by spectral clustering can be shown to be equivalent to the objective function of weighted kernel k-means in this transformed space. This was formally established in works such as,[16] which demonstrated that normalized cuts are equivalent to a weighted version of kernel k-means applied to the rows of the normalized Laplacian's eigenvector matrix.
Because of this equivalence,spectral clustering can be viewed as performing kernel k-means in the eigenspace defined by the graph Laplacian. This theoretical insight has practical implications: the final clustering step in spectral clustering typically involves running thestandard k-means algorithmon the rows of the matrix formed by the first k eigenvectors of the Laplacian. These rows can be thought of as embedding each data point or node in a low-dimensional space where the clusters are better separated and hence easier for k-means to detect.
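The pipeline just described — embed the nodes via Laplacian eigenvectors, then run k-means on the rows — can be sketched as follows. The toy graph, its edge weight, and the deterministic seed rows are all invented for illustration; the row scaling follows the Ng–Jordan–Weiss variant mentioned in the text.

```python
import numpy as np

# Invented toy similarity graph: two 4-node cliques joined by one weak edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 0.1

d = A.sum(axis=1)
D_isqrt = np.diag(d ** -0.5)
L_sym = np.eye(8) - D_isqrt @ A @ D_isqrt   # symmetric normalized Laplacian

# Embed each node as the corresponding row of the first k eigenvectors;
# per the kernel-k-means view, k-means on these rows optimizes the
# relaxed normalized-cut objective.
k = 2
_, v = np.linalg.eigh(L_sym)                 # eigenvalues in ascending order
U = v[:, :k]
U /= np.linalg.norm(U, axis=1, keepdims=True)  # Ng-Jordan-Weiss row scaling

# Minimal Lloyd's k-means on the embedding (illustrative, not production code).
centers = U[[0, 7]]                          # two far-apart seed rows
for _ in range(20):
    labels = np.argmin(((U[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([U[labels == c].mean(axis=0) for c in range(k)])
```

In the embedded space the two cliques collapse to nearly identical rows, so even this bare-bones k-means recovers the intended partition.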
Additionally,multi-level methodshave been developed to directly optimize this shared objective function. These methods work by iterativelycoarseningthe graph to reduce problem size, solving the problem on a coarse graph, and thenrefiningthe solution on successively finer graphs. This leads to more efficient optimization for large-scale problems, while still capturing the global structure preserved by the spectral embedding.[17]
Spectral clustering is also conceptually related toDBSCAN(Density-Based Spatial Clustering of Applications with Noise), particularly in the special case where the spectral method is used to identifyconnected graph componentsof a graph. In this trivial case—where the goal is to identify subsets of nodes withno interconnecting edgesbetween them—the spectral method effectively reduces to a connectivity-based clustering approach, much like DBSCAN.[18]
DBSCAN operates by identifyingdensity-connected regionsin the input space: points that are reachable from one another via a sequence of neighboring points within a specified radius (ε), and containing a minimum number of points (minPts). The algorithm excels at discovering clusters of arbitrary shape and separating out noise without needing to specify the number of clusters in advance.
In spectral clustering, when the similarity graph is constructed using ahard connectivity criterion(i.e., binary adjacency based on whether two nodes are within a threshold distance), and no normalization is applied to the Laplacian, the resulting eigenstructure of the graph Laplacian directly revealsdisconnected componentsof the graph. This mirrors DBSCAN's ability to isolatedensity-connected components. The eigenvectors of the unnormalized Laplacian with eigenvalue zero correspond to these components, with one eigenvector per connected region.
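This trivial case can be verified directly: the multiplicity of eigenvalue 0 of the unnormalized Laplacian equals the number of connected components. The five-node graph below is invented for illustration.

```python
import numpy as np

# Invented graph with two connected components: a triangle {0,1,2}
# and a separate edge {3,4}, with no edges between them.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1

L = np.diag(A.sum(axis=1)) - A               # unnormalized graph Laplacian

# One zero eigenvalue per connected component.
w, _ = np.linalg.eigh(L)
n_components = int(np.isclose(w, 0).sum())
```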
This connection is most apparent when spectral clustering is used not to optimize a soft partition (like minimizing the normalized cut), but toidentify exact connected components—which corresponds to the most extreme form of “density-based” clustering, where only directly or transitively connected nodes are grouped together. Therefore, spectral clustering in this regime behaves like aspectral version of DBSCAN, especially in sparse graphs or when constructing ε-neighborhood graphs.
While DBSCAN operates directly in the data space using density estimates, spectral clustering transforms the data into an eigenspace whereglobal structure and connectivityare emphasized. Both methods are non-parametric in spirit, and neither assumes convex cluster shapes, which further supports their conceptual alignment.
Ravi Kannan, Santosh Vempala and Adrian Vetta[19]proposed a bicriteria measure to define the quality of a given clustering. They call a clustering an (α, ε)-clustering if theconductanceof each cluster is at least α and the weight of the inter-cluster edges is at most a fraction ε of the total weight of all the edges in the graph. The same paper also analyzes two approximation algorithms.
Spectral clustering has a long history.[20][21][22][23][24][2][25]Spectral clustering as amachine learningmethod was popularized by Shi & Malik[2]and Ng, Jordan, & Weiss.[25]
Ideas and network measures related to spectral clustering also play an important role in a number of applications apparently different from clustering problems. For instance, networks with stronger spectral partitions take longer to converge in opinion-updating models used in sociology and economics.[26][27]
|
https://en.wikipedia.org/wiki/Spectral_clustering
|
Ahierarchy(fromGreek:ἱεραρχία,hierarkhia, 'rule of a high priest', fromhierarkhes, 'president of sacred rites') is an arrangement of items (objects, names, values, categories, etc.) that are represented as being "above", "below", or "at the same level as" one another. Hierarchy is an important concept in a wide variety of fields, such asarchitecture,philosophy,design,mathematics,computer science,organizational theory,systems theory,systematic biology, and thesocial sciences(especiallypolitical science).
A hierarchy can link entities either directly or indirectly, and either vertically or diagonally. The only direct links in a hierarchy, insofar as they are hierarchical, are to one's immediate superior or to one of one'ssubordinates, although a system that is largely hierarchical can also incorporate alternative hierarchies. Hierarchical links can extend "vertically" upwards or downwards via multiple links in the same direction, following apath. All parts of the hierarchy that are not linked vertically to one another nevertheless can be "horizontally" linked through a path by traveling up the hierarchy to find a common direct or indirect superior, and then down again. This is akin to twoco-workersorcolleagues; each reports to a common superior, but they have the same relative amount of authority. Organizational forms exist that are both alternative and complementary to hierarchy.Heterarchyis one such form.
Hierarchies have their own special vocabulary. These terms are easiest to understand when a hierarchy is diagrammed (seebelow).
In an organizational context, the following terms are often used related to hierarchies:[2][3]
In a mathematical context (ingraph theory), thegeneral terminologyused is different.
Most hierarchies use a more specific vocabulary pertaining to their subject, but the idea behind them is the same. For example, withdata structures, objects are known asnodes, superiors are calledparentsand subordinates are calledchildren. In a business setting, a superior is asupervisor/bossand a peer is acolleague.
Degreeofbranchingrefers to the number of directsubordinatesor children an object has (in graph theory, the number of otherverticesa node is connected to via outgoing arcs in a directed graph). Hierarchies can be categorized based on the "maximum degree", the highest degree present in the system as a whole. Categorization in this way yields two broad classes:linearandbranching.
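The degree vocabulary above can be made concrete on a small tree data structure. The node names are invented for illustration.

```python
# A small sketch of hierarchy vocabulary on a tree data structure;
# node names are invented for illustration.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    @property
    def degree(self):                    # number of direct subordinates
        return len(self.children)

def max_degree(node):
    """Maximum degree in the hierarchy: 1 means linear, 2+ means branching."""
    return max([node.degree] + [max_degree(c) for c in node.children])

linear = Node("a", [Node("b", [Node("c")])])              # a chain: max degree 1
branching = Node("boss", [Node("peer1"), Node("peer2")])  # max degree 2
```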
In alinear hierarchy, the maximum degree is 1.[2]In other words, all of the objects can be visualized in a line-up, and each object (excluding the top and bottom ones) has exactly one direct subordinate and one direct superior. This is referring to theobjectsand not thelevels; every hierarchy has this property with respect to levels, but normally each level can have an infinite number of objects.
In abranching hierarchy, one or more objects has a degree of 2 or more (and therefore the maximum degree is 2 or higher).[2]For many people, the word "hierarchy" automatically evokes an image of a branching hierarchy.[2]Branching hierarchies are present within numerous systems, includingorganizationsandclassification schemes. The broad category of branching hierarchies can be further subdivided based on the degree.
Aflat hierarchy(also known for companies asflat organization) is a branching hierarchy in which the maximum degree approaches infinity, i.e., that has a wide span.[3]Most often, systems intuitively regarded as hierarchical have at most a moderate span. Therefore, a flat hierarchy is often not viewed as a hierarchy at all. For example,diamondsandgraphiteare flat hierarchies of numerouscarbonatoms that can be further decomposed into subatomic particles.
Anoverlapping hierarchyis a branching hierarchy in which at least one object has two parent objects.[2]For example, agraduate studentcan have twoco-supervisorsto whom the student reports directly and equally, and who have the same level of authority within theuniversityhierarchy (i.e., they have the samepositionortenurestatus).
Possibly the first use of the English wordhierarchycited by theOxford English Dictionarywas in 1881, when it was used in reference to the three orders of three angels as depicted byPseudo-Dionysius the Areopagite(5th–6th centuries). Pseudo-Dionysius used the relatedGreekword (ἱεραρχία,hierarchia) both in reference to thecelestial hierarchyand theecclesiastical hierarchy.[4]The Greek termhierarchiameans 'rule of a high priest',[5]fromhierarches(ἱεράρχης, 'president of sacred rites, high-priest')[6]and that fromhiereus(ἱερεύς, 'priest')[7]andarche(ἀρχή, 'first place or power, rule').[8]Dionysius is credited with first use of it as an abstract noun.
Since hierarchical churches, such as theRoman Catholic(seeCatholic Church hierarchy) andEastern Orthodoxchurches, had tables of organization that were "hierarchical" in the modern sense of the word (traditionally withGodas the pinnacle or head of the hierarchy), the term came to refer to similar organizational methods insecularsettings.
A hierarchy is typically depicted as apyramid, where the height of a level represents that level's status and width of a level represents the quantity of items at that level relative to the whole.[9]For example, the fewDirectorsof a company could be at theapex, and thebasecould be thousands of people who have no subordinates.
These pyramids are oftendiagrammedwith atrianglediagram which serves to emphasize the size differences between the levels (but not all triangle/pyramid diagrams are hierarchical; for example, the 1992USDA food guide pyramid). An example of a triangle diagram appears to the right.
Another common representation of a hierarchical scheme is as atree diagram.Phylogenetic trees,chartsshowing the structure of§ Organizations, andplayoff bracketsin sports are often illustrated this way.
More recently, as computers have allowed the storage and navigation of ever larger data sets, various methods have been developed to represent hierarchies in a manner that makes more efficient use of the available space on a computer's screen. Examples includefractalmaps,TreeMapsandRadial Trees.
In the design field, mainly graphic design, successful layouts and formatting of the content on documents are heavily dependent on the rules ofvisual hierarchy. Visual hierarchy is also important for proper organization of files on computers.
An example of visually representing hierarchy is through nested clusters. Nested clusters represent hierarchical relationships using layers of information. The child element is within the parent element, such as in aVenn diagram. This structure is most effective in representing simple hierarchical relationships. For example, when directing someone to open a file on a computer desktop, one may first direct them towards the main folder, then the subfolders within the main folder. They will keep opening files within the folders until the designated file is located.
For more complicated hierarchies, the stair structure represents hierarchical relationships through the use of visual stacking. Visually imagine the top of a downward staircase beginning at the left and descending on the right. Child elements are towards the bottom of the stairs and parent elements are at the top. This structure represents hierarchical relationships through the use of visual stacking.
In plain English, a hierarchy can be thought of as asetin which:[2]
The first requirement is also interpreted to mean that a hierarchy can have nocircular relationships; the association between two objects is alwaystransitive.
The second requirement asserts that a hierarchy must have a leader orrootthat is common to all of the objects.
Mathematically, in its most general form, a hierarchy is apartially ordered setorposet.[10]Thesystemin this case is the entire poset, which is constituted of elements. Within this system, each element shares a particular unambiguous property. Objects with the same property value are grouped together, and each of those resultinglevelsis referred to as aclass.
"Hierarchy" is particularly used to refer to a poset in which the classes are organized in terms of increasing complexity.
Operations such as addition, subtraction, multiplication and division are often performed in a certain sequence or order. Usually, addition and subtraction are performed after multiplication and division have already been applied to a problem. The use of parentheses is also a representation of hierarchy, for they show which operation is to be done prior to the following ones. For example:
(2 + 5) × (7 - 4).
Without the parentheses, one would multiply 5 by 7 first, based on the rules of mathematical hierarchy; the parentheses signal that the operations inside them are to be done before continuing with the problem. These rules dominate in algebraic problems that take several steps to solve. The use of hierarchy in mathematics makes it possible to solve a problem quickly and efficiently without slowly dissecting it, and most of these rules are now accepted as the proper way to solve certain equations.
A nested hierarchy orinclusion hierarchyis a hierarchical ordering ofnested sets.[11]The concept of nesting is exemplified in Russianmatryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level. The general concept is both demonstrated and mathematically formulated in the following example:
A square can always also be referred to as a quadrilateral, polygon or shape. In this way, it is a hierarchy. However, consider the set of polygons using this classification. A square canonlybe a quadrilateral; it can never be atriangle,hexagon, etc.
Nested hierarchies are the organizational schemes behindtaxonomiesand systematic classifications. For example, using the originalLinnaean taxonomy(the version he laid out in the 10th edition ofSystema Naturae), a human can be formulated as:[12]
Taxonomies may change frequently (as seen inbiological taxonomy), but the underlying concept of nested hierarchies is always the same.
In many programming taxonomies and syntax models (as well as fractals in mathematics), nested hierarchies, including Russian dolls, are also used to illustrate the properties ofself-similarityandrecursion. Recursion itself is included as a subset of hierarchical programming, and recursive thinking can be synonymous with a form of hierarchical thinking and logic.[13]
A containment hierarchy is a direct extrapolation of thenested hierarchyconcept. All of the ordered sets are still nested, but every set must be "strict"—no two sets can be identical. The shapes example above can be modified to demonstrate this:
The notationx⊊y{\displaystyle x\subsetneq y\,}meansxis a subset ofybut is not equal toy.
A general example of a containment hierarchy is demonstrated inclass inheritanceinobject-oriented programming.
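The class-inheritance example can be sketched by reusing the shapes hierarchy above; the class bodies are empty placeholders.

```python
# A brief sketch of a containment ("is-a") hierarchy via class inheritance,
# mirroring the shapes example above.
class Shape: pass
class Polygon(Shape): pass
class Quadrilateral(Polygon): pass
class Square(Quadrilateral): pass

# Each class is strictly contained in its ancestors: every Square "is a"
# Quadrilateral, Polygon, and Shape, but not the other way around.
assert issubclass(Square, Shape)
assert not issubclass(Quadrilateral, Square)
```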
Two types of containment hierarchies are thesubsumptivecontainment hierarchy and thecompositionalcontainment hierarchy. A subsumptive hierarchy "subsumes" its children, and a compositional hierarchy is "composed" of its children. A hierarchy can also be both subsumptiveandcompositional[example needed].[14]
Asubsumptivecontainment hierarchy is a classification of object classes from the general to the specific. Other names for this type of hierarchy are "taxonomic hierarchy" and "IS-Ahierarchy".[10][15][16]The last term describes the relationship between each level—a lower-level object "is a" member of the higher class. The taxonomical structure outlined above is a subsumptive containment hierarchy. Using again the example of Linnaean taxonomy, it can be seen that an object that is a member of the levelMammalia"is a" member of the levelAnimalia; more specifically, a human "is a" primate, a primate "is a" mammal, and so on. A subsumptive hierarchy can also be defined abstractly as a hierarchy of "concepts".[16]For example, with the Linnaean hierarchy outlined above, an entity name likeAnimaliais a way to group all the species that fit theconceptualizationof an animal.
Acompositionalcontainment hierarchy is an ordering of the parts that make up a system—the system is "composed" of these parts.[17]Most engineered structures, whether natural or artificial, can be broken down in this manner.
The compositional hierarchy that every person encounters at every moment is thehierarchy of life. Every person can be reduced toorgan systems, which are composed oforgans, which are composed oftissues, which are composed ofcells, which are composed ofmolecules, which are composed ofatoms. In fact, the last two levels apply to allmatter, at least at themacroscopic scale. Moreover, each of these levels inherits all the properties of itschildren.
In this particular example, there are alsoemergent properties—functions that are not seen at the lower level (e.g.,cognitionis not a property ofneuronsbut is of thebrain)—and a scalar quality (molecules are bigger than atoms, cells are bigger than molecules, etc.). Both of these concepts commonly exist in compositional hierarchies, but they are not a required general property. Theselevel hierarchiesare characterized by bi-directionalcausation.[11]Upward causationinvolves lower-level entities causing some property of a higher level entity; children entities may interact to yield parent entities, and parents are composed at least partly by their children.Downward causationrefers to the effect that the incorporation of entityxinto a higher-level entity can have onx's properties and interactions. Furthermore, the entities found at each level areautonomous.
Kulish (2002) suggests that almost every system of organization which humans apply to the world is arranged hierarchically.[18][need quotation to verify]Some conventional definitions of the terms "nation"[19][failed verification]and "government"[20][failed verification]suggest that everynationhas a government and that every government is hierarchical. Sociologists can analyse socioeconomic systems in terms of stratification into a social hierarchy (thesocial stratificationof societies), and allsystematic classification schemes(taxonomies) are hierarchical.[21]Mostorganized religions, regardless of their internal governance structures, operate as a hierarchy underdeitiesandpriesthoods. ManyChristian denominationshave anautocephalousecclesiastical hierarchyofleadership. Families can be viewed as hierarchical structures in terms ofcousinship(e.g., first cousin once removed, second cousin, etc.),ancestry(as depicted in afamily tree) andinheritance(successionandheirship). All the requisites of a well-rounded life andlifestylecan be organized usingMaslow's hierarchy of human needs.Learningsteps often follow a hierarchical scheme—to masterdifferential equationsone must first learncalculus; to learn calculus one must first learnelementary algebra; and so on.Natureoffers hierarchical structures, as numerous schemes such asLinnaean taxonomy, theorganization of life, andbiomass pyramidsattempt to document.[22][need quotation to verify][23]
While the above examples are often[quantify]clearly depicted in a hierarchical form and are classic examples, hierarchies exist in numerous systems where this branching structure is not immediately apparent. For example, mostpostal-codesystems are hierarchical. Using theCanadian postal code systemas an example, the top level's binding concept, the"postal district", consists of 18 objects (letters). The next level down is the "zone", where the objects are the digits 0–9. This is an example of anoverlapping hierarchy, because each of these 10 objects has 18 parents. The hierarchy continues downward to generate, in theory, 7,200,000 unique codes of the formatA0A 0A0(the second and third letter positions allow 20 objects each). Mostlibrary classificationsystems are also hierarchical. TheDewey Decimal Systemis infinitely hierarchical because there is no finite bound on the number of digits that can be used after the decimal point.[24]
Organizationscan be structured as adominance hierarchy. In an organizational hierarchy, there is a single person or group with the mostpowerorauthority, and each subsequent level represents a lesser authority. Most organizations are structured in this manner,[25]includinggovernments,companies,armed forces,militiaandorganized religions. The units or persons within an organization may be depicted hierarchically in anorganizational chart.
In areverse hierarchy, the conceptualpyramidof authority is turned upside-down, so that the apex is at the bottom and the base is at the top. This mode represents the idea that members of the higher rankings are responsible for the members of the lower rankings.
Empirically, a large proportion of the complex biological systems observed in nature exhibit hierarchic structure.[26]On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity.[27]System hierarchies analysis performed in the 1950s[28][29]laid the empirical foundations for afieldthat would become, from the 1980s,hierarchical ecology.[30][31][32][33][34]
The theoretical foundations are summarized bythermodynamics.
Whenbiological systemsare modeled asphysical systems, in the most general abstraction, they arethermodynamic open systemsthat exhibitself-organisedbehavior, and theset/subsetrelations betweendissipative structurescan be characterized[by whom?]in a hierarchy.
Other hierarchical representations related to biology includeecological pyramidswhich illustrate energy flow ortrophic levelsinecosystems, andtaxonomichierarchies, including theLinnean classificationscheme andphylogenetic treesthat reflect inferred patterns of evolutionary relationship among living and extinct species.
CGIandcomputer-animationprogramsmostly use hierarchies for models. On a3Dmodel of ahumanfor example, thechestis aparentof the upper left arm, which is a parent of the lower left arm, which is a parent of thehand. This pattern is used inmodelingandanimationfor almost everything built as a 3Ddigitalmodel.
Many grammatical theories, such asphrase-structure grammar, involve hierarchy.
Direct–inverse languagessuch asCreeandMapudungundistinguish subject and object onverbsnot by different subject and object markers, but via a hierarchy of persons.
In this system, the three (or four withAlgonquian languages) persons occur in a hierarchy ofsalience. To distinguish which is subject and which object,inverse markersare used if the object outranks the subject.
On the other hand, languages include a variety of phenomena that are not hierarchical. For example, the relationship between a pronoun and a prior noun-phrase to which it refers commonly crosses grammatical boundaries in non-hierarchical ways.
The structure of a musical composition is often understood hierarchically (for example byHeinrich Schenker(1868–1935, seeSchenkerian analysis), and in the (1983)Generative Theory of Tonal Music, by composerFred Lerdahland linguistRay Jackendoff). The sum of all notes in a piece is understood to be an all-inclusive surface, which can be reduced to successively more sparse and more fundamental types of motion. The levels of structure that operate in Schenker's theory are the foreground, which is seen in all the details of the musical score; the middle ground, which is roughly a summary of an essential contrapuntal progression and voice-leading; and the background orUrsatz, which is one of only a few basic "long-range counterpoint" structures that are shared in the gamut of tonal music literature.
Thepitchesandformoftonalmusic are organized hierarchically, all pitches deriving their importance from their relationship to atonickey, and secondary themes in otherkeysare brought back to the tonic in a recapitulation of the primary theme.
In the work of diverse theorists such asWilliam James(1842 to 1910),Michel Foucault(1926 to 1984) andHayden White(1928 to 2018), important critiques of hierarchicalepistemologyare advanced. James famously asserts in his workRadical Empiricismthat clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved.
Feminists,Marxists,anarchists,communists,critical theoristsand others, all of whom have multiple interpretations, criticize the hierarchies commonly found within human society, especially in social relationships. Hierarchies are present in all parts of society: in businesses, schools, families, etc. These relationships are often viewed as necessary. Entities that stand in hierarchical arrangements are animals, humans, plants, etc.
Inethics, variousvirtuesare enumerated and sometimes organized hierarchically according to certain brands ofvirtue theory.
In some of these random examples, there is an asymmetry of 'compositional' significance between levels of structure, so that small parts of the whole hierarchical array depend, for their meaning, on their membership in larger parts. There is a hierarchy of activities in human life: productive activity serves or is guided by the moral life; the moral life is guided by practical reason; practical reason (used in moral and political life) serves contemplative reason (whereby we contemplate God). Practical reason sets aside time and resources for contemplative reason.
|
https://en.wikipedia.org/wiki/Hierarchy
|
Inmathematics,computer scienceandnetwork science,network theoryis a part ofgraph theory. It definesnetworksasgraphswhere the vertices or edges possess attributes. Network theory analyses these networks over thesymmetric relationsorasymmetric relationsbetween their (discrete) components.
Network theory has applications in many disciplines, includingstatistical physics,particle physics, computer science,electrical engineering,[1][2]biology,[3]archaeology,[4]linguistics,[5][6][7]economics,finance,operations research,climatology,ecology,public health,[8][9][10]sociology,[11]psychology,[12]andneuroscience.[13][14][15]Applications of network theory includelogisticalnetworks, theWorld Wide Web,Internet,gene regulatory networks, metabolic networks,social networks,epistemologicalnetworks, etc.; seeList of network theory topicsfor more examples.
Euler's solution of theSeven Bridges of Königsberg problemis considered to be the first true proof in the theory of networks.
Network problems that involve finding an optimal way of doing something are studied ascombinatorial optimization. Examples includenetwork flow,shortest path problem,transport problem,transshipment problem,location problem,matching problem,assignment problem,packing problem,routing problem,critical path analysis, andprogram evaluation and review technique.
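As one concrete instance of the optimization problems listed above, the shortest path problem can be sketched with Dijkstra's algorithm; the toy graph and its edge weights are invented for illustration.

```python
import heapq

# A compact sketch of Dijkstra's algorithm for the shortest path problem.
def dijkstra(graph, source):
    """Return shortest distances from source in a weighted digraph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}   # invented example graph
```

Here the direct edge a→c costs 4, but the path a→b→c costs only 3, which the algorithm finds.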
The analysis of electric power systems could be conducted using network theory from two main points of view:
Social network analysisexamines the structure of relationships between social entities.[17]These entities are often persons, but may also begroups,organizations,nation states,web sites, orscholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of themathematicalandstatisticaltools used for studying networks have been first developed insociology.[18]Amongst many other applications, social network analysis has been used to understand thediffusion of innovations, news and rumors.[19]Similarly, it has been used to examine the spread of bothdiseasesandhealth-related behaviors.[20]It has also been applied to thestudy of markets, where it has been used to examine the role of trust inexchange relationshipsand of social mechanisms in setting prices.[21]It has been used to study recruitment intopolitical movements, armed groups, and other social organizations.[22]It has also been used to conceptualize scientific disagreements[23]as well as academic prestige.[24]More recently, network analysis (and its close cousintraffic analysis) has gained a significant use in military intelligence,[25]for uncovering insurgent networks of both hierarchical andleaderlessnature.[citation needed]
With the recent explosion of publicly available high throughputbiological data, the analysis of molecular networks has gained significant interest.[26]The type of analysis in this context is closely related to social network analysis, but often focusing on local patterns in the network. For example,network motifsare small subgraphs that are over-represented in the network. Similarly,activity motifsare patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. Using networks to analyze patterns in biological systems, such as food-webs, allows us to visualize the nature and strength of interactions between species. The analysis ofbiological networkswith respect to diseases has led to the development of the field ofnetwork medicine.[27]Recent examples of application of network theory in biology include applications to understanding thecell cycle[28]as well as a quantitative framework for developmental processes.[29]
The automatic parsing oftextual corporahas enabled the extraction of actors and their relational networks on a vast scale. The resultingnarrative networks, which can contain thousands of nodes, are then analyzed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes.[31]This automates the approach introduced by Quantitative Narrative Analysis,[32]whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.[30]
Link analysisis a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed bybanksandinsuranceagencies infrauddetection, by telecommunication operators in telecommunication network analysis, by the medical sector inepidemiologyandpharmacology, in law enforcement investigations, bysearch enginesforrelevancerating (and conversely by thespammersforspamdexingand by business owners forsearch engine optimization), and everywhere else where relationships between many objects have to be analyzed. Links are also derived from similarity of time behavior in both nodes. Examples include climate networks where the links between two locations (nodes) are determined, for example, by the similarity of the rainfall or temperature fluctuations in both sites.[33][34]
SeveralWeb searchrankingalgorithms use link-based centrality metrics, includingGoogle'sPageRank, Kleinberg'sHITS algorithm, theCheiRankandTrustRankalgorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' websites or blogs. Another use is for classifying pages according to their mention in other pages.[35]
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. For example, eigenvector centrality uses the eigenvectors of the adjacency matrix corresponding to a network to determine nodes that tend to be frequently visited. Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, subgraph centrality, and Katz centrality. The purpose or objective of analysis generally determines the type of centrality measure to be used. For example, if one is interested in dynamics on networks or the robustness of a network to node/link removal, the dynamical importance[36]of a node is often the most relevant centrality measure.
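As a concrete illustration of eigenvector centrality, the following sketch runs power iteration on a small hand-made adjacency matrix (the graph and function names are mine, not from any particular library):

```python
import numpy as np

# Adjacency matrix of a small undirected graph: node 0 is a hub joined to
# nodes 1, 2, 3; nodes 1 and 2 are also joined; node 4 hangs off node 3.
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

def eigenvector_centrality(A, iters=200):
    """Power iteration: repeatedly apply A and renormalize; for a connected,
    non-bipartite graph this converges to the leading eigenvector."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

c = eigenvector_centrality(A)
print(np.round(c, 3), c.argmax())  # the hub, node 0, scores highest
```

The hub accumulates the highest score because its neighbors, in turn, point back to it; production code would typically call a library eigensolver rather than hand-rolled iteration.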
These concepts are used to characterize the linking preferences of hubs in a network. Hubs are nodes which have a large number of links. Some hubs tend to link to other hubs while others avoid connecting to hubs and prefer to connect to nodes with low connectivity. We say a hub is assortative when it tends to connect to other hubs. A disassortative hub avoids connecting to other hubs. If hubs have connections with the expected random probabilities, they are said to be neutral. There are three methods to quantify degree correlations.[37]
The recurrence matrix of a recurrence plot can be considered as the adjacency matrix of an undirected and unweighted network. This allows for the analysis of time series by network measures. Applications range from detecting regime changes to characterizing dynamics and analyzing synchronization.[38][39][40]
Many real networks are embedded in space. Examples include transportation and other infrastructure networks, as well as brain neural networks. Several models for spatial networks have been developed.[41]
Other networks emphasise the evolution over time of systems of nodes and their interconnections. Temporal networks are used for example to study how financial risk has spread across countries.[42]In this study, temporal networks are used to also visually trace the intricate dynamics of financial contagion during crises. Unlike traditional network approaches that aggregate or analyze static snapshots, the study uses a time-respecting path methodology to preserve the sequence and timing of financial crises contagion events. This enables the identification of nodes as sources, transmitters, or receivers of financial stress, avoiding mischaracterizations inherent in static or aggregated methods. Following this approach, banks are found to serve as key intermediaries in contagion paths, and temporal analysis pinpoints smaller countries like Greece and Italy as significant origins of shocks during crises—insights obscured by static approaches that overemphasize large economies like the US or Japan.
Temporal networks can also be used to explore how cooperation evolves in dynamic, real-world population structures where interactions are time-dependent.[43]Here the authors find that network temporality enhances cooperation compared to static networks, even though "bursty" interaction patterns typically hinder it. This finding also shows how cooperation and other emergent behaviours can thrive in realistic, time-varying population structures, challenging conventional assumptions rooted in static models.
In psychology, temporal networks enable the understanding of psychological disorders by framing them as dynamic systems of interconnected symptoms rather than outcomes of a single underlying cause. Using "nodes" to represent symptoms and "edges" to signify their direct interactions, symptoms such as insomnia and fatigue are shown to influence each other over time; likewise, disorders such as depression turn out to be not fixed entities but evolving networks, where identifying "bridge symptoms" like concentration difficulties can explain comorbidity between conditions such as depression and anxiety.[44]
Lastly, temporal networks enable a better understanding and controlling of the spread of infectious diseases.[45]Unlike traditional static networks, which assume continuous, unchanging connections, temporal networks account for the precise timing and duration of interactions between individuals. This dynamic approach reveals critical nuances, such as how diseases can spread via time-sensitive pathways that static models miss. Temporal data, such as interactions captured through Bluetooth sensors or in hospital wards, can improve predictions of outbreak speed and extent. Overlooking temporal correlations can lead to significant errors in estimating epidemic dynamics, emphasizing the need for a temporal framework to develop more accurate strategies for disease control.
Content in a complex network can spread via two major methods: conserved spread and non-conserved spread.[46]In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet pouring water through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases, neural excitation, information, rumors, etc.
The question of how to efficiently immunize scale-free networks which represent realistic networks, such as the Internet and social networks, has been studied extensively. One such strategy is to immunize the largest-degree nodes, i.e., targeted (intentional) attacks,[47]since for this case pc{\displaystyle p_{c}} is relatively high and fewer nodes need to be immunized. However, in most realistic networks the global structure is not available and the largest-degree nodes are unknown.
|
https://en.wikipedia.org/wiki/Network_theory
|
The clique game is a positional game where two players alternately pick edges, trying to occupy a complete clique of a given size.
The game is parameterized by two integers n > k. The game-board is the set of all edges of a complete graph on n vertices. The winning-sets are all the cliques on k vertices. There are several variants of this game:
The clique game (in its strong-positional variant) was first presented by Paul Erdős and John Selfridge, who attributed it to Simmons.[1]They called it the Ramsey game, since it is closely related to Ramsey's theorem (see below).
Ramsey's theorem implies that, whenever we 2-color the edges of a complete graph, there is at least one monochromatic clique. Moreover, for every integer k, there exists an integer R2(k,k){\displaystyle R_{2}(k,k)} such that, in every 2-coloring of the edges of a complete graph with n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)}vertices, there is a monochromatic clique of size at least k. This means that, if n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)}, the clique game can never end in a draw. A strategy-stealing argument implies that the first player can always force at least a draw; therefore, if n≥R2(k,k){\displaystyle n\geq R_{2}(k,k)}, Maker wins. By substituting known bounds for the Ramsey number we get that Maker wins whenever k≤log2n2{\displaystyle k\leq {\log _{2}n \over 2}}.
On the other hand, the Erdős–Selfridge theorem[1]implies that Breaker wins whenever k≥2log2n{\displaystyle k\geq {2\log _{2}n}}.
Beck improved these bounds as follows:[2]
Instead of playing on complete graphs, the clique game can also be played on complete hypergraphs of higher orders. For example, in the clique game on triplets, the game-board is the set of triplets of integers 1,...,n (so its size is (n3){\displaystyle {n \choose 3}}), and the winning-sets are all sets of triplets of k integers (so the size of any winning-set in it is (k3){\displaystyle {k \choose 3}}).
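The board and winning-set sizes above are plain binomial coefficients; a quick sketch (function names are mine):

```python
from math import comb

def board_size(n, order=2):
    """Number of cells on the board: all `order`-subsets of {1, ..., n}."""
    return comb(n, order)

def winning_set_size(k, order=2):
    """Number of cells a player must occupy for one winning k-clique."""
    return comb(k, order)

# Ordinary clique game: the board is the 15 edges of K_6, and each
# winning set (a triangle, k = 3) consists of 3 edges.
print(board_size(6), winning_set_size(3))                    # 15 3
# Clique game on triplets: the board is all triples of 1..6; a winning
# set for k = 4 consists of the 4 triples inside a 4-element set.
print(board_size(6, order=3), winning_set_size(4, order=3))  # 20 4
```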
By Ramsey's theorem on triples, if n≥R3(k,k){\displaystyle n\geq R_{3}(k,k)}, Maker wins. The currently known upper bound on R3(k,k){\displaystyle R_{3}(k,k)} is very large, 2k2/6<R3(k,k)<224k−10{\displaystyle 2^{k^{2}/6}<R_{3}(k,k)<2^{2^{4k-10}}}. In contrast, Beck[3]proves that 2k2/6<R3∗(k,k)<k42k3/6{\displaystyle 2^{k^{2}/6}<R_{3}^{*}(k,k)<k^{4}2^{k^{3}/6}}, where R3∗(k,k){\displaystyle R_{3}^{*}(k,k)} is the smallest integer such that Maker has a winning strategy. In particular, if k42k3/6<n{\displaystyle k^{4}2^{k^{3}/6}<n} then the game is Maker's win.
|
https://en.wikipedia.org/wiki/Clique_game
|
GloVe, coined from Global Vectors, is a model for distributed word representation. The model is an unsupervised learning algorithm for obtaining vector representations for words. This is achieved by mapping words into a meaningful space where the distance between words is related to semantic similarity.[1]Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. As a log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely global matrix factorization and local context window methods.
It is developed as an open-source project at Stanford[2]and was launched in 2014. It was designed as a competitor to word2vec, and the original paper noted multiple improvements of GloVe over word2vec. As of 2022[update], both approaches are outdated, and Transformer-based models, such as BERT, which add multiple neural-network attention layers on top of a word embedding model similar to Word2vec, have come to be regarded as the state of the art in NLP.[3]
You shall know a word by the company it keeps (Firth, J. R. 1957:11)[4]
The idea of GloVe is to construct, for each word i{\displaystyle i}, two vectors wi,w~i{\displaystyle w_{i},{\tilde {w}}_{i}}, such that the relative positions of the vectors capture part of the statistical regularities of the word i{\displaystyle i}. The statistical regularity is defined as the co-occurrence probabilities. Words that resemble each other in meaning should also resemble each other in co-occurrence probabilities.
Let the vocabulary be V{\displaystyle V}, the set of all possible words (aka "tokens"). Punctuation is either ignored, or treated as vocabulary, and similarly for capitalization and other typographical details.[1]
If two words occur close to each other, then we say that they occur in the context of each other. For example, if the context length is 3, then we say that in the following sentence
GloVe1, coined2from3Global4Vectors5, is6a7model8for9distributed10word11representation12
the word "model8" is in the context of "word11" but not the context of "representation12".
A word is not in the context of itself, so "model8" is not in the context of the word "model8", although, if a word appears again in the same context, then it does count.
Let Xij{\displaystyle X_{ij}} be the number of times that the word j{\displaystyle j} appears in the context of the word i{\displaystyle i} over the entire corpus. For example, if the corpus is just "I don't think that that is a problem." we have Xthat,that=2{\displaystyle X_{{\text{that}},{\text{that}}}=2} since the first "that" appears in the second one's context, and vice versa.
Let Xi=∑j∈VXij{\displaystyle X_{i}=\sum _{j\in V}X_{ij}} be the number of words in the context of all instances of word i{\displaystyle i}. By counting, we have Xi=2×(context size)×#(occurrences of wordi){\displaystyle X_{i}=2\times ({\text{context size}})\times \#({\text{occurrences of word }}i)} (except for words occurring right at the start and end of the corpus).
Let Pik:=P(k|i):=XikXi{\displaystyle P_{ik}:=P(k|i):={\frac {X_{ik}}{X_{i}}}} be the co-occurrence probability. That is, if one samples a random occurrence of the word i{\displaystyle i} in the entire document, and a random word within its context, that word is k{\displaystyle k} with probability Pik{\displaystyle P_{ik}}. Note that Pik≠Pki{\displaystyle P_{ik}\neq P_{ki}} in general. For example, in a typical modern English corpus, Pado,much{\displaystyle P_{{\text{ado}},{\text{much}}}} is close to one, but Pmuch,ado{\displaystyle P_{{\text{much}},{\text{ado}}}} is close to zero. This is because the word "ado" is almost only used in the context of the archaic phrase "much ado about", but the word "much" occurs in all kinds of contexts.
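The counts described above can be sketched in a few lines (an illustrative, unweighted variant with names of my choosing, not the reference GloVe implementation):

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=3):
    """Count X[i][j]: how often word j appears within `window` positions
    of an occurrence of word i.  A position is never its own context, but
    a repeated word at a different position inside the window does count."""
    X = defaultdict(lambda: defaultdict(int))
    for pos, w in enumerate(tokens):
        lo = max(0, pos - window)
        hi = min(len(tokens), pos + window + 1)
        for ctx in range(lo, hi):
            if ctx != pos:
                X[w][tokens[ctx]] += 1
    return X

corpus = "i don't think that that is a problem".split()
X = cooccurrence_counts(corpus)
print(X["that"]["that"])  # 2: each "that" lies in the other's window
```

This reproduces the "that"/"that" count of 2 from the example corpus above; dividing a row of X by its total gives the co-occurrence probabilities Pik.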
For example, in a 6 billion token corpus, one can tabulate the co-occurrence probabilities of "ice" and "steam" against various probe words. Inspecting such a table, we see that the words "ice" and "steam" are indistinguishable along "water" (often co-occurring with both) and "fashion" (rarely co-occurring with either), but distinguishable along "solid" (co-occurring more with "ice") and "gas" (co-occurring more with "steam").
The idea is to learn two vectors wi,w~i{\displaystyle w_{i},{\tilde {w}}_{i}} for each word i{\displaystyle i}, such that we have a multinomial logistic regression: wiTw~j+bi+b~j≈lnPij{\displaystyle w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}\approx \ln P_{ij}} and the terms bi,b~j{\displaystyle b_{i},{\tilde {b}}_{j}} are unimportant parameters.
This means that if the words i,j{\displaystyle i,j} have similar co-occurrence probabilities (Pik)k∈V≈(Pjk)k∈V{\displaystyle (P_{ik})_{k\in V}\approx (P_{jk})_{k\in V}}, then their vectors should also be similar: wi≈wj{\displaystyle w_{i}\approx w_{j}}.
Naively, logistic regression can be run by minimizing the squared loss: L=∑i,j∈V(wiTw~j+bi+b~j−lnPij)2{\displaystyle L=\sum _{i,j\in V}(w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}-\ln P_{ij})^{2}} However, this would be noisy for rare co-occurrences. To fix the issue, the squared loss is weighted so that the loss is slowly ramped up as the absolute number of co-occurrences Xij{\displaystyle X_{ij}} increases: L=∑i,j∈Vf(Xij)(wiTw~j+bi+b~j−lnPij)2{\displaystyle L=\sum _{i,j\in V}f(X_{ij})(w_{i}^{T}{\tilde {w}}_{j}+b_{i}+{\tilde {b}}_{j}-\ln P_{ij})^{2}} where f(x)={(x/xmax)αifx<xmax1otherwise{\displaystyle f(x)=\left\{{\begin{array}{cc}\left(x/x_{\max }\right)^{\alpha }&{\text{ if }}x<x_{\max }\\1&{\text{ otherwise }}\end{array}}\right.} and xmax,α{\displaystyle x_{\max },\alpha } are hyperparameters. In the original paper, the authors found that xmax=100,α=3/4{\displaystyle x_{\max }=100,\alpha =3/4} seemed to work well in practice.
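The weighting function and the weighted loss translate directly into code; a sketch (the toy counts and random vectors are made up for illustration, and a real implementation would iterate only over stored nonzero counts and optimize the loss by gradient descent):

```python
import numpy as np

def f(x, x_max=100.0, alpha=0.75):
    """GloVe weighting: ramps up with the co-occurrence count and is
    capped at 1 so that very frequent pairs do not dominate the loss."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_loss(w, w_tilde, b, b_tilde, X):
    """Weighted least-squares objective; pairs with X_ij = 0 are skipped,
    since ln 0 is undefined (only nonzero counts contribute)."""
    total = 0.0
    n = X.shape[0]
    for i in range(n):
        for j in range(n):
            if X[i, j] > 0:
                err = w[i] @ w_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
                total += float(f(X[i, j])) * err ** 2
    return total

# Tiny hypothetical instance: 3 words, 2-dimensional vectors, made-up counts.
rng = np.random.default_rng(0)
X = np.array([[0.0, 4.0, 50.0], [4.0, 0.0, 120.0], [50.0, 120.0, 0.0]])
w, w_tilde = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
b, b_tilde = np.zeros(3), np.zeros(3)
print(glove_loss(w, w_tilde, b, b_tilde, X))
```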
Once a model is trained, we have 4 trained parameters for each word: wi,w~i,bi,b~i{\displaystyle w_{i},{\tilde {w}}_{i},b_{i},{\tilde {b}}_{i}}. The parameters bi,b~i{\displaystyle b_{i},{\tilde {b}}_{i}} are irrelevant, and only wi,w~i{\displaystyle w_{i},{\tilde {w}}_{i}} are relevant.
The authors recommended using wi+w~i{\displaystyle w_{i}+{\tilde {w}}_{i}} as the final representation vector for word i{\displaystyle i}, because empirically it worked better than wi{\displaystyle w_{i}} or w~i{\displaystyle {\tilde {w}}_{i}} alone.
GloVe can be used to find relations between words like synonyms, company-product relations, zip codes and cities, etc. However, the unsupervised learning algorithm is not effective in identifying homographs, i.e., words with the same spelling and different meanings. This is because the algorithm calculates a single set of vectors for words with the same morphological structure.[5]The algorithm is also used by the spaCy library to build semantic word embedding features, computing the top list of words that match with distance measures such as cosine similarity and the Euclidean distance.[6]GloVe was also used as the word representation framework for the online and offline systems designed to detect psychological distress in patient interviews.[7]
|
https://en.wikipedia.org/wiki/GloVe
|
Closeness is a basic concept in topology and related areas in mathematics. Intuitively, we say two sets are close if they are arbitrarily near to each other. The concept can be defined naturally in a metric space where a notion of distance between elements of the space is defined, but it can be generalized to topological spaces where we have no concrete way to measure distances.
The closure operator closes a given set by mapping it to a closed set which contains the original set and all points close to it. The concept of closeness is related to that of limit point.
Given a metric space (X,d){\displaystyle (X,d)}, a point p{\displaystyle p} is called close or near to a set A{\displaystyle A} if

d(p, A) = 0,

where the distance between a point and a set is defined as

d(p, A) := inf { d(p, a) : a ∈ A },

where inf stands for infimum. Similarly, a set B{\displaystyle B} is called close to a set A{\displaystyle A} if

d(B, A) = 0,

where

d(B, A) := inf { d(a, b) : a ∈ A, b ∈ B }.
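Numerically, these distances are infima of pairwise distances; a small sketch on the real line (finite truncations stand in for infinite sets, so the infimum becomes a minimum, and the names are mine):

```python
def dist_point_set(p, A):
    """d(p, A) = inf over a in A of |p - a|; for finite A the inf is a min."""
    return min(abs(p - a) for a in A)

def dist_set_set(B, A):
    """d(B, A) = inf over pairs of |b - a|, computed via d(b, A)."""
    return min(dist_point_set(b, A) for b in B)

# A finite truncation of A = {1/n : n >= 1}.  The point 0 is close to the
# full infinite set (d(0, A) = 0) even though 0 is not a member; with a
# truncation the distance is merely small, shrinking as points are added.
A = [1 / n for n in range(1, 10001)]
print(dist_point_set(0.0, A))            # 0.0001 for this truncation
print(dist_point_set(0.5, [0.0, 1.0]))   # 0.5: not close to {0, 1}
```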
Let V{\displaystyle V} be some set. A relation between the points of V{\displaystyle V} and the subsets of V{\displaystyle V} is a closeness relation if it satisfies the following conditions:
Let A{\displaystyle A} and B{\displaystyle B} be two subsets of V{\displaystyle V} and p{\displaystyle p} a point in V{\displaystyle V}.[1]
Topological spaces have a closeness relationship built into them: defining a point p{\displaystyle p} to be close to a subset A{\displaystyle A} if and only if p{\displaystyle p} is in the closure of A{\displaystyle A} satisfies the above conditions. Likewise, given a set with a closeness relation, defining a point p{\displaystyle p} to be in the closure of a subset A{\displaystyle A} if and only if p{\displaystyle p} is close to A{\displaystyle A} satisfies the Kuratowski closure axioms. Thus, defining a closeness relation on a set is exactly equivalent to defining a topology on that set.
Let A{\displaystyle A}, B{\displaystyle B} and C{\displaystyle C} be sets.
The closeness relation between a set and a point can be generalized to any topological space. Given a topological space and a point p{\displaystyle p}, p{\displaystyle p} is called close to a set A{\displaystyle A} if p∈cl(A)=A¯{\displaystyle p\in \operatorname {cl} (A)={\overline {A}}}.
To define a closeness relation between two sets, the topological structure is too weak and we have to use a uniform structure. Given a uniform space, sets A{\displaystyle A} and B{\displaystyle B} are called close to each other if they intersect all entourages; that is, for any entourage U{\displaystyle U}, (A×B)∩U{\displaystyle (A\times B)\cap U} is non-empty.
|
https://en.wikipedia.org/wiki/Closeness_(mathematics)
|
A graphoid is a set of statements of the form, "X is irrelevant to Y given that we know Z", where X, Y and Z are sets of variables. The notions of "irrelevance" and "given that we know" may obtain different interpretations, including probabilistic, relational and correlational ones, depending on the application. These interpretations share common properties that can be captured by paths in graphs (hence the name "graphoid"). The theory of graphoids characterizes these properties in a finite set of axioms that are common to informational irrelevance and its graphical representations.
Judea Pearl and Azaria Paz[1]coined the term "graphoids" after discovering that a set of axioms that govern conditional independence in probability theory is shared by undirected graphs. Variables are represented as nodes in a graph in such a way that variable sets X and Y are independent conditioned on Z in the distribution whenever node set Z separates X from Y in the graph. Axioms for conditional independence in probability were derived earlier by A. Philip Dawid[2]and Wolfgang Spohn.[3]The correspondence between dependence and graphs was later extended to directed acyclic graphs (DAGs)[4][5][6]and to other models of dependency.[1][7]
A dependency model M is a subset of triplets (X,Z,Y) for which the predicate I(X,Z,Y): "X is independent of Y given Z", is true. A graphoid is defined as a dependency model that is closed under the following five axioms:

1. Symmetry: I(X,Z,Y) ⟹ I(Y,Z,X)
2. Decomposition: I(X,Z,Y ∪ W) ⟹ I(X,Z,Y)
3. Weak union: I(X,Z,Y ∪ W) ⟹ I(X,Z ∪ W,Y)
4. Contraction: I(X,Z,Y) and I(X,Z ∪ Y,W) ⟹ I(X,Z,Y ∪ W)
5. Intersection: I(X,Z ∪ W,Y) and I(X,Z ∪ Y,W) ⟹ I(X,Z,Y ∪ W)
A semi-graphoid is a dependency model closed under 1–4. These five axioms together are known as the graphoid axioms.[8]Intuitively, the weak union and contraction properties mean that irrelevant information should not alter the relevance status of other propositions in the system; what was relevant remains relevant and what was irrelevant remains irrelevant.[8]
Conditional independence, defined as

I(X,Z,Y) ⟺ P(x | y,z) = P(x | z) whenever P(y,z) > 0,

is a semi-graphoid which becomes a full graphoid when P is strictly positive.[1][7]
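The probabilistic definition can be checked by brute force on a small joint distribution; a sketch (the dictionary encoding and function name are mine):

```python
from itertools import product

def cond_independent(P, eps=1e-12):
    """Check I(X, Z, Y), i.e. X independent of Y given Z, for a joint
    distribution given as a dict P[(x, z, y)] -> probability:
    require P(x | z, y) == P(x | z) whenever P(z, y) > 0."""
    xs = {x for x, z, y in P}
    zs = {z for x, z, y in P}
    ys = {y for x, z, y in P}
    p_zy = {(z, y): sum(P.get((x, z, y), 0.0) for x in xs)
            for z in zs for y in ys}
    p_z = {z: sum(p_zy[(z, y)] for y in ys) for z in zs}
    p_xz = {(x, z): sum(P.get((x, z, y), 0.0) for y in ys)
            for x in xs for z in zs}
    for x, z, y in product(xs, zs, ys):
        if p_zy[(z, y)] > eps:
            lhs = P.get((x, z, y), 0.0) / p_zy[(z, y)]
            rhs = p_xz[(x, z)] / p_z[z]
            if abs(lhs - rhs) > 1e-9:
                return False
    return True

# X and Y are fair independent bits and Z = X XOR Y: marginally X and Y
# are independent, but given Z each determines the other.
P_xor = {(x, x ^ y, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Degenerate Z (always 0) with X, Y independent bits: independent given Z.
P_ind = {(x, 0, y): 0.25 for x in (0, 1) for y in (0, 1)}

print(cond_independent(P_xor))  # False
print(cond_independent(P_ind))  # True
```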
A dependency model is a correlational graphoid if in some probability function we have

I(x,Z,y) ⟺ ρxy.z = 0,

where ρxy.z{\displaystyle \rho _{xy.z}} is the partial correlation between x and y given set Z.
In other words, the linear estimation error of the variables in X using measurements on Z would not be reduced by adding measurements of the variables in Y, thus making Y irrelevant to the estimation of X. Correlational and probabilistic dependency models coincide for normal distributions.[1][7]
A dependency model is a relational graphoid if it satisfies

I(X,Z,Y) ⟺ for every x, y, z: if (x,z) and (y,z) each occur in the database, then (x,y,z) occurs as well.

In words, the range of values permitted for X is not restricted by the choice of Y, once Z is fixed. Independence statements belonging to this model are similar to embedded multi-valued dependencies (EMVDs) in databases.[1][7]
If there exists an undirected graph G such that

I(X,Z,Y) ⟺ ⟨X,Z,Y⟩G,

where ⟨X,Z,Y⟩G denotes that Z separates X from Y in G, then the graphoid is called graph-induced.
In other words, there exists an undirected graph G such that every independence statement in M is reflected as a vertex separation in G and vice versa. A necessary and sufficient condition for a dependency model to be a graph-induced graphoid is that it satisfies the following axioms: symmetry, decomposition, intersection, strong union and transitivity.
Strong union states that

I(X,Z,Y) ⟹ I(X,Z ∪ W,Y).

Transitivity states that

I(X,Z,Y) ⟹ I(X,Z,γ) or I(γ,Z,Y)

for every single variable γ outside X ∪ Y ∪ Z.
The axioms symmetry, decomposition, intersection, strong union and transitivity constitute a complete characterization of undirected graphs.[9]
A graphoid is termed DAG-induced if there exists a directed acyclic graph D such that I(X,Z,Y)⇔⟨X,Z,Y⟩D{\displaystyle I(X,Z,Y)\Leftrightarrow \langle X,Z,Y\rangle _{D}}, where ⟨X,Z,Y⟩D{\displaystyle \langle X,Z,Y\rangle _{D}} stands for d-separation in D. d-separation (the "d" connotes "directional") extends the notion of vertex separation from undirected graphs to directed acyclic graphs. It permits the reading of conditional independencies from the structure of Bayesian networks. However, conditional independencies in a DAG cannot be completely characterized by a finite set of axioms.[10]
Graph-induced and DAG-induced graphoids are both contained in probabilistic graphoids.[11]This means that for every graph G there exists a probability distribution P such that every conditional independence in P is represented in G, and vice versa. The same is true for DAGs.
However, there are probabilistic distributions that are not graphoids and, moreover, there is no finite axiomatization for probabilistic conditional independence.[12]
Thomas Verma showed that every semi-graphoid has a recursive way of constructing a DAG in which every d-separation is valid.[13]The construction is similar to that used in Bayes networks and goes as follows:
The DAG created by this construction will represent all the conditional independencies that follow from those used in the construction. Furthermore, every d-separation shown in the DAG will be a valid conditional independence in the graphoid used in the construction.
|
https://en.wikipedia.org/wiki/Graphoid
|
In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs.[1][2]For example, if A{\displaystyle A} and B{\displaystyle B} are two events that individually increase the probability of a third event C,{\displaystyle C,} and do not directly affect each other, then initially (when it has not been observed whether or not the event C{\displaystyle C} occurs)[3][4]P(A∣B)=P(A)andP(B∣A)=P(B){\displaystyle \operatorname {P} (A\mid B)=\operatorname {P} (A)\quad {\text{ and }}\quad \operatorname {P} (B\mid A)=\operatorname {P} (B)} (A{\displaystyle A} and B{\displaystyle B} are independent).
But suppose that now C{\displaystyle C} is observed to occur. If event B{\displaystyle B} occurs, then the probability of occurrence of the event A{\displaystyle A} will decrease because its positive relation to C{\displaystyle C} is less necessary as an explanation for the occurrence of C{\displaystyle C} (similarly, event A{\displaystyle A} occurring will decrease the probability of occurrence of B{\displaystyle B}). Hence, the two events A{\displaystyle A} and B{\displaystyle B} are now conditionally negatively dependent on each other, because the probability of occurrence of each is negatively dependent on whether the other occurs. We have[5]P(A∣CandB)<P(A∣C).{\displaystyle \operatorname {P} (A\mid C{\text{ and }}B)<\operatorname {P} (A\mid C).}
Conditional dependence of A and B given C is the logical negation of conditional independence ((A⊥⊥B)∣C){\displaystyle ((A\perp \!\!\!\perp B)\mid C)}.[6]In conditional independence, two events (which may be dependent or not) become independent given the occurrence of a third event.[7]
In essence, probability is influenced by a person's information about the possible occurrence of an event. For example, let the event A{\displaystyle A} be 'I have a new phone'; event B{\displaystyle B} be 'I have a new watch'; and event C{\displaystyle C} be 'I am happy'; and suppose that having either a new phone or a new watch increases the probability of my being happy. Let us assume that the event C{\displaystyle C} has occurred, meaning 'I am happy'. Now if another person sees my new watch, he/she will reason that my likelihood of being happy was increased by my new watch, so there is less need to attribute my happiness to a new phone.
To make the example more numerically specific, suppose that there are four possible states Ω={s1,s2,s3,s4},{\displaystyle \Omega =\left\{s_{1},s_{2},s_{3},s_{4}\right\},} given in the middle four columns of the following table, in which the occurrence of event A{\displaystyle A} is signified by a 1{\displaystyle 1} in row A{\displaystyle A} and its non-occurrence is signified by a 0,{\displaystyle 0,} and likewise for B{\displaystyle B} and C.{\displaystyle C.} That is, A={s2,s4},B={s3,s4},{\displaystyle A=\left\{s_{2},s_{4}\right\},B=\left\{s_{3},s_{4}\right\},} and C={s2,s3,s4}.{\displaystyle C=\left\{s_{2},s_{3},s_{4}\right\}.} The probability of si{\displaystyle s_{i}} is 1/4{\displaystyle 1/4} for every i.{\displaystyle i.}

Event | s1 | s2 | s3 | s4
A     |  0 |  1 |  0 |  1
B     |  0 |  0 |  1 |  1
C     |  0 |  1 |  1 |  1
and so P(A) = P(B) = 1/2 and P(C) = 3/4.
In this example, C{\displaystyle C} occurs if and only if at least one of A,B{\displaystyle A,B} occurs. Unconditionally (that is, without reference to C{\displaystyle C}), A{\displaystyle A} and B{\displaystyle B} are independent of each other because P(A){\displaystyle \operatorname {P} (A)}—the sum of the probabilities associated with a 1{\displaystyle 1} in row A{\displaystyle A}—is 12,{\displaystyle {\tfrac {1}{2}},} while P(A∣B)=P(AandB)/P(B)=1/41/2=12=P(A).{\displaystyle \operatorname {P} (A\mid B)=\operatorname {P} (A{\text{ and }}B)/\operatorname {P} (B)={\tfrac {1/4}{1/2}}={\tfrac {1}{2}}=\operatorname {P} (A).} But conditional on C{\displaystyle C} having occurred (the last three columns in the table), we have P(A∣C)=P(AandC)/P(C)=1/23/4=23{\displaystyle \operatorname {P} (A\mid C)=\operatorname {P} (A{\text{ and }}C)/\operatorname {P} (C)={\tfrac {1/2}{3/4}}={\tfrac {2}{3}}} while P(A∣CandB)=P(AandCandB)/P(CandB)=1/41/2=12<P(A∣C).{\displaystyle \operatorname {P} (A\mid C{\text{ and }}B)=\operatorname {P} (A{\text{ and }}C{\text{ and }}B)/\operatorname {P} (C{\text{ and }}B)={\tfrac {1/4}{1/2}}={\tfrac {1}{2}}<\operatorname {P} (A\mid C).} Since in the presence of C{\displaystyle C} the probability of A{\displaystyle A} is affected by the presence or absence of B{\displaystyle B}, the events A{\displaystyle A} and B{\displaystyle B} are mutually dependent conditional on C.{\displaystyle C.}
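These probabilities can be verified mechanically with exact rational arithmetic; a short sketch (the encoding of events as sets of equally likely states is mine):

```python
from fractions import Fraction

# The four equally likely states, and the events as subsets of them.
states = {"s1", "s2", "s3", "s4"}
A = {"s2", "s4"}
B = {"s3", "s4"}
C = {"s2", "s3", "s4"}

def pr(event):
    """Uniform probability: each state has probability 1/4."""
    return Fraction(len(event), len(states))

def pr_given(event, cond):
    """P(event | cond) under the uniform distribution on the states."""
    return Fraction(len(event & cond), len(cond))

assert pr(A) == pr(B) == Fraction(1, 2)
assert pr_given(A, B) == pr(A)               # unconditionally independent
assert pr_given(A, C) == Fraction(2, 3)
assert pr_given(A, C & B) == Fraction(1, 2)  # strictly less than P(A | C)
print(pr_given(A, C), pr_given(A, C & B))
```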
|
https://en.wikipedia.org/wiki/Conditional_dependence
|
In probability theory, de Finetti's theorem states that exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could then be assigned to this variable. It is named in honor of Bruno de Finetti, and one of its uses is in providing a pragmatic approach to de Finetti's well-known dictum "Probability does not exist".[1]
For the special case of an exchangeable sequence of Bernoulli random variables, it states that such a sequence is a "mixture" of sequences of independent and identically distributed (i.i.d.) Bernoulli random variables.
A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of a finite set of indices. In general, while the variables of the exchangeable sequence are not themselves independent, only exchangeable, there is an underlying family of i.i.d. random variables. That is, there are underlying, generally unobservable, quantities that are i.i.d. – exchangeable sequences are mixtures of i.i.d. sequences.
A Bayesian statistician often seeks the conditional probability distribution of a random quantity given the data. The concept of exchangeability was introduced by de Finetti. De Finetti's theorem explains a mathematical relationship between independence and exchangeability.[2]
An infinite sequence X1, X2, X3, ... of random variables is said to be exchangeable if, for any natural number n, any finite sequence of distinct indices i1, ..., in, and any permutation of the sequence π: {i1, ..., in} → {i1, ..., in}, the tuples (Xi1, ..., Xin) and (Xπ(i1), ..., Xπ(in)) both have the same joint probability distribution.
If an identically distributed sequence is independent, then the sequence is exchangeable; however, the converse is false—there exist exchangeable random variables that are not statistically independent, for example the Pólya urn model.
A random variable X has a Bernoulli distribution if Pr(X = 1) = p and Pr(X = 0) = 1 − p for some p ∈ (0, 1).
De Finetti's theorem states that the probability distribution of any infinite exchangeable sequence of Bernoulli random variables is a "mixture" of the probability distributions of independent and identically distributed sequences of Bernoulli random variables. "Mixture", in this sense, means a weighted average, but this need not mean a finite or countably infinite (i.e., discrete) weighted average: it can be an integral over a measure rather than a sum.
More precisely, suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli-distributed random variables. Then there is some probability measure m on the interval [0, 1] and some random variable Y such that m is the distribution of Y and, conditional on Y = y, the variables X1, X2, X3, ... are i.i.d. Bernoulli with parameter y.
Suppose X1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\ldots } is an infinite exchangeable sequence of Bernoulli random variables. Then X1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\ldots } are conditionally independent and identically distributed given the exchangeable sigma-algebra (i.e., the sigma-algebra consisting of events that are measurable with respect to X1,X2,…{\displaystyle X_{1},X_{2},\ldots } and invariant under finite permutations of the indices).
According to David Spiegelhalter (ref 1), the theorem provides a pragmatic approach to de Finetti's statement that "Probability does not exist". If our view of the probability of a sequence of events is subjective but remains unaffected by the order in which we make our observations, then the sequence can be regarded as exchangeable. De Finetti's theorem then implies that believing the sequence to be exchangeable is mathematically equivalent to acting as if the events are independent and have an objective underlying probability of occurring, with our uncertainty about what that probability is being expressed by a subjective probability distribution function. According to Spiegelhalter: "This is remarkable: it shows that, starting from a specific, but purely subjective, expression of convictions, we should act as if events were driven by objective chances."
As a concrete example, we construct a sequence X1, X2, X3, ... of random variables by "mixing" two i.i.d. sequences as follows.
We assume p = 2/3 with probability 1/2 and p = 9/10 with probability 1/2. Given the event p = 2/3, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 2/3 and X1 = 0 with probability 1 − 2/3. Given the event p = 9/10, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 9/10 and X1 = 0 with probability 1 − 9/10.
This can be interpreted as follows: make two biased coins, one showing "heads" with probability 2/3 and one showing "heads" with probability 9/10. Flip a fair coin once to decide which biased coin to use for all recorded flips. Here "heads" at flip i means Xi = 1.
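The two-coin construction above can be simulated directly. A minimal sketch in Python (the seed and trial count are arbitrary choices, not from the article):

```python
import random

# Simulation of the two-coin mixture described above.
random.seed(0)

def sample_pair():
    # A fair coin picks which biased coin is used for the whole sequence.
    p = 2 / 3 if random.random() < 0.5 else 9 / 10
    return (int(random.random() < p), int(random.random() < p))

N = 200_000
pairs = [sample_pair() for _ in range(N)]
m1 = sum(x for x, _ in pairs) / N       # estimate of E[X1]
m12 = sum(x * y for x, y in pairs) / N  # estimate of E[X1 * X2]
cov = m12 - m1 * m1                     # sample covariance of X1 and X2

# Theory: E[X1] = (2/3 + 9/10)/2 ≈ 0.783 and Cov(X1, X2) ≈ 0.0136 > 0.
print(m1, cov)
```

The positive sample covariance reflects that learning X1 = 1 shifts belief toward the 9/10 coin, raising the predicted probability that X2 = 1.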
The independence asserted here is conditional independence: the Bernoulli random variables in the sequence are conditionally independent given the event that p = 2/3, and conditionally independent given the event that p = 9/10. But they are not unconditionally independent; they are positively correlated.
In view of the strong law of large numbers, we can say that the sample average (X1 + ⋯ + Xn)/n converges almost surely to the randomly chosen parameter p, which here takes the value 2/3 or 9/10, each with probability 1/2.
Rather than concentrating probability 1/2 at each of two points between 0 and 1, the "mixing distribution" can be any probability distribution supported on the interval from 0 to 1; which one it is depends on the joint distribution of the infinite sequence of Bernoulli random variables.
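For instance, taking the mixing distribution to be a Beta distribution makes the number of ones in n flips Beta-binomial. A sketch checking this numerically (the parameters a = 2, b = 3, n = 5 and the seed are arbitrary illustration choices):

```python
import math
import random

# With mixing distribution Beta(a, b), de Finetti's representation gives
# the Beta-binomial law for the number of ones in n flips.
random.seed(1)
a, b, n = 2.0, 3.0, 5

def beta_fn(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def beta_binom_pmf(k):
    # P(X1 + ... + Xn = k) = C(n, k) * B(a + k, b + n - k) / B(a, b)
    return math.comb(n, k) * beta_fn(a + k, b + n - k) / beta_fn(a, b)

# Monte Carlo: draw p ~ Beta(a, b), then n conditionally iid Bernoulli(p) flips.
N = 100_000
counts = [0] * (n + 1)
for _ in range(N):
    p = random.betavariate(a, b)
    counts[sum(int(random.random() < p) for _ in range(n))] += 1

for k in range(n + 1):
    print(k, counts[k] / N, round(beta_binom_pmf(k), 4))
```

The empirical frequencies agree with the closed-form Beta-binomial probabilities, illustrating the mixture representation for a continuous mixing distribution.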
The definition of exchangeability, and the statement of the theorem, also make sense for sequences of finite length X1, …, Xn, but the theorem is not generally true in that case. It is true if the sequence can be extended to an exchangeable sequence that is infinitely long. The simplest example of an exchangeable sequence of Bernoulli random variables that cannot be so extended is the one in which X1 = 1 − X2 and X1 is either 0 or 1, each with probability 1/2. This sequence is exchangeable, but cannot be extended to an exchangeable sequence of length 3, let alone an infinitely long one.
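This non-extendability can be verified by a short computation: an exchangeable law on {0,1}^3 is determined by the probabilities q[k] of seeing exactly k ones, and matching the pair law of (X1, 1 − X1) forces a negative q[0]. A sketch of the check (the helper pair_prob is ours, not from the article):

```python
from fractions import Fraction
from math import comb

# Any exchangeable law on {0,1}^3 is determined by q[k] = P(exactly k ones);
# the (X1, X2) marginal is P(x1, x2) = sum_k q[k] * C(1, k - x1 - x2) / C(3, k).
def pair_prob(q, x1, x2):
    total = Fraction(0)
    for k, qk in enumerate(q):
        rest = k - x1 - x2  # number of ones left for the third coordinate
        if 0 <= rest <= 1:
            total += qk * Fraction(comb(1, rest), comb(3, k))
    return total

# Matching P(X1=1, X2=1) = 0 forces q[2] = q[3] = 0 (all q[k] >= 0), and
# matching P(X1=1, X2=0) = 1/2 then forces q[1] = 3/2, so normalization
# leaves q[0] = 1 - 3/2 = -1/2 < 0: not a probability distribution.
q = [Fraction(-1, 2), Fraction(3, 2), Fraction(0), Fraction(0)]
assert pair_prob(q, 1, 1) == 0
assert pair_prob(q, 1, 0) == Fraction(1, 2)
assert pair_prob(q, 0, 1) == Fraction(1, 2)
print("unique candidate has q[0] =", q[0], "< 0, so no exchangeable extension")
```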
De Finetti's theorem can be expressed as a categorical limit in the category of Markov kernels.[3][4][5]
Let (X, 𝒜) be a standard Borel space, and consider the space of sequences on X, the countable product X^ℕ (equipped with the product sigma-algebra).
Given a finite permutation σ, denote again by σ both the permutation action on X^ℕ and the Markov kernel X^ℕ → X^ℕ induced by it.
In terms of category theory, this gives a diagram with a single object, X^ℕ, and a countable number of arrows, one for each permutation.
Recall now that a probability measure p is equivalently a Markov kernel from the one-point measurable space.
A probability measure p on X^ℕ is exchangeable if and only if, as Markov kernels, σ ∘ p = p for every permutation σ.
More generally, given any standard Borel space Y, one can call a Markov kernel k : Y → X^ℕ exchangeable if σ ∘ k = k for every σ, i.e. if the corresponding diagram commutes, giving a cone.
De Finetti's theorem can now be stated as the fact that the space PX of probability measures over X (the Giry monad) forms a universal (or limit) cone.[4] In more detail, consider the Markov kernel iid_ℕ : PX → X^ℕ constructed, using the Kolmogorov extension theorem, by iid_ℕ(p)(A1 × ⋯ × An × X × X × ⋯) = p(A1) ⋯ p(An) for all measurable subsets A1, …, An of X.
Note that we can interpret this kernel as taking a probability measure p ∈ PX as input and returning an iid sequence on X^ℕ distributed according to p. Since iid sequences are exchangeable, iid_ℕ is an exchangeable kernel in the sense defined above.
The kernel iid_ℕ : PX → X^ℕ does not just form a cone, but a limit cone: given any exchangeable kernel k : Y → X^ℕ, there exists a unique kernel k̃ : Y → PX such that k = iid_ℕ ∘ k̃.
In particular, for any exchangeable probability measure p on X^ℕ, there exists a unique probability measure p̃ on PX (i.e. a probability measure over probability measures) such that p = iid_ℕ ∘ p̃; that is, for all measurable subsets A1, …, An of X, p(A1 × ⋯ × An × X × X × ⋯) = ∫_PX q(A1) ⋯ q(An) p̃(dq).
In other words, p is a mixture of iid measures on X (the ones formed by q in the integral above).
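A small finite-length check of the invariance underlying this statement: a mixture of iid Bernoulli laws on {0,1}^3 is unchanged by every permutation of the coordinates. The particular weights and parameters below are arbitrary illustration choices:

```python
import math
from itertools import permutations, product

# A mixture of two iid Bernoulli laws on {0,1}^3.
mix = [(0.5, 0.2), (0.5, 0.7)]  # (weight, Bernoulli parameter)

def joint(x):
    # p(x) = sum over components of w * prod_i q^{x_i} * (1-q)^{1-x_i}
    return sum(w * math.prod(q if xi else 1 - q for xi in x) for w, q in mix)

# The joint law is invariant under every permutation of the coordinates,
# i.e. the mixture of iid measures is an exchangeable measure.
for x in product((0, 1), repeat=3):
    vals = [joint(tuple(x[i] for i in perm)) for perm in permutations(range(3))]
    assert max(vals) - min(vals) < 1e-12
print("the iid mixture is exchangeable on length-3 sequences")
```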
Versions of de Finetti's theorem for finite exchangeable sequences[6][7] and for Markov exchangeable sequences[8] have been proved by Diaconis and Freedman and by Kerns and Székely.
Two notions of partial exchangeability of arrays, known as separate and joint exchangeability, lead to extensions of de Finetti's theorem for arrays, due to Aldous and Hoover.[9]
The computable de Finetti theorem shows that if an exchangeable sequence of real random variables is given by a computer program, then a program which samples from the mixing measure can be automatically recovered.[10]
In the setting of free probability, there is a noncommutative extension of de Finetti's theorem which characterizes noncommutative sequences invariant under quantum permutations.[11]
Extensions of de Finetti's theorem to quantum states have been found to be useful in quantum information,[12][13][14] in topics like quantum key distribution[15] and entanglement detection.[16] A multivariate extension of de Finetti's theorem can be used to derive Bose–Einstein statistics from the statistics of classical (i.e. independent) particles.[17]
https://en.wikipedia.org/wiki/De_Finetti%27s_theorem